| Column | Type | Values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/5720
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5720/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5720/comments
https://api.github.com/repos/huggingface/transformers/issues/5720/events
https://github.com/huggingface/transformers/issues/5720
656,070,101
MDU6SXNzdWU2NTYwNzAxMDE=
5,720
TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function tf_if_stmt.<locals>.error_checking_body at 0x7f55400e3c80>, found return value of type <class 'tensorflow.python.keras.losses.MeanSquaredError'>, which is not a Tensor.
{ "login": "Misoknisky", "id": 12208899, "node_id": "MDQ6VXNlcjEyMjA4ODk5", "avatar_url": "https://avatars.githubusercontent.com/u/12208899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Misoknisky", "html_url": "https://github.com/Misoknisky", "followers_url": "https://api.github.com/users/Misoknisky/followers", "following_url": "https://api.github.com/users/Misoknisky/following{/other_user}", "gists_url": "https://api.github.com/users/Misoknisky/gists{/gist_id}", "starred_url": "https://api.github.com/users/Misoknisky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Misoknisky/subscriptions", "organizations_url": "https://api.github.com/users/Misoknisky/orgs", "repos_url": "https://api.github.com/users/Misoknisky/repos", "events_url": "https://api.github.com/users/Misoknisky/events{/privacy}", "received_events_url": "https://api.github.com/users/Misoknisky/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I encounter one case: `if object: do something` will trigger this error, while if you use the condition `if object is not None` you will get through it.", "Did you solved it?", "> I encounter one case: `if object: do something` will trigger this error, while if you use the condition `if object is not None` you will get through it.\r\n\r\nThanks, you are the hero." ]
1,594
1,678
1,600
NONE
null
INFO:tensorflow:Error reported to Coordinator: in converted code: /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/trainer_tf.py:511 _forward * per_example_loss, _ = self._run_model(features, labels, True) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/trainer_tf.py:534 _run_model * outputs = self.model(features, labels=labels, training=training)[:2] /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py:778 __call__ outputs = call_fn(cast_inputs, *args, **kwargs) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/modeling_tf_roberta.py:530 call * loss = self.compute_loss(labels, reshaped_logits) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:135 compute_loss * if shape_list(logits)[1] == 1: /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:918 if_stmt basic_symbol_names, composite_symbol_names) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/autograph/operators/control_flow.py:956 tf_if_stmt error_checking_orelse) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py:507 new_func return func(*args, **kwargs) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py:1174 cond return cond_v2.cond_v2(pred, true_fn, false_fn, name) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/ops/cond_v2.py:83 cond_v2 op_return_value=pred) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:983 func_graph_from_py_func expand_composites=True) /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 map_structure structure[0], [func(*x) for x in entries], /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/util/nest.py:568 <listcomp> structure[0], [func(*x) for x in entries], /data0/liuyongkang/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow_core/python/framework/func_graph.py:943 convert (str(python_func), type(x))) TypeError: To be compatible with tf.contrib.eager.defun, Python functions must return zero or more Tensors; in compilation of <function tf_if_stmt.<locals>.error_checking_body at 0x7f55400e3c80>, found return value of type <class 'tensorflow.python.keras.losses.MeanSquaredError'>, which is not a Tensor.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5720/reactions", "total_count": 8, "+1": 8, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5720/timeline
completed
null
null
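The workaround in the comments above (`if object is not None` rather than `if object:`) hinges on how AutoGraph rewrites conditionals inside a `tf.function`: a Tensor-valued condition becomes a `tf.cond`, and every value its branches produce must then be a Tensor, so a branch that touches the `MeanSquaredError` object raises the TypeError quoted in this issue. A minimal sketch of the safe pattern, assuming TF 2.x (illustrative code, not from `transformers`):

```python
import tensorflow as tf

loss_obj = tf.keras.losses.MeanSquaredError()

@tf.function
def forward(logits, labels=None):
    # `labels is not None` is a plain Python bool, so AutoGraph resolves
    # this branch once at tracing time and never builds a tf.cond around
    # the non-Tensor loss object. Writing `if labels:` instead makes the
    # condition Tensor-valued whenever labels is a Tensor, pulling the
    # branch into a tf.cond whose values must all be Tensors.
    if labels is not None:
        return loss_obj(labels, logits)
    return logits

print(forward(tf.ones((2, 3)), tf.zeros((2, 3))))
```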
https://api.github.com/repos/huggingface/transformers/issues/5719
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5719/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5719/comments
https://api.github.com/repos/huggingface/transformers/issues/5719/events
https://github.com/huggingface/transformers/issues/5719
656,057,545
MDU6SXNzdWU2NTYwNTc1NDU=
5,719
`generator` yielded an element that could not be converted to the expected type. The expected type was int32, but the yielded element was None.
{ "login": "Misoknisky", "id": 12208899, "node_id": "MDQ6VXNlcjEyMjA4ODk5", "avatar_url": "https://avatars.githubusercontent.com/u/12208899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Misoknisky", "html_url": "https://github.com/Misoknisky", "followers_url": "https://api.github.com/users/Misoknisky/followers", "following_url": "https://api.github.com/users/Misoknisky/following{/other_user}", "gists_url": "https://api.github.com/users/Misoknisky/gists{/gist_id}", "starred_url": "https://api.github.com/users/Misoknisky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Misoknisky/subscriptions", "organizations_url": "https://api.github.com/users/Misoknisky/orgs", "repos_url": "https://api.github.com/users/Misoknisky/repos", "events_url": "https://api.github.com/users/Misoknisky/events{/privacy}", "received_events_url": "https://api.github.com/users/Misoknisky/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# 🐛 Bug ## Information Model I am using (RoBERTa): Language I am using the model on (English): The problem arises when using: * [ ] the official example scripts: (give details below) inputs = tokenizer( text_a, text_b, add_special_tokens=True, max_length=max_length, padding="max_length", truncation=True, return_overflowing_tokens=True, ) if "num_truncated_tokens" in inputs and inputs["num_truncated_tokens"] > 0: logger.info( "Attention! you are cropping tokens (swag task is ok). " "If you are training ARC and RACE and you are poping question + options," "you need to try to use a bigger max seq length!" ) choices_inputs.append(inputs) label = label_map[example.label] input_ids = [x["input_ids"] for x in choices_inputs] attention_mask = ( [x["attention_mask"] for x in choices_inputs] if "attention_mask" in choices_inputs[0] else None ) token_type_ids = ( [x["token_type_ids"] for x in choices_inputs] if "token_type_ids" in choices_inputs[0] else None ) """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" self.dataset = tf.data.Dataset.from_generator( gen, ( { "example_id": tf.int32, "input_ids": tf.int32, "attention_mask": tf.int32, "token_type_ids": tf.int32, }, tf.int64, ), """""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" The inputs do not include the key "token_type_ids", so a None is returned, but None cannot be converted to tf.int32 in TensorFlow 2, so the code does not work. ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5719/timeline
completed
null
null
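One way around the `None` described in issue 5719 above is to substitute an all-zeros `token_type_ids` before the generator hands examples to `tf.data.Dataset.from_generator`. A self-contained sketch, with a toy `features` list standing in for the real features built in the report:

```python
import tensorflow as tf

# Toy stand-ins for the features built in the report; RoBERTa tokenizers
# return no "token_type_ids", which is how the None slips in.
features = [
    {"input_ids": [0, 31414, 2], "attention_mask": [1, 1, 1], "token_type_ids": None},
    {"input_ids": [0, 232, 2], "attention_mask": [1, 1, 1], "token_type_ids": None},
]

def gen():
    for f in features:
        yield {
            "input_ids": f["input_ids"],
            "attention_mask": f["attention_mask"],
            # Fall back to zeros instead of yielding None, which tf.data
            # cannot convert to tf.int32.
            "token_type_ids": f["token_type_ids"] or [0] * len(f["input_ids"]),
        }

dataset = tf.data.Dataset.from_generator(
    gen,
    {"input_ids": tf.int32, "attention_mask": tf.int32, "token_type_ids": tf.int32},
)
for example in dataset:
    print(example)
```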
https://api.github.com/repos/huggingface/transformers/issues/5718
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5718/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5718/comments
https://api.github.com/repos/huggingface/transformers/issues/5718/events
https://github.com/huggingface/transformers/pull/5718
656,054,810
MDExOlB1bGxSZXF1ZXN0NDQ4NDIwODM5
5,718
[Don't merge - Bert2Bert] Add training scripts and slight changes to Trainer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=h1) Report\n> Merging [#5718](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.46%`.\n> The diff coverage is `10.76%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5718/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5718 +/- ##\n==========================================\n- Coverage 77.79% 77.33% -0.47% \n==========================================\n Files 145 146 +1 \n Lines 25355 25413 +58 \n==========================================\n- Hits 19726 19652 -74 \n- Misses 5629 5761 +132 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/bert\\_encoder\\_decoder\\_summary.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZXJ0X2VuY29kZXJfZGVjb2Rlcl9zdW1tYXJ5LnB5) | `0.00% <0.00%> (ø)` | |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.41% <37.50%> (-0.55%)` | :arrow_down: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.77% <100.00%> (+0.22%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5718/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=footer). Last update [fa5423b...d9f6d07](https://codecov.io/gh/huggingface/transformers/pull/5718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
MEMBER
null
Just a draft to keep track of Bert2Bert summary training.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5718/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5718", "html_url": "https://github.com/huggingface/transformers/pull/5718", "diff_url": "https://github.com/huggingface/transformers/pull/5718.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5718.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5717
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5717/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5717/comments
https://api.github.com/repos/huggingface/transformers/issues/5717/events
https://github.com/huggingface/transformers/pull/5717
656,048,363
MDExOlB1bGxSZXF1ZXN0NDQ4NDE1NTI2
5,717
Update tokenization_t5.py
{ "login": "gauravmishra", "id": 1448938, "node_id": "MDQ6VXNlcjE0NDg5Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1448938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gauravmishra", "html_url": "https://github.com/gauravmishra", "followers_url": "https://api.github.com/users/gauravmishra/followers", "following_url": "https://api.github.com/users/gauravmishra/following{/other_user}", "gists_url": "https://api.github.com/users/gauravmishra/gists{/gist_id}", "starred_url": "https://api.github.com/users/gauravmishra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gauravmishra/subscriptions", "organizations_url": "https://api.github.com/users/gauravmishra/orgs", "repos_url": "https://api.github.com/users/gauravmishra/repos", "events_url": "https://api.github.com/users/gauravmishra/events{/privacy}", "received_events_url": "https://api.github.com/users/gauravmishra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=h1) Report\n> Merging [#5717](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7096e47513127d4f072111a7f58f109842a2b6b0&el=desc) will **increase** coverage by `0.76%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5717/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5717 +/- ##\n==========================================\n+ Coverage 77.22% 77.99% +0.76% \n==========================================\n Files 146 146 \n Lines 26005 26005 \n==========================================\n+ Hits 20083 20283 +200 \n+ Misses 5922 5722 -200 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.82% <0.00%> (-0.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5717/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=footer). Last update [7096e47...0538a54](https://codecov.io/gh/huggingface/transformers/pull/5717?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks!" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Minor doc fix.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5717/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5717", "html_url": "https://github.com/huggingface/transformers/pull/5717", "diff_url": "https://github.com/huggingface/transformers/pull/5717.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5717.patch", "merged_at": 1594699324000 }
https://api.github.com/repos/huggingface/transformers/issues/5716
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5716/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5716/comments
https://api.github.com/repos/huggingface/transformers/issues/5716/events
https://github.com/huggingface/transformers/pull/5716
655,962,156
MDExOlB1bGxSZXF1ZXN0NDQ4MzQ1NjU5
5,716
Add generic text classification example in TF
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=h1) Report\n> Merging [#5716](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7cbf0f722d23440f3342aafc27697b50ead5996b?el=desc) will **increase** coverage by `0.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5716/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5716 +/- ##\n==========================================\n+ Coverage 80.32% 80.43% +0.10% \n==========================================\n Files 174 174 \n Lines 33446 33446 \n==========================================\n+ Hits 26867 26903 +36 \n+ Misses 6579 6543 -36 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.32% <0.00%> (-73.63%)` | :arrow_down: |\n| [src/transformers/activations\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `79.03% <0.00%> (-7.80%)` | :arrow_down: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.87% <0.00%> (-0.36%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.01% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.04% <0.00%> (+0.13%)` | :arrow_up: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/5716/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=footer). Last update [7cbf0f7...c97f433](https://codecov.io/gh/huggingface/transformers/pull/5716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@LysandreJik does it looks ok for you?" ]
1,594
1,600
1,600
CONTRIBUTOR
null
This PR adds a new example script for text classification in TensorFlow with the :hugs:nlp lib. The script allows users to run a text classification task on their own CSV files.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5716/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5716", "html_url": "https://github.com/huggingface/transformers/pull/5716", "diff_url": "https://github.com/huggingface/transformers/pull/5716.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5716.patch", "merged_at": 1600790705000 }
https://api.github.com/repos/huggingface/transformers/issues/5715
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5715/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5715/comments
https://api.github.com/repos/huggingface/transformers/issues/5715/events
https://github.com/huggingface/transformers/issues/5715
655,946,056
MDU6SXNzdWU2NTU5NDYwNTY=
5,715
Extending vocabulary by a large size crashes RobertaTokenizerFast
{ "login": "RudrakshTuwani", "id": 16378764, "node_id": "MDQ6VXNlcjE2Mzc4NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16378764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RudrakshTuwani", "html_url": "https://github.com/RudrakshTuwani", "followers_url": "https://api.github.com/users/RudrakshTuwani/followers", "following_url": "https://api.github.com/users/RudrakshTuwani/following{/other_user}", "gists_url": "https://api.github.com/users/RudrakshTuwani/gists{/gist_id}", "starred_url": "https://api.github.com/users/RudrakshTuwani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RudrakshTuwani/subscriptions", "organizations_url": "https://api.github.com/users/RudrakshTuwani/orgs", "repos_url": "https://api.github.com/users/RudrakshTuwani/repos", "events_url": "https://api.github.com/users/RudrakshTuwani/events{/privacy}", "received_events_url": "https://api.github.com/users/RudrakshTuwani/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I tried using BertTokenizerFast but the problem persists.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Is there any update on the issue? \r\nI have the same problem...", "Also having the same problem while using `tokenizer.add_tokens` with a `unique_list` that holds the words I am trying to add to my tokenizer.", "I also have this issue. ", "same issue" ]
1,594
1,659
1,600
NONE
null
# 🐛 Bug ## Information Model I am using: RoBERTa Language I am using the model on: English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base') tokenizer.add_tokens([str(i) for i in range(60000)]) ``` Here's the stack trace: ``` thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: CompiledTooBig(10485760)', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/added_vocabulary.rs:299:13 stack backtrace: 0: backtrace::backtrace::libunwind::trace at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86 1: backtrace::backtrace::trace_unsynchronized at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66 2: std::sys_common::backtrace::_print_fmt at src/libstd/sys_common/backtrace.rs:78 3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt at src/libstd/sys_common/backtrace.rs:59 4: core::fmt::write at src/libcore/fmt/mod.rs:1069 5: std::io::Write::write_fmt at src/libstd/io/mod.rs:1537 6: std::sys_common::backtrace::_print at src/libstd/sys_common/backtrace.rs:62 7: std::sys_common::backtrace::print at src/libstd/sys_common/backtrace.rs:49 8: std::panicking::default_hook::{{closure}} at src/libstd/panicking.rs:198 9: std::panicking::default_hook at src/libstd/panicking.rs:218 10: std::panicking::rust_panic_with_hook at src/libstd/panicking.rs:477 11: rust_begin_unwind at src/libstd/panicking.rs:385 12: core::panicking::panic_fmt at src/libcore/panicking.rs:89 13: core::option::expect_none_failed at src/libcore/option.rs:1272 14: tokenizers::tokenizer::added_vocabulary::AddedVocabulary::add_tokens 15: tokenizers::tokenizer::Tokenizer::add_tokens 16: tokenizers::tokenizer::__init11742626496714830824::__init11742626496714830824::__wrap 17: method_vectorcall_VARARGS_KEYWORDS at /tmp/build/80754af9/python_1593706424329/work/Objects/descrobject.c:332 18: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 19: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 20: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 21: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 22: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 23: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 24: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 25: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 26: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 27: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435 28: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 29: method_vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/classobject.c:60 30: _PyObject_Vectorcall at 
/tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 31: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 32: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3515 33: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 34: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435 35: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 36: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 37: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 38: gen_send_ex at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222 39: _PyGen_Send at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292 40: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:2053 41: gen_send_ex at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222 42: _PyGen_Send at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292 43: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:2053 44: gen_send_ex at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222 45: _PyGen_Send at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292 46: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:2053 47: gen_send_ex at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:222 48: _PyGen_Send at /tmp/build/80754af9/python_1593706424329/work/Objects/genobject.c:292 49: task_step_impl at /usr/local/src/conda/python-3.8.3/Modules/_asynciomodule.c:2638 50: task_step at /usr/local/src/conda/python-3.8.3/Modules/_asynciomodule.c:2931 51: _PyObject_MakeTpCall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:159 52: context_run at /tmp/build/80754af9/python_1593706424329/work/Python/context.c:634 53: cfunction_vectorcall_FASTCALL_KEYWORDS at /tmp/build/80754af9/python_1593706424329/work/Objects/methodobject.c:437 54: PyVectorcall_Call at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:199 55: do_call_core at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4983 56: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3559 57: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 58: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 59: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 60: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 61: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 62: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 63: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 64: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 65: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 66: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 67: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 68: 
_PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 69: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 70: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 71: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 72: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 73: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 74: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 75: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 76: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 77: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 78: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 79: _PyObject_FastCallDict at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:96 80: _PyObject_Call_Prepend at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:887 81: slot_tp_call at /tmp/build/80754af9/python_1593706424329/work/Objects/typeobject.c:6521 82: _PyObject_MakeTpCall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:159 83: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:125 84: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 85: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3500 86: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 87: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 88: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 89: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 90: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 91: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 92: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435 93: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 94: method_vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/classobject.c:60 95: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 96: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 97: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3515 98: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 99: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435 100: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 101: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 102: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 103: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 104: _PyFunction_Vectorcall at 
/tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435 105: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 106: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 107: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 108: function_code_fastcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:283 109: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:410 110: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 111: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 112: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3486 113: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 114: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435 115: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 116: method_vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/classobject.c:89 117: PyVectorcall_Call at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:199 118: do_call_core at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:5010 119: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3559 120: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 121: _PyFunction_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Objects/call.c:435 122: _PyObject_Vectorcall at /tmp/build/80754af9/python_1593706424329/work/Include/cpython/abstract.h:127 123: call_function at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4963 124: _PyEval_EvalFrameDefault at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:3500 125: _PyEval_EvalCodeWithName at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4298 126: PyEval_EvalCodeEx at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:4327 127: PyEval_EvalCode at /tmp/build/80754af9/python_1593706424329/work/Python/ceval.c:718 128: run_eval_code_obj at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:1125 129: run_mod at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:1147 130: PyRun_FileExFlags at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:1063 131: PyRun_SimpleFileExFlags at /tmp/build/80754af9/python_1593706424329/work/Python/pythonrun.c:428 132: pymain_run_file at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:387 133: pymain_run_python at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:571 134: Py_RunMain at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:650 135: Py_BytesMain at /tmp/build/80754af9/python_1593706424329/work/Modules/main.c:1096 136: __libc_start_main 137: <unknown> at ../sysdeps/x86_64/elf/start.S:103 note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace. fatal runtime error: failed to initiate panic, error 5 Aborted (core dumped ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Tokens should get added normally ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1032-azure-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5715/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5714
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5714/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5714/comments
https://api.github.com/repos/huggingface/transformers/issues/5714/events
https://github.com/huggingface/transformers/issues/5714
655,936,236
MDU6SXNzdWU2NTU5MzYyMzY=
5,714
facebook/bart-large-mnli input format
{ "login": "chrisdoyleIE", "id": 44365591, "node_id": "MDQ6VXNlcjQ0MzY1NTkx", "avatar_url": "https://avatars.githubusercontent.com/u/44365591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chrisdoyleIE", "html_url": "https://github.com/chrisdoyleIE", "followers_url": "https://api.github.com/users/chrisdoyleIE/followers", "following_url": "https://api.github.com/users/chrisdoyleIE/following{/other_user}", "gists_url": "https://api.github.com/users/chrisdoyleIE/gists{/gist_id}", "starred_url": "https://api.github.com/users/chrisdoyleIE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chrisdoyleIE/subscriptions", "organizations_url": "https://api.github.com/users/chrisdoyleIE/orgs", "repos_url": "https://api.github.com/users/chrisdoyleIE/repos", "events_url": "https://api.github.com/users/chrisdoyleIE/events{/privacy}", "received_events_url": "https://api.github.com/users/chrisdoyleIE/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Thanks for the kind words!\r\n\r\n1) definitely needs to be `<s>` instead of `[EOS]`, but the tokenizer should do this for you.\r\n2) I suspect that the tokenizer takes pairs of sentences. I know that @VictorSanh and I have used that model inside the run_glue.py script and gotten reasonable accuracy.\r\nThe logic seems to be here: https://github.com/huggingface/transformers/blob/fcf0652460753f8a81f7576e8abdaa6b3742f00e/src/transformers/data/processors/glue.py#L132\r\n\r\nAlso note that there is a different class to label mapping issue for Roberta, XLM and Bart that datasets/glue.py takes care of:\r\nBefore the fix, the classes are `dict(entailment=0, contradiction=1, neutral=2)`.\r\n\r\n@VictorSanh please confirm that I am not spewing lies. I have not worked with this dataset very much.\r\n", "Thanks for the quick response, as always!\r\n\r\nFirstly, using ```<s>``` or ```</s>``` with the initial code (above) seems to make no noticeable difference to the distribution.\r\n\r\nI have taken a look at the [permalinked](https://github.com/huggingface/transformers/blob/fcf0652460753f8a81f7576e8abdaa6b3742f00e/src/transformers/data/processors/glue.py#L132\r\n) code and have attempted to replicate it below, to no avail. \r\n\r\nWhether the tokenizer is passed a list of two sentences, a tuple of two sentences, or a list of a tuple; it returns a list of two tokenizations - one for each sentence - rather than one overall tokenization with automatic insertion of sep_token.\r\n\r\n```python\r\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoModel\r\nimport torch\r\n\r\nt = AutoTokenizer.from_pretrained(\"facebook/bart-large-mnli\")\r\n\r\ns1 = torch.tensor(t([\"here we are passing ...\",\"... a list to the tokenizer\"], padding=\"max_length\")[\"input_ids\"]).to(\"cuda:0\") # typical use of tokenizer\r\ns2 = torch.tensor(t((\"here we are passing ...\",\"... a tuple to the tokenizer\"), padding=\"max_length\")[\"input_ids\"]).to(\"cuda:0\") # atypical use\r\ns3 = torch.tensor(t([(\"here we are passing ...\",\"... a list containing a tuple to the tokenizer\")], padding=\"max_length\")[\"input_ids\"]).to(\"cuda:0\") # as glue.py (I think!)\r\ns1.size() # torch.Size([2, 1024]) \r\ns2.size() # torch.Size([2, 1024])\r\ns3.size() # torch.Size([2, 1024]) >> none are the expected torch.Size([1, 1024])\r\n```", "I know how to solve the tokenizer problem\r\n```python\r\ntok = AutoTokenizer.from_pretrained('facebook/bart-large')\r\npair1 = (\"this is a sent\", \"another sent\")\r\npair2 = (\"this is a sent about same\", \"another sent\")\r\nassert tok(*pair1, return_tensors='pt').input_ids.shape == (1,10)\r\nassert tok([pair1, pair2], return_tensors='pt',padding=True).input_ids.shape == (2,12)\r\n```", "Great this is good progress, thank you!\r\n\r\nA remaining issue is that the distributions still look off:\r\n 1. One pair is entailed, the other contradicting - and yet their classification class remains the same.\r\n 2. The certainty of the model is possibly very low for identical sentences. 
\r\n\r\n```python\r\nt = AutoTokenizer.from_pretrained(\"facebook/bart-large-mnli\")\r\nmc = AutoModelForSequenceClassification.from_pretrained(\"facebook/bart-large-mnli\").to(\"cuda:0\")\r\n\r\ns1 = (\"this is good\", \"this is good\")\r\ns2 = (\"this is good\", \"this is bad\")\r\n\r\ninputs = torch.tensor([t(*s1, padding=\"max_length\")[\"input_ids\"],\r\n t(*s2, padding=\"max_length\")[\"input_ids\"]]\r\n ).to(\"cuda:0\")\r\n\r\nwith torch.no_grad():\r\n logits = mc(inputs, output_hidden_states=True)[0]\r\nsm = torch.nn.Softmax()\r\nprint(sm(logits)) # tensor([[0.0991, 0.2503, 0.6507], \r\n # [0.1670, 0.2707, 0.5623]], device='cuda:0')\r\n\r\n```", "Interesting. Happy to look into it if there's a bug, but otherwise I think this is just a model issue. (Bug = the prediction is very different from the fairseq model for the same input).", "> Great this is good progress, thank you!\r\n> \r\n> A remaining issue is that the distributions still look off:\r\n> \r\n> 1. One pair is entailed, the other contradicting - and yet their classification class remains the same.\r\n> 2. The certainty of the model is possibly very low for identical sentences.\r\n> \r\n> ```python\r\n> t = AutoTokenizer.from_pretrained(\"facebook/bart-large-mnli\")\r\n> mc = AutoModelForSequenceClassification.from_pretrained(\"facebook/bart-large-mnli\").to(\"cuda:0\")\r\n> \r\n> s1 = (\"this is good\", \"this is good\")\r\n> s2 = (\"this is good\", \"this is bad\")\r\n> \r\n> inputs = torch.tensor([t(*s1, padding=\"max_length\")[\"input_ids\"],\r\n> t(*s2, padding=\"max_length\")[\"input_ids\"]]\r\n> ).to(\"cuda:0\")\r\n> \r\n> with torch.no_grad():\r\n> logits = mc(inputs, output_hidden_states=True)[0]\r\n> sm = torch.nn.Softmax()\r\n> print(sm(logits)) # tensor([[0.0991, 0.2503, 0.6507], \r\n> # [0.1670, 0.2707, 0.5623]], device='cuda:0')\r\n> ```\r\n\r\nHave you tried with longer sentences? MNLI has inputs that are on average longer.\r\nWhile I agree that the second distribution is a bit off, the first one seems fairly spiked to me.\r\nAgree with Sam, might just be the model's behavior (as opposed to a bug)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
Hi folks, First off, I've been using you guys since the early days and think the effort and time that you put in is just phenomenal. Thank you. All the postgrads I know at the Uni of Edinburgh love HuggingFace. My question concerns the usage of the ```facebook/bart-large-mnli``` checkpoint - specifically the input formatting. The paper mentions that inputs are concatenated and appended with an EOS token, which is then passed to the classification head. Something like below perhaps? If this is the case, the probabilities do not seem right, seeing as the first two sentences are the exact same. ```python from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoModel import torch t = AutoTokenizer.from_pretrained("facebook/bart-large-mnli") mc = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli") s1 = torch.tensor(t("i am good. [EOS] i am good.", padding="max_length")["input_ids"]) s2 = torch.tensor(t("i am good. [EOS] i am NOT good.", padding="max_length")["input_ids"]) s3 = torch.tensor(t("i am good. [EOS] i am bad.", padding="max_length")["input_ids"]) with torch.no_grad(): logits = mc(torch.stack((s1,s2,s3)), output_hidden_states=True)[0] sm = torch.nn.Softmax() print(sm(logits)) # tensor([[0.2071, 0.3143, 0.4786], # these sentences are the exact same, so why just 0.47? # [0.6478, 0.1443, 0.2080], # slightly better, but this checkpoint gets ~80% acc on MNLI # [0.3937, 0.2987, 0.3076]]) # This distribution is almost random, but the sentences are the exact opposite ``` I note that ```[EOS]``` is not registered with the tokenizer special tokens. When I use the registers ```<s>``` or ```</s>``` I get similar results
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5714/timeline
completed
null
null
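Condensing the resolution of issue 5714's thread: encode premise and hypothesis as a sentence *pair*, letting the tokenizer insert the separator tokens itself rather than writing `[EOS]` by hand. A sketch of that usage; the class order in the comment below is an assumption to verify against the checkpoint's config:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

# Pairs are passed as tuples; the tokenizer adds the <s> ... </s> markers.
pairs = [("this is good", "this is good"), ("this is good", "this is bad")]
inputs = tok(pairs, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs)[0]  # shape (2, 3): scores over the MNLI classes

# Assumed class order for this checkpoint: contradiction, neutral, entailment.
probs = torch.softmax(logits, dim=-1)
print(probs)
```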
https://api.github.com/repos/huggingface/transformers/issues/5713
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5713/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5713/comments
https://api.github.com/repos/huggingface/transformers/issues/5713/events
https://github.com/huggingface/transformers/issues/5713
655,933,739
MDU6SXNzdWU2NTU5MzM3Mzk=
5,713
ONNX export broken for QA models
{ "login": "vshampor", "id": 31695470, "node_id": "MDQ6VXNlcjMxNjk1NDcw", "avatar_url": "https://avatars.githubusercontent.com/u/31695470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vshampor", "html_url": "https://github.com/vshampor", "followers_url": "https://api.github.com/users/vshampor/followers", "following_url": "https://api.github.com/users/vshampor/following{/other_user}", "gists_url": "https://api.github.com/users/vshampor/gists{/gist_id}", "starred_url": "https://api.github.com/users/vshampor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vshampor/subscriptions", "organizations_url": "https://api.github.com/users/vshampor/orgs", "repos_url": "https://api.github.com/users/vshampor/repos", "events_url": "https://api.github.com/users/vshampor/events{/privacy}", "received_events_url": "https://api.github.com/users/vshampor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Might I also add that breaking regular PyTorch ONNX export like that with custom model output wrappers is IMO a Really Bad Idea (TM). Your `PreTrainedModel`'s are still `torch.nn.Module`'s, and as such should be exportable using standard `torch.onnx.export` APIs. \r\n\r\nTo make matters worse, the proposed `convert_pytorch` API that I discovered in `src/transformers/convert_graph_to_onnx.py` does not work with general argument-forwarding `forward(*args, **kwargs)` wrapper over the specific HF Transformers models. The `convert_pytorch` path's `__code__` shenanigans in their current way fail when exporting the following kinds of models:\r\n```python\r\nclass BertWithWrappedForward(BertForQuestionAnswering):\r\n def forward(*args, **kwargs):\r\n # pre-forward actions here\r\n super().forward(*args, **kwargs)\r\n```\r\n\r\nThe default `torch.onnx.export` path might have worked here, and it would be good to see some kind of support for such models in the `convert_pytorch` API, or at least a fallback scenario.", "@mfuntowicz @julien-c @LysandreJik ", "Just wanted to add that `convert_graph_to_onnx.py` is also broken for text-generation with GPT2 for me.\r\n\r\nRunning `python src/transformers/convert_graph_to_onnx.py --pipeline text-generation --model gpt2 --framework pt output/model.onnx` returns\r\n```\r\nError while converting the model: Only tuples, lists and Variables supported as JIT inputs/outputs. \r\nDictionaries and strings are also accepted but their usage is not recommended. But got unsupported type CausalLMOutputWithPast\r\n```\r\n\r\nAlthough the type is different (now `CausalLMOutputWithPast`), this seems to be the same error happening.", "To help with resolving the issue, this is the merge that seems to be causing the problems here: [5438](https://github.com/huggingface/transformers/pull/5438)\r\n\r\nAlso tried running the previous version before this release (`pip install -Iv transformers=3.0.1`), and now my model is properly converted to onnx.", "Hi! This was an unseen error that appeared when we made the switch from tuples to namedtuples. The fix was to specify to pipelines to continue using tuples instead of namedtuples!\r\n\r\n#6061 should have fixed it, thanks for letting us know!" ]
1,594
1,595
1,595
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): bert-base-uncased Language I am using the model on (English, Chinese ...): - The problem arises when using: * [x] the official example scripts: convert_graph_to_onnx.py * [x] my own modified scripts: any script attempting to do a regular torch.onnx.export on a `PreTrainedModel` The tasks I am working on is: * [x] an official GLUE/SQUaD task: SQuAD * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run: `src/transformers/convert_graph_to_onnx.py --model bert-base-uncased --framework pt --pipeline question-answering /tmp/test_hf_onnx/test_hf.onnx` 2. Observe in console: ``` Error while converting the model: Only tuples, lists and Variables supported as JIT inputs/outputs. Dictionaries and strings are also accepted but their usage is not recommended. But got unsupported type QuestionAnsweringModelOutput ``` ## Expected behavior Successful export of the model to `/tmp/test_hf_onnx/test_hf.onnx` ## Environment info - `transformers` version: current master ce374ba87767d551f720242d5e64bfa976531079 - Platform: Ubuntu 18.04 - Python version: 3.7 - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: -
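One possible workaround sketch for the failure above (not taken from the thread; it assumes the model-output object exposes `start_logits`/`end_logits` attributes, which the report's error message suggests): wrap the model in a thin `torch.nn.Module` that returns plain tensors, so `torch.onnx.export` only ever traces tuple outputs.

```python
import torch
from transformers import BertForQuestionAnswering

# Hypothetical workaround sketch: return plain tensors so JIT tracing during
# torch.onnx.export never sees the QuestionAnsweringModelOutput dataclass.
class TupleOutputWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        out = self.model(input_ids=input_ids, attention_mask=attention_mask)
        return out.start_logits, out.end_logits

model = BertForQuestionAnswering.from_pretrained("bert-base-uncased").eval()
wrapped = TupleOutputWrapper(model)
dummy = torch.ones(1, 8, dtype=torch.long)  # placeholder token ids for tracing
torch.onnx.export(
    wrapped,
    (dummy, dummy),
    "/tmp/test_hf_onnx/test_hf.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["start_logits", "end_logits"],
    opset_version=11,
)
```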
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5713/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5713/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5712
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5712/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5712/comments
https://api.github.com/repos/huggingface/transformers/issues/5712/events
https://github.com/huggingface/transformers/issues/5712
655,877,737
MDU6SXNzdWU2NTU4Nzc3Mzc=
5,712
How to download Pre-trained T5 model?
{ "login": "deepankar27", "id": 3585068, "node_id": "MDQ6VXNlcjM1ODUwNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3585068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/deepankar27", "html_url": "https://github.com/deepankar27", "followers_url": "https://api.github.com/users/deepankar27/followers", "following_url": "https://api.github.com/users/deepankar27/following{/other_user}", "gists_url": "https://api.github.com/users/deepankar27/gists{/gist_id}", "starred_url": "https://api.github.com/users/deepankar27/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/deepankar27/subscriptions", "organizations_url": "https://api.github.com/users/deepankar27/orgs", "repos_url": "https://api.github.com/users/deepankar27/repos", "events_url": "https://api.github.com/users/deepankar27/events{/privacy}", "received_events_url": "https://api.github.com/users/deepankar27/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "```python \r\nfrom transformers import T5ForConditionalGeneration\r\n\r\nt5 = T5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\n```\r\n\r\ndoes not work? ", "@patrickvonplaten Thanks for the pointer, I tried & getting this:\r\n\r\n```\r\n>>> from transformers import T5ForConditionalGeneration\r\n>>> t5 = T5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\nTraceback (most recent call last):\r\n File \"/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 242, in get_config_dict\r\n raise EnvironmentError\r\nOSError\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 604, in from_pretrained\r\n **kwargs,\r\n File \"/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 200, in from_pretrained\r\n config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/root/anaconda3/envs/docsearch/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 251, in get_config_dict\r\n raise EnvironmentError(msg)\r\nOSError: Can't load config for 't5-small'. Make sure that:\r\n\r\n- 't5-small' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 't5-small' is the correct path to a directory containing a config.json file\r\n```\r\n\r\nI am new on huggingface API. I am going to download it & give path of the model. Let's see how it works... But let me know if you can give me some suggestions.", "Hi @deepankar27, do you mind specifying which `transformers` version you're using?", "@LysandreJik Sorry for the late reply, I figured it out, issue was with my config files. Thanks... :)", "Where in the computer memory is the T5-small model being downloaded by using @patrickvonplaten 's code?", "The model is downloaded from AWS where it is saved and then usually saved in a cache folder (usually `~/.cache/torch/transformers` as far as I know)" ]
1,594
1,598
1,598
NONE
null
Hi, I tried looking for ways to download & use the T5-small pre-trained model, but I didn't find any API mentioned in the documentation for downloading it. I did find download links, but I don't know whether it will work if I pass the path of the model. Thanks in advance.
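For the record, a minimal sketch of both options discussed in this thread — downloading by model identifier and loading from a local path (the local path below is a placeholder):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Option 1: download by model identifier; files are cached locally on first
# use (typically under ~/.cache/torch/transformers).
model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

# Option 2: load from a local directory containing config.json,
# pytorch_model.bin, and the sentencepiece vocabulary (spiece.model).
# model = T5ForConditionalGeneration.from_pretrained("/path/to/t5-small")
# tokenizer = T5Tokenizer.from_pretrained("/path/to/t5-small")
```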
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5712/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5711
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5711/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5711/comments
https://api.github.com/repos/huggingface/transformers/issues/5711/events
https://github.com/huggingface/transformers/issues/5711
655,859,911
MDU6SXNzdWU2NTU4NTk5MTE=
5,711
QA Pipeline: Key Error due to predicting a token outside of allowed context
{ "login": "tholor", "id": 1563902, "node_id": "MDQ6VXNlcjE1NjM5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tholor", "html_url": "https://github.com/tholor", "followers_url": "https://api.github.com/users/tholor/followers", "following_url": "https://api.github.com/users/tholor/following{/other_user}", "gists_url": "https://api.github.com/users/tholor/gists{/gist_id}", "starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tholor/subscriptions", "organizations_url": "https://api.github.com/users/tholor/orgs", "repos_url": "https://api.github.com/users/tholor/repos", "events_url": "https://api.github.com/users/tholor/events{/privacy}", "received_events_url": "https://api.github.com/users/tholor/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "Hi @tholor, \r\n\r\nThanks for reporting the issue. \r\n\r\nWe did have an issue where predictions were going out of bounds on QA pipeline and it has been fixed on master: \r\n\r\n```python\r\n>>> nlp = pipeline(\"question-answering\",model=\"distilbert-base-uncased-distilled-squad\",\r\n tokenizer=\"distilbert-base-uncased\",\r\n device=-1)\r\n\r\n>>> nlp(question=\"test finding\", context=\"My name is Carla and I live in Berlin\")\r\n>>> {'score': 0.41493675112724304, 'start': 11, 'end': 16, 'answer': 'Carla'}\r\n```\r\n\r\nIf you are able to checkout from master branch I would be happy to hear back from you to make sure it's working as expected on your side as well.\r\n\r\nLet us know 😃 \r\nMorgan", "Hi @mfuntowicz , \r\nWorks like a charm now. Thanks for the fix!" ]
1,594
1,595
1,595
CONTRIBUTOR
null
# 🐛 Bug ## Information Model: distilbert Language: English The problem arises when using: QA inference via `pipeline` The pipeline throws an exception when the model predicts a token that is not part of the document (e.g. final special token). In the example below, the model predicts token 13 to be the end of the answer span. The context however ends at token 12 and token 13 is the final [SEP] token. Therefore, we get a key error when trying to access `feature.token_to_orig_map[13])` in here: https://github.com/huggingface/transformers/blob/ce374ba87767d551f720242d5e64bfa976531079/src/transformers/pipelines.py#L1370-L1380 ## To reproduce ``` nlp = pipeline("question-answering",model="distilbert-base-uncased-distilled-squad", tokenizer="distilbert-base-uncased", device=-1) nlp(question="test finding", context="My name is Carla and I live in Berlin") ``` results in ``` Traceback (most recent call last): File "/home/mp/deepset/dev/haystack/debug.py", line 16, in <module> nlp(question="test finding", context="My name is Carla and I live in Berlin") File "/home/mp/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in __call__ for s, e, score in zip(starts, ends, scores) File "/home/mp/miniconda3/envs/py37/lib/python3.7/site-packages/transformers/pipelines.py", line 1316, in <listcomp> for s, e, score in zip(starts, ends, scores) KeyError: 13 ``` ## Expected behavior Predictions that are pointing to tokens that are not part of the "context" (here: the last [SEP] token) should be filtered out from possible answers. ## Environment info - `transformers` version: 3.0.2 - Platform: Ubuntu 18.04 - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.1, CPU - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
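A self-contained sketch of the kind of guard the "Expected behavior" section above asks for (names mirror the quoted pipeline snippet, but the function itself is hypothetical, not the fix that landed on master):

```python
def filter_spans_in_context(candidates, token_to_orig_map):
    """Keep only (start, end, score) spans whose endpoints map back to the context."""
    return [
        (s, e, score)
        for s, e, score in candidates
        if s in token_to_orig_map and e in token_to_orig_map
    ]

# Token 13 is the trailing [SEP] in the report above, so a span ending at 13
# is dropped instead of raising a KeyError.
token_to_orig_map = {i: i for i in range(13)}
print(filter_spans_in_context([(3, 4, 0.9), (3, 13, 0.8)], token_to_orig_map))
# [(3, 4, 0.9)]
```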
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5711/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5710
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5710/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5710/comments
https://api.github.com/repos/huggingface/transformers/issues/5710/events
https://github.com/huggingface/transformers/issues/5710
655,835,194
MDU6SXNzdWU2NTU4MzUxOTQ=
5,710
Attention heads attend equally after conversion from tensorflow checkpoint
{ "login": "YMaks", "id": 22409396, "node_id": "MDQ6VXNlcjIyNDA5Mzk2", "avatar_url": "https://avatars.githubusercontent.com/u/22409396?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YMaks", "html_url": "https://github.com/YMaks", "followers_url": "https://api.github.com/users/YMaks/followers", "following_url": "https://api.github.com/users/YMaks/following{/other_user}", "gists_url": "https://api.github.com/users/YMaks/gists{/gist_id}", "starred_url": "https://api.github.com/users/YMaks/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YMaks/subscriptions", "organizations_url": "https://api.github.com/users/YMaks/orgs", "repos_url": "https://api.github.com/users/YMaks/repos", "events_url": "https://api.github.com/users/YMaks/events{/privacy}", "received_events_url": "https://api.github.com/users/YMaks/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# 🐛 Bug ## Information Hi. I'm using notebook https://github.com/jessevig/bertviz/blob/master/head_view_bert.ipynb from https://github.com/jessevig/bertviz for visualizing attention in the Bert model. Given example works fine with pytorch default models. The problem arises when I'm converting pre-trained (with https://github.com/google-research/bert#pre-training-with-bert) on custom dataset Bert-Base Multilingual Uncased model. Pytorch default model visualization: ![Screenshot from 2020-06-30 12-08-01](https://user-images.githubusercontent.com/22409396/87297669-7bc75b80-c511-11ea-8ca1-a24f8d4c31df.png) Example of attention values for default model: [[8.3652e-03, 6.9530e-02, 6.6828e-02, ..., 9.5115e-03, 2.7546e-02, 4.5171e-01], [3.3669e-03, 1.2537e-02, 9.2709e-03, ..., 1.5638e-03, 1.0154e-03, 9.1897e-01], [6.5795e-03, 3.6612e-03, 5.4454e-02, ..., 1.1923e-03, 4.1071e-03, 8.4899e-01], ..., [7.3705e-03, 2.5430e-03, 7.6645e-03, ..., 1.7184e-02, 4.6256e-02, 8.2301e-01], [2.2311e-02, 1.8006e-03, 4.3833e-02, ..., 9.1167e-03, 1.3746e-01, 6.2295e-01], [7.5967e-02, 4.2936e-02, 4.6500e-02, ..., 4.9925e-02, 6.6538e-02, 5.0721e-02]], [[3.5124e-02, 2.2295e-02, 9.2680e-03, ..., 1.1409e-02, 1.7234e-02, 5.5768e-01], [7.0571e-03, 3.7321e-01, 1.7890e-02, ..., 7.6114e-03, 8.8965e-03, 3.6259e-01], [8.8010e-03, 4.9023e-03, 1.4315e-01, ..., 2.2279e-03, 7.9276e-02, 4.3233e-01], ..., Visualization after model conversion from tf to pytorch: ![Screenshot from 2020-06-30 12-07-42](https://user-images.githubusercontent.com/22409396/87297740-97326680-c511-11ea-8a18-6a33160034ab.png) Example of attention values after conversion: [[0.0716, 0.0686, 0.0556, ..., 0.0776, 0.0783, 0.0648], [0.0893, 0.0513, 0.0641, ..., 0.0606, 0.0908, 0.0554], [0.0868, 0.0663, 0.0621, ..., 0.0822, 0.0777, 0.0471], ..., [0.0906, 0.0750, 0.0649, ..., 0.0807, 0.1011, 0.0444], [0.0670, 0.0667, 0.0620, ..., 0.0877, 0.0739, 0.0515], [0.0773, 0.0738, 0.0652, ..., 0.0787, 0.0856, 0.0518]], [[0.0553, 0.0622, 0.0665, ..., 0.0585, 0.0845, 0.0670], [0.0631, 0.0829, 0.0592, ..., 0.0608, 0.0968, 0.0532], [0.0561, 0.0720, 0.0617, ..., 0.0628, 0.1010, 0.0802], ..., Another experiments: 1. loading tensorflow checkpoint directly without conversion - works fine, not equal attentions; 2. loading pytorch model after saving it from loaded tensorflow checkpoint also works fine; 3. tested with Bert-Base Multilingual Uncased model without pre-training (to be sure that pre-training doesn't cause the problem) - got the same results. So, I guess that conversion from tf checkpoint works wrong or I'm converting model in wrong way. Any explanation of described behavior would be appreciated, thank you. ## To reproduce Steps to reproduce the behavior: 1. Example from https://github.com/jessevig/bertviz/blob/master/head_view_bert.ipynb - works fine. 2. Conversion tensorflow checkpoint to pytorch gives incorrect attentions - not work. 
```python from transformers import convert_bert_original_tf_checkpoint_to_pytorch convert_bert_original_tf_checkpoint_to_pytorch.convert_tf_checkpoint_to_pytorch( 'model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index', 'model/multilingual_L-12_H-768_A-12/bert_config.json', 'model/multilingual_L-12_H-768_A-12/pytorch_model.bin') ``` Then I copy converted pytorch_model.bin, config.json, vocab.txt to model/multilingual_L-12_H-768_A-12_pytorch/ ```python from transformers import BertTokenizer, BertForPreTraining pytorch_bert_model = 'model/multilingual_L-12_H-768_A-12_pytorch/' model = BertForPreTraining.from_pretrained(pytorch_bert_model) tokenizer = BertTokenizer.from_pretrained(pytorch_bert_model, do_lower_case=True) ``` No errors, but attention values seems wrong as written above. ``` INFO:transformers.configuration_utils:loading configuration file model/multilingual_L-12_H-768_A-12_pytorch/config.json INFO:transformers.configuration_utils:Model config BertConfig { "attention_probs_dropout_prob": 0.1, "directionality": "bidi", "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "output_attentions": true, "pad_token_id": 0, "pooler_fc_size": 768, "pooler_num_attention_heads": 12, "pooler_num_fc_layers": 3, "pooler_size_per_head": 128, "pooler_type": "first_token_transform", "type_vocab_size": 2, "vocab_size": 105879 } INFO:transformers.modeling_utils:loading weights file model/multilingual_L-12_H-768_A-12_pytorch/pytorch_model.bin INFO:transformers.modeling_utils:All model checkpoint weights were used when initializing BertForPreTraining. INFO:transformers.modeling_utils:All the weights of BertForPreTraining were initialized from the model checkpoint at model/multilingual_L-12_H-768_A-12_pytorch/. If your task is similar to the task the model of the ckeckpoint was trained on, you can already use BertForPreTraining for predictions without further training. INFO:transformers.tokenization_utils_base:loading file model/multilingual_L-12_H-768_A-12_pytorch/vocab.txt ``` Using ```BertModel``` for loading converted model gives the same result. 3. Loading tensorflow checkpoint directly without conversion - works fine ```python from transformers import BertTokenizer, BertForPreTraining bert_config_file = 'model/multilingual_L-12_H-768_A-12_pytorch/config.json' bert_vocab_file = 'model/multilingual_L-12_H-768_A-12_pytorch/vocab.txt' tf_bert_checkpoint ='model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index' model = BertForPreTraining.from_pretrained(tf_bert_checkpoint, from_tf=True, config=bert_config_file) tokenizer = BertTokenizer.from_pretrained(bert_vocab_file, do_lower_case=True) ``` 4. 
Loading pytorch model after saving it from loaded tensorflow checkpoint - works fine ```python from transformers import BertTokenizer, BertForPreTraining bert_config_file = 'model/multilingual_L-12_H-768_A-12_pytorch/config.json' bert_vocab_file = 'model/multilingual_L-12_H-768_A-12_pytorch/vocab.txt' tf_bert_checkpoint = 'model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index' model = BertForPreTraining.from_pretrained(tf_bert_checkpoint, from_tf=True, config=bert_config_file) model.save_pretrained('model/test/') pytorch_bert_checkpoint_path = 'model/test/pytorch_model.bin' model = BertForPreTraining.from_pretrained(pytorch_bert_checkpoint_path, config=bert_config_file) tokenizer = BertTokenizer.from_pretrained(bert_vocab_file, do_lower_case=True) ``` ## Expected behavior Conversion from a tensorflow checkpoint should not affect the model attentions (the values should not come out near-uniform), and the attentions can be visualized correctly. ## Environment info - `transformers` version: 3.0.2 - Platform: Ubuntu 18.04.3 LTS - Python version: 3.7.7 - PyTorch version (GPU): 1.4.0 - Tensorflow version (GPU): 1.15.0 - Using GPU in script: No - Using distributed or parallel set-up in script: No
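A sanity-check sketch to complement the experiments above (paths are the reporter's; the attribute chain assumes a standard `BertForPreTraining`): load the same weights both ways and compare one attention projection matrix directly.

```python
import torch
from transformers import BertForPreTraining

# Load the weights directly from the TF checkpoint and from the converted
# PyTorch checkpoint, then compare one attention weight matrix.
tf_model = BertForPreTraining.from_pretrained(
    "model/multilingual_L-12_H-768_A-12/bert_model.ckpt.index",
    from_tf=True,
    config="model/multilingual_L-12_H-768_A-12_pytorch/config.json",
)
pt_model = BertForPreTraining.from_pretrained(
    "model/multilingual_L-12_H-768_A-12_pytorch/"
)

w_tf = tf_model.bert.encoder.layer[0].attention.self.query.weight
w_pt = pt_model.bert.encoder.layer[0].attention.self.query.weight

# If this prints False, the converted checkpoint diverged from the TF weights,
# which would explain the near-uniform attention maps.
print(torch.allclose(w_tf, w_pt, atol=1e-5))
```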
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5710/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/5710/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5709
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5709/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5709/comments
https://api.github.com/repos/huggingface/transformers/issues/5709/events
https://github.com/huggingface/transformers/issues/5709
655,699,157
MDU6SXNzdWU2NTU2OTkxNTc=
5,709
Run Language Modeling on Colab TPU cores terminates
{ "login": "AliOsm", "id": 7662492, "node_id": "MDQ6VXNlcjc2NjI0OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/7662492?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AliOsm", "html_url": "https://github.com/AliOsm", "followers_url": "https://api.github.com/users/AliOsm/followers", "following_url": "https://api.github.com/users/AliOsm/following{/other_user}", "gists_url": "https://api.github.com/users/AliOsm/gists{/gist_id}", "starred_url": "https://api.github.com/users/AliOsm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AliOsm/subscriptions", "organizations_url": "https://api.github.com/users/AliOsm/orgs", "repos_url": "https://api.github.com/users/AliOsm/repos", "events_url": "https://api.github.com/users/AliOsm/events{/privacy}", "received_events_url": "https://api.github.com/users/AliOsm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For the single core problem, I think it is related to how the trainer prepares the `epoch_loader` object [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L480-L484).", "Update: for the single core problem, removing ` / len(epoch_iterator)` part from this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L519) solves the problem, so I suggest to precompute the value using `len(train_loader)` before this [if statement](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L480) and use it later. The multicores problem is still exists, could it relate to RAM limits in Google Colab?", "This indeed seems to be a problem. I encountered the same issue.\r\nThe hack suggested by @AliOsm seems to work for now", "@julien-c could you please take a look?", "The memory was the problem! In the beginning of the notebook, run the following cell to get the 35GB RAMs runtime instead of the 12GB one:\r\n\r\n```python\r\nimport torch\r\ntorch.tensor([10.]*10000000000)\r\n```\r\n\r\nThen, use this snippet of code to finetune GPT-2 on wikitext-2:\r\n\r\n```bash\r\nVERSION = \"nightly\" #@param [\"1.5\" , \"20200325\", \"nightly\"]\r\n!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py\r\n!python pytorch-xla-env-setup.py --version $VERSION\r\n\r\n!pip install git+https://github.com/huggingface/transformers.git\r\n\r\n!git clone https://github.com/huggingface/transformers.git\r\n\r\n!curl https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip --output wikitext-2-v1.zip\r\n!unzip wikitext-2-v1.zip\r\n!rm wikitext-2-v1.zip\r\n\r\n!python transformers/examples/xla_spawn.py --num_cores 8 \\\r\n\ttransformers/examples/language-modeling/run_language_modeling.py \\\r\n --output_dir=output \\\r\n --model_type=gpt2 \\\r\n --model_name_or_path=gpt2 \\\r\n --do_train \\\r\n --train_data_file=wikitext-2/wiki.train.tokens \\\r\n --do_eval \\\r\n --eval_data_file=wikitext-2/wiki.test.tokens \\\r\n --per_device_train_batch_size 2 \\\r\n --overwrite_output_dir\r\n```\r\n\r\nIt will be helpful to put this in the documentation :3" ]
1,594
1,595
1,595
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): GPT2 Language I am using the model on (English, Chinese ...): English (wikitext-2) The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I'm trying to test `run_language_modeling.py` on GPT2 using all 8 TPU cores. Running on 1 core gives the following error: ```bash Epoch: 0% 0/3 [00:00<?, ?it/s] Iteration: 0it [00:00, ?it/s]Exception in device=TPU:0: 'NoneType' object cannot be interpreted as an integer Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn fn(gindex, *args) File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 292, in _mp_fn main() File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 260, in main trainer.train(model_path=model_path) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 519, in train self.epoch = epoch + (step + 1) / len(epoch_iterator) TypeError: 'NoneType' object cannot be interpreted as an integer ``` While running using all 8 cores gives this one: ```bash /usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown len(cache)) Traceback (most recent call last): File "transformers/examples/xla_spawn.py", line 72, in <module> main() File "transformers/examples/xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 182, in spawn start_method=start_method) File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 158, in start_processes while not context.join(): File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 108, in join (error_index, name) Exception: process 0 terminated with signal SIGKILL ``` I'm running this on a Colab TPU Notebook. ## To reproduce Steps to reproduce the behavior: ```python VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"] !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py !python pytorch-xla-env-setup.py --version $VERSION import torch_xla import torch_xla.core.xla_model as xm !pip install git+https://github.com/huggingface/transformers.git !git clone https://github.com/huggingface/transformers.git !curl https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip --output wikitext-2-v1.zip !unzip wikitext-2-v1.zip !rm wikitext-2-v1.zip !python transformers/examples/xla_spawn.py --num_cores 1 \ transformers/examples/language-modeling/run_language_modeling.py \ --output_dir=output \ --model_type=gpt2 \ --model_name_or_path=gpt2 \ --do_train \ --train_data_file=wikitext-2/wiki.train.tokens \ --do_eval \ --eval_data_file=wikitext-2/wiki.test.tokens \ --per_device_train_batch_size 1 ``` ## Expected behavior Finetuning the model and saves it. 
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0a0+d6149a7 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes and No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5709/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5708
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5708/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5708/comments
https://api.github.com/repos/huggingface/transformers/issues/5708/events
https://github.com/huggingface/transformers/issues/5708
655,667,265
MDU6SXNzdWU2NTU2NjcyNjU=
5,708
For Roberta pretraining, how to enable large batch training using gradient accumulation?
{ "login": "quincyliang", "id": 4104404, "node_id": "MDQ6VXNlcjQxMDQ0MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4104404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/quincyliang", "html_url": "https://github.com/quincyliang", "followers_url": "https://api.github.com/users/quincyliang/followers", "following_url": "https://api.github.com/users/quincyliang/following{/other_user}", "gists_url": "https://api.github.com/users/quincyliang/gists{/gist_id}", "starred_url": "https://api.github.com/users/quincyliang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/quincyliang/subscriptions", "organizations_url": "https://api.github.com/users/quincyliang/orgs", "repos_url": "https://api.github.com/users/quincyliang/repos", "events_url": "https://api.github.com/users/quincyliang/events{/privacy}", "received_events_url": "https://api.github.com/users/quincyliang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! You can use the `--gradient_accumulation_steps=N_STEPS` argument to the `run_language_modeling.py` script for that.\r\n\r\nYou can see all available flags by doing `python run_language_modeling.py --help`" ]
1,594
1,595
1,595
NONE
null
In the example code, where can I enable gradient accumulation for large-batch training? Thanks. https://github.com/huggingface/transformers/tree/master/examples/language-modeling
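A minimal sketch of where that knob lives — the reply below points to the `--gradient_accumulation_steps` flag, and the same field exists on `TrainingArguments`; the values here are placeholders:

```python
from transformers import TrainingArguments

# Sketch: effective per-device batch size = per_device_train_batch_size *
# gradient_accumulation_steps (8 * 4 = 32 here). run_language_modeling.py
# exposes these same fields as the CLI flags --per_device_train_batch_size
# and --gradient_accumulation_steps.
training_args = TrainingArguments(
    output_dir="output",
    do_train=True,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
)
```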
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5708/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5707
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5707/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5707/comments
https://api.github.com/repos/huggingface/transformers/issues/5707/events
https://github.com/huggingface/transformers/issues/5707
655,617,793
MDU6SXNzdWU2NTU2MTc3OTM=
5,707
Span Mask Fill
{ "login": "BigSalmon2", "id": 61605789, "node_id": "MDQ6VXNlcjYxNjA1Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BigSalmon2", "html_url": "https://github.com/BigSalmon2", "followers_url": "https://api.github.com/users/BigSalmon2/followers", "following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}", "gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}", "starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions", "organizations_url": "https://api.github.com/users/BigSalmon2/orgs", "repos_url": "https://api.github.com/users/BigSalmon2/repos", "events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}", "received_events_url": "https://api.github.com/users/BigSalmon2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Maybe you can get inspiration from here: https://github.com/facebookresearch/SpanBERT", "> Maybe you can get inspiration from here: https://github.com/facebookresearch/SpanBERT\r\n\r\n@RudrakshTuwani \r\n\r\nThank you for your response. I've actually tried to implement SpanBERT previously, but my status as a beginner must have barred me from doing it properly. \r\n\r\nWhile SpanBERT has \"the same format as the HuggingFace BERT models\", the outputs were only special tokens or letters. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
I see that Transformers does not support ERNIE, but I am in search of a way to [MASK] phrases. Can somebody point me to an alternative to ERNIE, to example code, or to a way I could do this myself?
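In the meantime, a rough sketch of phrase masking with a stock masked-LM (not ERNIE or the SpanBERT mentioned in the comments above — each masked position is predicted independently here, which is weaker than true span prediction):

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

# Mask a two-token phrase with two consecutive [MASK] tokens.
text = f"The Eiffel Tower is located in {tokenizer.mask_token} {tokenizer.mask_token} ."
inputs = tokenizer(text, return_tensors="pt")
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy, position-independent fill; a span-aware model would condition jointly.
for pos in mask_positions:
    predicted_id = logits[0, pos].argmax(-1).item()
    print(tokenizer.convert_ids_to_tokens(predicted_id))
```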
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5707/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5707/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5706
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5706/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5706/comments
https://api.github.com/repos/huggingface/transformers/issues/5706/events
https://github.com/huggingface/transformers/issues/5706
655,611,191
MDU6SXNzdWU2NTU2MTExOTE=
5,706
can't resume training from a saved checkpoint in run_glue
{ "login": "ohadrozen", "id": 44141885, "node_id": "MDQ6VXNlcjQ0MTQxODg1", "avatar_url": "https://avatars.githubusercontent.com/u/44141885?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ohadrozen", "html_url": "https://github.com/ohadrozen", "followers_url": "https://api.github.com/users/ohadrozen/followers", "following_url": "https://api.github.com/users/ohadrozen/following{/other_user}", "gists_url": "https://api.github.com/users/ohadrozen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ohadrozen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ohadrozen/subscriptions", "organizations_url": "https://api.github.com/users/ohadrozen/orgs", "repos_url": "https://api.github.com/users/ohadrozen/repos", "events_url": "https://api.github.com/users/ohadrozen/events{/privacy}", "received_events_url": "https://api.github.com/users/ohadrozen/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Thanks!", "sorry, in the checkpoint directory I have 5 files (and not 2 as I wrote above):\r\npytorch_model.bin\r\ntraining_args.bin\r\nconfig.json\r\noptimizer.pt\r\nscheduler.pt\r\n", "the vocabulary file is missing in the checkpoint folder. You can use the base model vocabulary by adding more parameter `--tokenizer_name roberta-base`.\r\n\r\ntry this:\r\n```\r\npython run_glue.py --model_name_or_path [/tmp/MNLI/roberta512/checkpoint-27000<<change it!>>] --tokenizer_name roberta-base --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir models/tmp/roberta512_cont\r\n```", "I am having a similar issue when trying to evalutate checkpoints on a test set. If I copy the `vocab.txt` from the final model to the checkpoint folder and evaluate it, the accuracy is significantly lower. The final model had 0.54 and the checkpoints are all in the range from 0.31 to 0.38. That confuses me. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "how to train user custom data ?\r\n" ]
1,594
1,675
1,603
NONE
null
# 🐛 Bug Hi, I'm using run_glue.py to train Roberta model. I ran the training for a few hours, but after 2 epochs it crashed due to low disk space. I now want to resume the training, and for that, I replaced the --model_name_or_path from roberta-base to [my checkpoint dir]. But then I get the following error: "OSError: Model name 'models/tmp/roberta512/checkpoint-27000' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'models/tmp/roberta512/checkpoint-27000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url." in the checkpoint dir there are only two files: pytorch_model.bin and config.json ## Information This is the command line I used to run initially: python run_glue.py --model_name_or_path roberta-base --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir /tmp/MNLI/roberta512 This is the command line I tried to resume training with: python run_glue.py --model_name_or_path /tmp/MNLI/roberta512/checkpoint-27000 --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir models/tmp/roberta512_cont Model I am using (Bert, XLNet ...): Roberta Language I am using the model on (English, Chinese ...): English The problem arises when using: * [v] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [v] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. train the model using the following: python run_glue.py --model_name_or_path roberta-base --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir /tmp/MNLI/roberta512 2. use the following line to resume training from the last saved checkpoint - for that, change the marked directory below with your own directory: python run_glue.py --model_name_or_path [/tmp/MNLI/roberta512/checkpoint-27000<<change it!>>] --task_name MNLI --do_train --do_eval --data_dir /home/nlp/ohadr/PycharmProjects/BERT_classification/glue_data/MNLI --max_seq_length 512 --per_device_train_batch_size 8 --learning_rate 5e-5 --num_train_epochs 3.0 --output_dir models/tmp/roberta512_cont Error message: "OSError: Model name 'models/tmp/roberta512/checkpoint-27000' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed 'models/tmp/roberta512/checkpoint-27000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url." 
## Expected behavior I would expect the model to continue training from that checkpoint ## Environment info - `transformers` version: 3.0.1 - Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-centos-7.8.2003-Core - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: YES - Using distributed or parallel set-up in script?: PARALLEL
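A hedged sketch of the likely cause and a one-off fix (checkpoint folders saved by `Trainer` at this time contain the model weights but not the tokenizer files; the paths mirror the report and are illustrative):

```python
from transformers import RobertaTokenizer

# The checkpoint dir holds pytorch_model.bin/config.json but no
# vocab.json/merges.txt, so the tokenizer cannot be loaded from it.
# Copy the base tokenizer files into the checkpoint dir once:
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.save_pretrained("/tmp/MNLI/roberta512/checkpoint-27000")

# Alternatively, pass --tokenizer_name roberta-base to run_glue.py,
# as suggested in the comments above.
```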
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5706/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5705
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5705/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5705/comments
https://api.github.com/repos/huggingface/transformers/issues/5705/events
https://github.com/huggingface/transformers/issues/5705
655,599,801
MDU6SXNzdWU2NTU1OTk4MDE=
5,705
Any insight into this mystery issue? Using the Keras functional API results in whole weights/layers being deleted for transformer layers.
{ "login": "Santosh-Gupta", "id": 5524261, "node_id": "MDQ6VXNlcjU1MjQyNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Santosh-Gupta", "html_url": "https://github.com/Santosh-Gupta", "followers_url": "https://api.github.com/users/Santosh-Gupta/followers", "following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}", "gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions", "organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs", "repos_url": "https://api.github.com/users/Santosh-Gupta/repos", "events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
CONTRIBUTOR
null
# ❓ Questions & Help This is probably a Keras, tensorflow bug, but just wanted to check-in here in case I overlooked anything. I've discovered a bug, where if transformer layers are copied from a transformer model, and used as individual layers, and then a Keras model is made using the functional API, this seems to result in missing weights/layers from the list of trainable layers. However, the issue goes away if I use model subclassing to make the Keras model. I raised the issue here https://github.com/tensorflow/tensorflow/issues/40638#event-3468314954 but you can directly checkout the issue in this colab notebook https://colab.research.google.com/gist/Santosh-Gupta/273361f873e4daf572fddea691b1f325/missingtrainablevars.ipynb Which copies the layers from one of your transformer models. I also made one where I implemented the transformer layers from (near) scratch, and got the same result https://colab.research.google.com/gist/ravikyram/0191e12b7c6d9afeb80ccc009870b255/untitled52.ipynb This likely seems like a bug in Keras, but may take a while to pinpoint what exactly is causing this, since a transformer layer has many parts. But just wanted to pop this in here, in case there was some insight to the issue, or a way to pinpoint the cause.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5705/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5704
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5704/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5704/comments
https://api.github.com/repos/huggingface/transformers/issues/5704/events
https://github.com/huggingface/transformers/pull/5704
655,596,265
MDExOlB1bGxSZXF1ZXN0NDQ4MDQ2OTQ0
5,704
Make the order of additional special tokens deterministic
{ "login": "gonglinyuan", "id": 9744170, "node_id": "MDQ6VXNlcjk3NDQxNzA=", "avatar_url": "https://avatars.githubusercontent.com/u/9744170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gonglinyuan", "html_url": "https://github.com/gonglinyuan", "followers_url": "https://api.github.com/users/gonglinyuan/followers", "following_url": "https://api.github.com/users/gonglinyuan/following{/other_user}", "gists_url": "https://api.github.com/users/gonglinyuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/gonglinyuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gonglinyuan/subscriptions", "organizations_url": "https://api.github.com/users/gonglinyuan/orgs", "repos_url": "https://api.github.com/users/gonglinyuan/repos", "events_url": "https://api.github.com/users/gonglinyuan/events{/privacy}", "received_events_url": "https://api.github.com/users/gonglinyuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=h1) Report\n> Merging [#5704](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `0.94%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5704/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5704 +/- ##\n==========================================\n- Coverage 78.26% 77.32% -0.95% \n==========================================\n Files 146 146 \n Lines 25998 25998 \n==========================================\n- Hits 20348 20102 -246 \n- Misses 5650 5896 +246 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5704/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=footer). Last update [0befb51...c706cc9](https://codecov.io/gh/huggingface/transformers/pull/5704?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,596
1,596
CONTRIBUTOR
null
In `SpecialTokensMixin.all_special_tokens_extended`, deduplication is performed by `all_toks = list(set(all_toks))`. However, this changes the ordering of additional special tokens, and the resulting order depends on the hash seed of the set data structure. This leads to non-deterministic ids for the additional special tokens added via the `AutoTokenizer.from_pretrained` method. Therefore, I changed this problematic line to `all_toks = list(OrderedDict.fromkeys(all_toks))`. This line deduplicates `all_toks` while still keeping the original ordering.
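A quick standalone illustration of the difference (not part of the patch itself; the token strings are made up):

```python
from collections import OrderedDict

toks = ["<spec_b>", "<spec_a>", "<spec_b>", "<spec_c>"]

# set() deduplicates, but string hashing is randomized per interpreter run
# (PYTHONHASHSEED), so the resulting order — and any ids assigned from it —
# can change between runs:
nondeterministic = list(set(toks))

# OrderedDict.fromkeys deduplicates while preserving first-seen order:
deterministic = list(OrderedDict.fromkeys(toks))
print(deterministic)  # ['<spec_b>', '<spec_a>', '<spec_c>']
```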
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5704/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5704", "html_url": "https://github.com/huggingface/transformers/pull/5704", "diff_url": "https://github.com/huggingface/transformers/pull/5704.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5704.patch", "merged_at": 1596523111000 }
https://api.github.com/repos/huggingface/transformers/issues/5703
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5703/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5703/comments
https://api.github.com/repos/huggingface/transformers/issues/5703/events
https://github.com/huggingface/transformers/pull/5703
655,589,680
MDExOlB1bGxSZXF1ZXN0NDQ4MDQxNTA2
5,703
Make the order of additional special tokens deterministic
{ "login": "gonglinyuan", "id": 9744170, "node_id": "MDQ6VXNlcjk3NDQxNzA=", "avatar_url": "https://avatars.githubusercontent.com/u/9744170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gonglinyuan", "html_url": "https://github.com/gonglinyuan", "followers_url": "https://api.github.com/users/gonglinyuan/followers", "following_url": "https://api.github.com/users/gonglinyuan/following{/other_user}", "gists_url": "https://api.github.com/users/gonglinyuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/gonglinyuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gonglinyuan/subscriptions", "organizations_url": "https://api.github.com/users/gonglinyuan/orgs", "repos_url": "https://api.github.com/users/gonglinyuan/repos", "events_url": "https://api.github.com/users/gonglinyuan/events{/privacy}", "received_events_url": "https://api.github.com/users/gonglinyuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,594
1,594
1,594
CONTRIBUTOR
null
In `SpecialTokensMixin.all_special_tokens_extended`, deduplication is performed by `all_toks = list(set(all_toks))`. However, this changes the ordering of additional special tokens, and the resulting order depends on the hash seed of the set data structure. This leads to non-deterministic ids for the additional special tokens added via the `AutoTokenizer.from_pretrained` method. Therefore, I changed this problematic line to `all_toks = sorted(list(set(all_toks)))`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5703/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5703", "html_url": "https://github.com/huggingface/transformers/pull/5703", "diff_url": "https://github.com/huggingface/transformers/pull/5703.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5703.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5702
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5702/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5702/comments
https://api.github.com/repos/huggingface/transformers/issues/5702/events
https://github.com/huggingface/transformers/issues/5702
655,556,335
MDU6SXNzdWU2NTU1NTYzMzU=
5,702
help:OSError: Model name 'ctrl' was not found in tokenizers model name list (ctrl). We assumed 'ctrl' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.
{ "login": "Heiheiyo", "id": 15426714, "node_id": "MDQ6VXNlcjE1NDI2NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15426714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Heiheiyo", "html_url": "https://github.com/Heiheiyo", "followers_url": "https://api.github.com/users/Heiheiyo/followers", "following_url": "https://api.github.com/users/Heiheiyo/following{/other_user}", "gists_url": "https://api.github.com/users/Heiheiyo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Heiheiyo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Heiheiyo/subscriptions", "organizations_url": "https://api.github.com/users/Heiheiyo/orgs", "repos_url": "https://api.github.com/users/Heiheiyo/repos", "events_url": "https://api.github.com/users/Heiheiyo/events{/privacy}", "received_events_url": "https://api.github.com/users/Heiheiyo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As for me, I'm getting \r\n```\r\nOSError: Can't load weights for 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es'. Make sure that:\r\n\r\n- 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' is a correct model identifier listed on 'https://huggingface.co/models'\r\n\r\n- or 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.\r\n```\r\n\r\nthe line of code is \r\n```\r\nnlp = pipeline(\r\n 'question-answering', \r\n model='mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es',\r\n tokenizer=(\r\n 'mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es', \r\n {\"use_fast\": False}\r\n )\r\n)\r\n```\r\nbut my question is, why does pipeline download the model but can't load/find the weights?? I'm using python:3.7 dockerimage.\r\nI'm using tf version '2.2.0'", "@Heiheiyo, is it possible you have a `ctrl` folder that does not contain the vocab and merges files?\r\nWhen running your command on master I have no issues with CTRL.", "@Kreijstal, on what version of transformers are you running? I copy-pasted your command and it works fine on the `master` branch.", "@LysandreJik Thank you for your answer.I have solved this problem.I downloaded the ctrl model and modified the model file path.\r\n", "@LysandreJik I solved this problem too, I used the dockerfiles I found on this repo to figure out the right libraries that might not have been installed" ]
1,594
1,595
1,595
NONE
null
# ❓ Questions & Help ![image](https://user-images.githubusercontent.com/15426714/87268409-dd30ff80-c4fc-11ea-9e34-4d1c06dc1d23.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5702/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5701
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5701/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5701/comments
https://api.github.com/repos/huggingface/transformers/issues/5701/events
https://github.com/huggingface/transformers/issues/5701
655,508,056
MDU6SXNzdWU2NTU1MDgwNTY=
5,701
How to generate sentences from Transformer's sentence embeddings?
{ "login": "willanxywc", "id": 24306827, "node_id": "MDQ6VXNlcjI0MzA2ODI3", "avatar_url": "https://avatars.githubusercontent.com/u/24306827?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willanxywc", "html_url": "https://github.com/willanxywc", "followers_url": "https://api.github.com/users/willanxywc/followers", "following_url": "https://api.github.com/users/willanxywc/following{/other_user}", "gists_url": "https://api.github.com/users/willanxywc/gists{/gist_id}", "starred_url": "https://api.github.com/users/willanxywc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willanxywc/subscriptions", "organizations_url": "https://api.github.com/users/willanxywc/orgs", "repos_url": "https://api.github.com/users/willanxywc/repos", "events_url": "https://api.github.com/users/willanxywc/events{/privacy}", "received_events_url": "https://api.github.com/users/willanxywc/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Sorry, I'm not 100% sure I get the question. Could you specify what you mean by `sentence embeddings` exactly, and the output you would like, with some code? :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
Is it possible to use the pre-trained transformer models to generate sentences from sentence embeddings? I know I can get the continuous representations of a sentence with, for example, BertModel or GPT2Model. But can I reconstruct the sentence directly from the sentence representations produced by BertModel, using Hugging Face Transformers, especially with the pre-trained models? I mean sentence embeddings as input, a readable sentence as output.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5701/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5701/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5700
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5700/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5700/comments
https://api.github.com/repos/huggingface/transformers/issues/5700/events
https://github.com/huggingface/transformers/issues/5700
655,492,606
MDU6SXNzdWU2NTU0OTI2MDY=
5,700
How to visualize the output of the encoder using T-sne plots?
{ "login": "jlim13", "id": 36393441, "node_id": "MDQ6VXNlcjM2MzkzNDQx", "avatar_url": "https://avatars.githubusercontent.com/u/36393441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlim13", "html_url": "https://github.com/jlim13", "followers_url": "https://api.github.com/users/jlim13/followers", "following_url": "https://api.github.com/users/jlim13/following{/other_user}", "gists_url": "https://api.github.com/users/jlim13/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlim13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlim13/subscriptions", "organizations_url": "https://api.github.com/users/jlim13/orgs", "repos_url": "https://api.github.com/users/jlim13/repos", "events_url": "https://api.github.com/users/jlim13/events{/privacy}", "received_events_url": "https://api.github.com/users/jlim13/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# ❓ Questions & Help ## Details I was thinking of just mean or max pooling (along the sequence length dimension) the output of the encoder and visualizing that, but I was wondering if there were better ways of doing so.
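A minimal sketch of the mean-pooling idea the question describes, assuming `bert-base-uncased` and a toy sentence list (the model name and sentences are placeholders; scikit-learn and matplotlib are assumed available):

```python
# Mean-pool the encoder's last hidden state per sentence, then project with t-SNE.
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["I want to buy the car.", "The car is cheap.", "It rains today."]
enc = tokenizer(sentences, return_tensors="pt", padding=True)
with torch.no_grad():
    last_hidden = model(**enc)[0]                      # (batch, seq_len, hidden)

mask = enc["attention_mask"].unsqueeze(-1).float()     # ignore padding positions
emb = (last_hidden * mask).sum(1) / mask.sum(1)        # mean pooling over tokens

points = TSNE(n_components=2, perplexity=2).fit_transform(emb.numpy())
plt.scatter(points[:, 0], points[:, 1])
plt.show()
```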
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5700/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5699
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5699/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5699/comments
https://api.github.com/repos/huggingface/transformers/issues/5699/events
https://github.com/huggingface/transformers/issues/5699
655,457,789
MDU6SXNzdWU2NTU0NTc3ODk=
5,699
Add beta 1 and beta 2 option in `TrainingArguments` for `AdamW` optimizer.
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not sure we would like to add this to the `TrainingArguments`. If we add all possible params this could quickly explode. Note that you can instantiate your own optimizer and pass it here: https://github.com/huggingface/transformers/blob/7096e47513127d4f072111a7f58f109842a2b6b0/src/transformers/trainer.py#L158\r\n\r\nAlso pinging @julien-c here.", "Well - my argument for this change is that `adam_epsilon` can already be set, so beta 1 and 2 should also be settable. Especially because the RoBERTa paper suggests a different setting than the default. \r\n\r\nA 2nd argument is that it is not that easy to instantiate your own optimizer, because there is a dependency on `model`. See here: https://github.com/huggingface/transformers/blob/7096e47513127d4f072111a7f58f109842a2b6b0/src/transformers/trainer.py#L326-L335\r\n", "Closing this in favor of #5592 " ]
1,594
1,594
1,594
CONTRIBUTOR
null
I want to set the Adam optimizer's beta 2 to 0.98 because I want to train a new RoBERTa LM; the paper says that it improves stability. The default is 0.999 and it cannot be set in `TrainingArguments`. Could you please add the option to specify beta 1 and beta 2 for AdamW in the `TrainingArguments`? `adam_epsilon` can already be specified. If you want, I can provide a PR. What do you think?
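A hedged sketch of the workaround pointed at in the comments: build the `AdamW` optimizer yourself with `betas=(0.9, 0.98)` and hand it to `Trainer` via its `optimizers` argument. The learning rate, step counts, and `train_dataset` are illustrative placeholders, not values from the issue:

```python
# Sketch: custom AdamW with RoBERTa-paper-style beta2, passed to Trainer.
from transformers import AdamW, RobertaForMaskedLM, Trainer, TrainingArguments, get_linear_schedule_with_warmup

model = RobertaForMaskedLM.from_pretrained("roberta-base")  # stand-in; a from-scratch config also works

no_decay = ["bias", "LayerNorm.weight"]
grouped_params = [
    {"params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
     "weight_decay": 0.01},
    {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
     "weight_decay": 0.0},
]
optimizer = AdamW(grouped_params, lr=6e-4, betas=(0.9, 0.98), eps=1e-6)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=10000)

training_args = TrainingArguments(output_dir="out")
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,        # assumed to be built elsewhere
    optimizers=(optimizer, scheduler),  # bypasses the Trainer's default AdamW
)
```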
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5699/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5698
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5698/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5698/comments
https://api.github.com/repos/huggingface/transformers/issues/5698/events
https://github.com/huggingface/transformers/pull/5698
655,402,697
MDExOlB1bGxSZXF1ZXN0NDQ3OTAxNzE4
5,698
Create README.md
{ "login": "dartrevan", "id": 24587263, "node_id": "MDQ6VXNlcjI0NTg3MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/24587263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dartrevan", "html_url": "https://github.com/dartrevan", "followers_url": "https://api.github.com/users/dartrevan/followers", "following_url": "https://api.github.com/users/dartrevan/following{/other_user}", "gists_url": "https://api.github.com/users/dartrevan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dartrevan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dartrevan/subscriptions", "organizations_url": "https://api.github.com/users/dartrevan/orgs", "repos_url": "https://api.github.com/users/dartrevan/repos", "events_url": "https://api.github.com/users/dartrevan/events{/privacy}", "received_events_url": "https://api.github.com/users/dartrevan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=h1) Report\n> Merging [#5698](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `0.25%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5698/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5698 +/- ##\n==========================================\n- Coverage 78.26% 78.01% -0.26% \n==========================================\n Files 146 146 \n Lines 25998 25998 \n==========================================\n- Hits 20348 20283 -65 \n- Misses 5650 5715 +65 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (ø)` | |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5698/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=footer). Last update [0befb51...3e4d8eb](https://codecov.io/gh/huggingface/transformers/pull/5698?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5698/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5698", "html_url": "https://github.com/huggingface/transformers/pull/5698", "diff_url": "https://github.com/huggingface/transformers/pull/5698.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5698.patch", "merged_at": 1594738314000 }
https://api.github.com/repos/huggingface/transformers/issues/5697
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5697/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5697/comments
https://api.github.com/repos/huggingface/transformers/issues/5697/events
https://github.com/huggingface/transformers/issues/5697
655,368,803
MDU6SXNzdWU2NTUzNjg4MDM=
5,697
How can I evaluate on GLUE without fine-tuning BERT, just training the remaining layers?
{ "login": "caolusg", "id": 8014065, "node_id": "MDQ6VXNlcjgwMTQwNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/8014065?v=4", "gravatar_id": "", "url": "https://api.github.com/users/caolusg", "html_url": "https://github.com/caolusg", "followers_url": "https://api.github.com/users/caolusg/followers", "following_url": "https://api.github.com/users/caolusg/following{/other_user}", "gists_url": "https://api.github.com/users/caolusg/gists{/gist_id}", "starred_url": "https://api.github.com/users/caolusg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/caolusg/subscriptions", "organizations_url": "https://api.github.com/users/caolusg/orgs", "repos_url": "https://api.github.com/users/caolusg/repos", "events_url": "https://api.github.com/users/caolusg/events{/privacy}", "received_events_url": "https://api.github.com/users/caolusg/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# ❓ Questions & Help ## Details
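The thread contains no answer; one common approach (a sketch of standard practice, not something from the issue) is to freeze the encoder so only the task head receives gradients:

```python
# Freeze the BERT encoder; only the classification head trains.
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
for param in model.bert.parameters():
    param.requires_grad = False  # BERT weights stay fixed; model.classifier still updates
```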
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5697/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5696
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5696/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5696/comments
https://api.github.com/repos/huggingface/transformers/issues/5696/events
https://github.com/huggingface/transformers/pull/5696
655,366,014
MDExOlB1bGxSZXF1ZXN0NDQ3ODc1OTY4
5,696
Update README.md
{ "login": "bashartalafha", "id": 26685171, "node_id": "MDQ6VXNlcjI2Njg1MTcx", "avatar_url": "https://avatars.githubusercontent.com/u/26685171?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bashartalafha", "html_url": "https://github.com/bashartalafha", "followers_url": "https://api.github.com/users/bashartalafha/followers", "following_url": "https://api.github.com/users/bashartalafha/following{/other_user}", "gists_url": "https://api.github.com/users/bashartalafha/gists{/gist_id}", "starred_url": "https://api.github.com/users/bashartalafha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bashartalafha/subscriptions", "organizations_url": "https://api.github.com/users/bashartalafha/orgs", "repos_url": "https://api.github.com/users/bashartalafha/repos", "events_url": "https://api.github.com/users/bashartalafha/events{/privacy}", "received_events_url": "https://api.github.com/users/bashartalafha/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,594
1,594
1,594
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5696/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5696/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5696", "html_url": "https://github.com/huggingface/transformers/pull/5696", "diff_url": "https://github.com/huggingface/transformers/pull/5696.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5696.patch", "merged_at": 1594745518000 }
https://api.github.com/repos/huggingface/transformers/issues/5695
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5695/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5695/comments
https://api.github.com/repos/huggingface/transformers/issues/5695/events
https://github.com/huggingface/transformers/pull/5695
655,364,501
MDExOlB1bGxSZXF1ZXN0NDQ3ODc0ODM0
5,695
Update README.md
{ "login": "bashartalafha", "id": 26685171, "node_id": "MDQ6VXNlcjI2Njg1MTcx", "avatar_url": "https://avatars.githubusercontent.com/u/26685171?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bashartalafha", "html_url": "https://github.com/bashartalafha", "followers_url": "https://api.github.com/users/bashartalafha/followers", "following_url": "https://api.github.com/users/bashartalafha/following{/other_user}", "gists_url": "https://api.github.com/users/bashartalafha/gists{/gist_id}", "starred_url": "https://api.github.com/users/bashartalafha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bashartalafha/subscriptions", "organizations_url": "https://api.github.com/users/bashartalafha/orgs", "repos_url": "https://api.github.com/users/bashartalafha/repos", "events_url": "https://api.github.com/users/bashartalafha/events{/privacy}", "received_events_url": "https://api.github.com/users/bashartalafha/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,594
1,594
1,594
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5695/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5695", "html_url": "https://github.com/huggingface/transformers/pull/5695", "diff_url": "https://github.com/huggingface/transformers/pull/5695.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5695.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5694
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5694/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5694/comments
https://api.github.com/repos/huggingface/transformers/issues/5694/events
https://github.com/huggingface/transformers/pull/5694
655,354,245
MDExOlB1bGxSZXF1ZXN0NDQ3ODY3MzI1
5,694
[Don't merge] Run make style on templates
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems to be an isort version problem" ]
1,594
1,651
1,594
CONTRIBUTOR
null
When running `make style` locally, these files get modified. I noticed that `black` and `isort` seem to conflict on these files. Is there a solution? ``` > make style black --line-length 119 --target-version py35 examples templates tests src utils reformatted /Users/canwenxu/transformers/src/transformers/__init__.py reformatted /Users/canwenxu/transformers/templates/adding_a_new_example_script/run_xxx.py reformatted /Users/canwenxu/transformers/templates/adding_a_new_example_script/utils_xxx.py All done! ✨ 🍰 ✨ 3 files reformatted, 339 files left unchanged. isort --recursive examples templates tests src utils Fixing /Users/canwenxu/transformers/templates/adding_a_new_example_script/run_xxx.py Fixing /Users/canwenxu/transformers/templates/adding_a_new_example_script/utils_xxx.py Fixing /Users/canwenxu/transformers/src/transformers/__init__.py ```
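As the comment above suggests, the likely cause is an isort version mismatch — isort 5 had just been released in July 2020 and sorts imports differently from isort 4. If that is the cause, reinstalling the repo's pinned dev dependencies (e.g. `pip install -e ".[dev]"` — the exact extra name is an assumption about the setup at the time) and re-running `make style` should make `black` and `isort` agree again.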
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5694/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5694/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5694", "html_url": "https://github.com/huggingface/transformers/pull/5694", "diff_url": "https://github.com/huggingface/transformers/pull/5694.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5694.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5693
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5693/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5693/comments
https://api.github.com/repos/huggingface/transformers/issues/5693/events
https://github.com/huggingface/transformers/issues/5693
655,346,586
MDU6SXNzdWU2NTUzNDY1ODY=
5,693
__init__() missing 1 required positional argument: 'logits'
{ "login": "vanh17", "id": 10501538, "node_id": "MDQ6VXNlcjEwNTAxNTM4", "avatar_url": "https://avatars.githubusercontent.com/u/10501538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vanh17", "html_url": "https://github.com/vanh17", "followers_url": "https://api.github.com/users/vanh17/followers", "following_url": "https://api.github.com/users/vanh17/following{/other_user}", "gists_url": "https://api.github.com/users/vanh17/gists{/gist_id}", "starred_url": "https://api.github.com/users/vanh17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vanh17/subscriptions", "organizations_url": "https://api.github.com/users/vanh17/orgs", "repos_url": "https://api.github.com/users/vanh17/repos", "events_url": "https://api.github.com/users/vanh17/events{/privacy}", "received_events_url": "https://api.github.com/users/vanh17/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "i faced the same error yesterday. Installing version 3.0.1 fixed the issue for me.", "Installing one or two older versions can fix this. However, I will leave it here so that they know this bug exists in their newest version.", "It appears that the CircleCI doesn't run gpu tests (or just multiple gpu?), all sub-tests `test_multigpu_data_parallel_forward` fail., e.g.: `tests/test_modeling_flaubert.py::FlaubertModelTest::test_multigpu_data_parallel_forward`.\r\n```\r\npytest --disable-warnings -n 1 tests/test_modeling_bert.py::BertModelTest::test_multigpu_data_parallel_forward\r\n====================================================================== test session starts =======================================================================\r\nplatform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1\r\nrootdir: /mnt/nvme1/code/huggingface/transformers-tests-1\r\nplugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0\r\ngw0 [1]\r\nF [100%]\r\n============================================================================ FAILURES ============================================================================\r\n_______________________________________________________ BertModelTest.test_multigpu_data_parallel_forward ________________________________________________________\r\n[gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python\r\n\r\nself = <tests.test_modeling_bert.BertModelTest testMethod=test_multigpu_data_parallel_forward>\r\n\r\n    @require_multigpu\r\n    def test_multigpu_data_parallel_forward(self):\r\n        config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n        # some params shouldn't be scattered by nn.DataParallel\r\n        # so just remove them if they are present.\r\n        blacklist_non_batched_params = [\"head_mask\"]\r\n        for k in blacklist_non_batched_params:\r\n            inputs_dict.pop(k, None)\r\n \r\n        # move input tensors to cuda:O\r\n        for k, v in inputs_dict.items():\r\n            if torch.is_tensor(v):\r\n                inputs_dict[k] = v.to(0)\r\n \r\n        for model_class in self.all_model_classes:\r\n            model = model_class(config=config)\r\n            model.to(0)\r\n            model.eval()\r\n \r\n            # Wrap model in nn.DataParallel\r\n            model = torch.nn.DataParallel(model)\r\n            with torch.no_grad():\r\n>               _ = model(**self._prepare_for_class(inputs_dict, model_class))\r\n\r\ntests/test_modeling_common.py:807:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__\r\n    result = self.forward(*input, **kwargs)\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:156: in forward\r\n    return self.gather(outputs, self.output_device)\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:168: in gather\r\n    return gather(outputs, output_device, dim=self.dim)\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:68: in gather\r\n    res = gather_map(outputs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\noutputs = [BaseModelOutputWithPooling(last_hidden_state=tensor([[[ 1.0115e+00, 1.4145e+00, -5.7332e-01, ..., -4.6471e-01,\r\n ... 0.1111, -0.0592, -0.1177, 0.0074, -0.0155, -0.1015]],\r\n device='cuda:1'), hidden_states=None, attentions=None)]\r\n\r\n    def gather_map(outputs):\r\n        out = outputs[0]\r\n        if isinstance(out, torch.Tensor):\r\n            return Gather.apply(target_device, dim, *outputs)\r\n        if out is None:\r\n            return None\r\n        if isinstance(out, dict):\r\n            if not all((len(out) == len(d) for d in outputs)):\r\n                raise ValueError('All dicts must have the same number of keys')\r\n            return type(out)(((k, gather_map([d[k] for d in outputs]))\r\n                              for k in out))\r\n>       return type(out)(map(gather_map, zip(*outputs)))\r\nE       TypeError: __init__() missing 1 required positional argument: 'pooler_output'\r\n\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:63: TypeError\r\n==================================================================== short test summary info =====================================================================\r\nFAILED tests/test_modeling_bert.py::BertModelTest::test_multigpu_data_parallel_forward - TypeError: __init__() missing 1 required positional argument: 'pooler_...\r\n================================================================= 1 failed, 4 warnings in 5.44s ==================================================================\r\n```\r\n\r\n```\r\npytest --disable-warnings -n 1 tests/test_modeling_flaubert.py::FlaubertModelTest::test_multigpu_data_parallel_forward\r\n====================================================================== test session starts =======================================================================\r\nplatform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1\r\nrootdir: /mnt/nvme1/code/huggingface/transformers-tests-1\r\nplugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0\r\ngw0 [1]\r\nF [100%]\r\n============================================================================ FAILURES ============================================================================\r\n_____________________________________________________ FlaubertModelTest.test_multigpu_data_parallel_forward ______________________________________________________\r\n[gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python\r\n\r\nself = <tests.test_modeling_flaubert.FlaubertModelTest testMethod=test_multigpu_data_parallel_forward>\r\n\r\n    @require_multigpu\r\n    def test_multigpu_data_parallel_forward(self):\r\n        config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n        # some params shouldn't be scattered by nn.DataParallel\r\n        # so just remove them if they are present.\r\n        blacklist_non_batched_params = [\"head_mask\"]\r\n        for k in blacklist_non_batched_params:\r\n            inputs_dict.pop(k, None)\r\n \r\n        # move input tensors to cuda:O\r\n        for k, v in inputs_dict.items():\r\n            if torch.is_tensor(v):\r\n                inputs_dict[k] = v.to(0)\r\n \r\n        for model_class in self.all_model_classes:\r\n            model = model_class(config=config)\r\n            model.to(0)\r\n            model.eval()\r\n \r\n            # Wrap model in nn.DataParallel\r\n            model = torch.nn.DataParallel(model)\r\n            with torch.no_grad():\r\n>               _ = model(**self._prepare_for_class(inputs_dict, model_class))\r\n\r\ntests/test_modeling_common.py:807:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__\r\n    result = self.forward(*input, **kwargs)\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:156: in forward\r\n    return self.gather(outputs, self.output_device)\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py:168: in gather\r\n    return gather(outputs, output_device, dim=self.dim)\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:68: in gather\r\n    res = gather_map(outputs)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\noutputs = [MaskedLMOutput(loss=None, logits=tensor([[[-0.0008, 0.3751, -0.0050, ..., 0.0933, -0.1563, 0.0494],\r\n [-0....0, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]]],\r\n device='cuda:1'), hidden_states=None, attentions=None)]\r\n\r\n    def gather_map(outputs):\r\n        out = outputs[0]\r\n        if isinstance(out, torch.Tensor):\r\n            return Gather.apply(target_device, dim, *outputs)\r\n        if out is None:\r\n            return None\r\n        if isinstance(out, dict):\r\n            if not all((len(out) == len(d) for d in outputs)):\r\n                raise ValueError('All dicts must have the same number of keys')\r\n            return type(out)(((k, gather_map([d[k] for d in outputs]))\r\n                              for k in out))\r\n>       return type(out)(map(gather_map, zip(*outputs)))\r\nE       TypeError: __init__() missing 1 required positional argument: 'logits'\r\n\r\n/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py:63: TypeError\r\n==================================================================== short test summary info =====================================================================\r\nFAILED tests/test_modeling_flaubert.py::FlaubertModelTest::test_multigpu_data_parallel_forward - TypeError: __init__() missing 1 required positional argument: ...\r\n================================================================= 1 failed, 4 warnings in 5.54s =============================================================\r\n```", "Digging deeper it appears that `torch.nn.parallel.scatter_gather.gather` can't gather outputs that are `dataclasses` - it gets a list of outputs that are `dataclasses` and completely breaks them down into just one value.\r\n\r\nThis pytorch hack fixes the problem for the failing tests. Swap the gather function for this one (including import):\r\n\r\n```\r\n# torch/nn/parallel/scatter_gather.py\r\n\r\nimport dataclasses\r\ndef gather(outputs, target_device, dim=0):\r\n    r\"\"\"\r\n    Gathers tensors from different GPUs on a specified device\r\n      (-1 means the CPU).\r\n    \"\"\"\r\n    def gather_map(outputs):\r\n        out = outputs[0]\r\n        if dataclasses.is_dataclass(out):\r\n            outputs = [dataclasses.asdict(out) for out in outputs]\r\n            out = outputs[0]\r\n        if isinstance(out, torch.Tensor):\r\n            return Gather.apply(target_device, dim, *outputs)\r\n        if out is None:\r\n            return None\r\n\r\n        if isinstance(out, dict):\r\n            if not all((len(out) == len(d) for d in outputs)):\r\n                raise ValueError('All dicts must have the same number of keys')\r\n            return type(out)(((k, gather_map([d[k] for d in outputs]))\r\n                              for k in out))\r\n\r\n        return type(out)(map(gather_map, zip(*outputs)))\r\n\r\n    # Recursive function calls like this create reference cycles.\r\n    # Setting the function to None clears the refcycle.\r\n    try:\r\n        res = gather_map(outputs)\r\n    finally:\r\n        gather_map = None\r\n    return res\r\n```\r\n\r\nIt converts the dataclass output into a dict and then it works - at least the tests do, I haven't tried OP's example.\r\n\r\nWhat I added is:\r\n\r\n```\r\nimport dataclasses\r\n```\r\nand\r\n```\r\n        if dataclasses.is_dataclass(out):\r\n            outputs = [dataclasses.asdict(out) for out in outputs]\r\n            out = outputs[0]\r\n```\r\n\r\nI filed a bug report with pytorch: https://github.com/pytorch/pytorch/issues/41327\r\n", "My pytorch tweak fixes the transformers tests, but when trying to use it on OP's use - it fails elsewhere:\r\n\r\n```\r\nexport TASK_NAME=CoLA\r\nexport GLUE_DIR=/tmp/glue_data/\r\npython ./examples/text-classification/run_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --data_dir $GLUE_DIR/$TASK_NAME --max_seq_length 128 --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/$TASK_NAME/\r\n```\r\n\r\n```\r\n ...\r\n  File \"/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 98, in <listcomp>\r\n    outputs = [dataclasses.asdict(out) for out in outputs]\r\n  File \"/home/stas/anaconda3/envs/main/lib/python3.7/dataclasses.py\", line 1045, in asdict\r\n    return _asdict_inner(obj, dict_factory)\r\n  File \"/home/stas/anaconda3/envs/main/lib/python3.7/dataclasses.py\", line 1052, in _asdict_inner\r\n    value = _asdict_inner(getattr(obj, f.name), dict_factory)\r\n  File \"/home/stas/anaconda3/envs/main/lib/python3.7/dataclasses.py\", line 1086, in _asdict_inner\r\n    return copy.deepcopy(obj)\r\n  File \"/home/stas/anaconda3/envs/main/lib/python3.7/copy.py\", line 161, in deepcopy\r\n    y = copier(memo)\r\n  File \"/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/tensor.py\", line 44, in __deepcopy__\r\n    raise RuntimeError(\"Only Tensors created expl\r\n\r\nRuntimeError: Only Tensors created explicitly by the user (graph leaves) support the deepcopy protocol at the moment\r\n```\r\n\r\nSo that conversion from dataclass from dict didn't work elsewhere. Needs more digging.\r\n", "@vanh17, until this is sorted out, you may choose to run on a single gpu which I tested to work. \r\n\r\nYou can accomplish that by adding to your command line:\r\n```\r\nenv CUDA_VISIBLE_DEVICES=0 python ./examples/text-classification/run_glue.py ...\r\n```\r\nchange 0 to whichever GPU you want it to be run on.", "I think this is related to https://github.com/huggingface/transformers/pull/5685\r\n\r\nWhen used in a `nn.DataParallel` setup a model should be instantiated with `return_tuple=True`.\r\n\r\nIt would be nice to check if there is a way for a model to know that it's being part of a `nn.DataParallel` so it can setup this argument automatically. If someone wants to give it a look....\r\n\r\ncc @sgugger ", "I can look at this when I'm back next week. In the meantime, merging #5685 will fix the issue.", "> merging #5685 will fix the issue.\r\n\r\nI verified that the `run_glue.py` on dual gpu work after this merge.\r\n\r\nIs there a CirleCI config that supports dual gpu tests?\r\n\r\nedit: multigpu tests still fail as before. I forgot to back out the pytorch hack.", "So, if with n_gpu > 1, it works w/o returning outputs wrapped in a model's output dataclass, why do we need to ever return a dataclass and not *always* a tuple regardless of n_gpu's value? same goes for the suggestion by @thomwolf - only with `nn.DataParallel`. https://github.com/huggingface/transformers/pull/5685 just moved the problem elsewhere, since it's not possible to rely on a model to return an output dataclass and the behavior is different depending on the hardware setup.", "Always returning tuples require user to know which output is at which position (and it changes depending on the parameters you pass to the model) so having something self-documenting was a feature users asked for a long time. ", "I totally understand that and this is great. But if a user codes for that API relying on outputs being a dataclass, and their code is then run in multi-gpu env it will break. Are we on the same page now?\r\n\r\nI can see 2 solutions that lead to a consistent API:\r\n\r\n1. getting pytorch to support not only dict outputs but also dataclass in `gather` https://github.com/pytorch/pytorch/issues/41327\r\n\r\n2. re-encapsulate the tuple into the original output dataclass when it returns from pytorch to transformers and before it is passed back to the user. There will be an additional small overhead. But we don't really have a proxy to insert such manipulation, so probably this is not feasible at the moment.\r\n\r\n", "I updated my earlier comment - multigpu tests still fail after @sgugger's commit as before - so only part of the problem has been worked around. I forgot to back out the proposed pytorch hack so it looked like it worked, but it is not.", "wrt the change https://github.com/huggingface/transformers/pull/5685, won't this be fitting:\r\n\r\n```\r\n        # Our model outputs do not work with DataParallel, so forcing return tuple.\r\n-        if self.args.n_gpu > 1:\r\n+        if isinstance(model, nn.DataParallel):\r\n            inputs[\"return_tuple\"] = True\r\n```\r\n\r\nas @thomwolf suggested. But perhaps practically they are covering the same cases.\r\n\r\nI'm digging for where else this is needed to make the tests work.\r\n", "OK, to make the common tests work, this is needed:\r\n\r\n```\r\ndiff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py\r\nindex 0021f23c..683b7913 100644\r\n--- a/tests/test_modeling_common.py\r\n+++ b/tests/test_modeling_common.py\r\n@@ -803,6 +803,7 @@ class ModelTesterMixin:\r\n\r\n            # Wrap model in nn.DataParallel\r\n            model = torch.nn.DataParallel(model)\r\n+            inputs_dict[\"return_tuple\"] = True\r\n            with torch.no_grad():\r\n                _ = model(**self._prepare_for_class(inputs_dict, model_class))\r\n\r\n```\r\nyikes.\r\n\r\nPR for both: https://github.com/huggingface/transformers/pull/5733\r\nLet me know if you prefer a separate PR for each.", "Also why does the `return_tuple` param defaults to `None` and not `False` in most models, whereas in some it's `False`. It probably should be `False` everywhere, no?\r\n\r\nSame applies to `output_hidden_states` and `output_attentions` `forward` params - sometimes they default to `None` and other times `False`. Probably should be `False` everywhere.\r\n", "I think we can find a work-around on this in the meantime by allowing our output data classes to accepts list/tuple as inputs to the first argument and spread these over the other arguments in `__post_init__`. I'll try to make a PR on this.", "> I think we can find a work-around on this in the meantime by allowing our output data classes to accepts list/tuple as inputs to the first argument and spread these over the other arguments in `__post_init__`. I'll try to make a PR on this.\r\n\r\nTo me, it is now working with this workaround (fine-tuning LMs). But, shall I get concerned about the reliability of the results?", "> shall I get concerned about the reliability of the results?\r\n\r\nIf you're referring to https://github.com/huggingface/transformers/pull/5685 commit, there is no reason to be concerned. There was no \"functional\" change per se, this is really sorting out the API - trying to make it consistent.", "I also ran into a similar problem when running the script from `examples/question-answering` using two GPUs from the master branch:\r\n\r\n```\r\npython run_squad.py \\\r\n  --model_type bert \\\r\n  --model_name_or_path bert-base-uncased \\\r\n  --do_train \\\r\n  --do_eval \\\r\n  --do_lower_case \\\r\n  --train_file $SQUAD_DIR/train-v1.1.json \\\r\n  --predict_file $SQUAD_DIR/dev-v1.1.json \\\r\n  --per_gpu_train_batch_size 12 \\\r\n  --per_gpu_eval_batch_size=16 \\\r\n  --learning_rate 3e-5 \\\r\n  --num_train_epochs 2.0 \\\r\n  --max_seq_length 320 \\\r\n  --doc_stride 128 \\\r\n  --output_dir $SQUAD_DIR/bert-base-uncased-squad_v1\r\n```\r\n\r\nThe error looks like below:\r\n```\r\n  File \"run_squad.py\", line 821, in <module>\r\n    main()\r\n  File \"run_squad.py\", line 764, in main\r\n    global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n  File \"run_squad.py\", line 202, in train\r\n    outputs = model(**inputs)\r\n  File \"/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n    result = self.forward(*input, **kwargs)\r\n  File \"/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 156, in forward\r\n    return self.gather(outputs, self.output_device)\r\n  File \"/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 168, in gather\r\n    return gather(outputs, output_device, dim=self.dim)\r\n  File \"/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 68, in gather\r\n    res = gather_map(outputs)\r\n  File \"/home/qqcao/work/transformers/.env/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py\", line 63, in gather_map\r\n    return type(out)(map(gather_map, zip(*outputs)))\r\nTypeError: __init__() missing 2 required positional arguments: 'start_logits' and 'end_logits'\r\n```\r\nI have to roll back to version 3.0.0. Do you have any ETA when this will get fixed? Thanks.", "@csarron, this should fix it.\r\n\r\n```\r\n--- a/examples/question-answering/run_squad.py\r\n+++ b/examples/question-answering/run_squad.py\r\n@@ -199,6 +199,9 @@ def train(args, train_dataset, model, tokenizer):\r\n                    {\"langs\": (torch.ones(batch[0].shape, dtype=torch.int64) * args.lang_id).to(args.device)}\r\n                )\r\n\r\n+            if isinstance(model, torch.nn.DataParallel):\r\n+                inputs[\"return_tuple\"] = True\r\n+\r\n            outputs = model(**inputs)\r\n            # model outputs are always tuple in transformers (see doc)\r\n            loss = outputs[0]\r\n```\r\n\r\nIt appears that this will now need to be added **everywhere** before model is invoked, and users will need to do that too should they code their own and intend to use `DataParallel`. \r\n\r\nSurely, there must be a better way. I suppose that when this neat `dataclass` feature was added it wasn't tested on `nn.DataParallel`. Perhaps best to back it out, figure out for pytorch to support `dataclasses` in scatter/gather and then put it back in with perhaps a monkeypatch for older pytorch versions. https://github.com/pytorch/pytorch/issues/41327\r\n\r\np.s. Note that the project's scripts/modules don't consistently `import torch.nn as nn`, so sometimes it's `torch.nn.DataParallel`, whereas other times `nn.DataParallel`.", "Got same problem here.", "@sgugger came up with a transparent solution for this issue: https://github.com/huggingface/transformers/pull/5941", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,601
1,601
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. python ./examples/text-classification/run_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --data_dir $GLUE_DIR/$TASK_NAME --max_seq_length 128 --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/$TASK_NAME/ File "./examples/text-classification/run_glue.py", line 246, in <module> main() File "./examples/text-classification/run_glue.py", line 173, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/work/vnhh/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 499, in train tr_loss += self._training_step(model, inputs, optimizer) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/transformers/trainer.py", line 622, in _training_step outputs = model(**inputs) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward return self.gather(outputs, self.output_device) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather return gather(outputs, output_device, dim=self.dim) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/work/vnhh/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map return type(out)(map(gather_map, zip(*outputs))) TypeError: __init__() missing 1 required positional argument: 'logits' ## Expected behavior It should be able to run and finish training. ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.4.0-165-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.5 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - tensorboardX: 1.9.0
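A runnable sketch of the fix that emerged in the thread above: force tuple outputs when the model is wrapped in `nn.DataParallel`. Note the assumptions: `return_tuple` was the 3.0.x-era forward flag (later replaced by `return_dict`), at least one CUDA device is available, and the model name and inputs are placeholders:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model = torch.nn.DataParallel(model.to(0))  # assumes at least one CUDA device

inputs = tokenizer(["a cheap car", "an expensive car"], return_tensors="pt", padding=True)
inputs = {k: v.to(0) for k, v in inputs.items()}

# Workaround from the thread (merged as #5685): DataParallel's gather() cannot
# rebuild the model-output dataclasses, so ask the model for plain tuples.
if isinstance(model, torch.nn.DataParallel):
    inputs["return_tuple"] = True
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs[0]
```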
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5693/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/5693/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5692
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5692/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5692/comments
https://api.github.com/repos/huggingface/transformers/issues/5692/events
https://github.com/huggingface/transformers/pull/5692
655,345,792
MDExOlB1bGxSZXF1ZXN0NDQ3ODYxMjY3
5,692
rename the functions to match the rest of the test convention
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=h1) Report\n> Merging [#5692](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `0.16%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5692/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5692 +/- ##\n==========================================\n- Coverage 78.26% 78.09% -0.17% \n==========================================\n Files 146 146 \n Lines 25998 25998 \n==========================================\n- Hits 20348 20304 -44 \n- Misses 5650 5694 +44 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.02% <0.00%> (-1.29%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5692/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=footer). Last update [0befb51...921cabc](https://codecov.io/gh/huggingface/transformers/pull/5692?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
no functional change
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5692/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5692/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5692", "html_url": "https://github.com/huggingface/transformers/pull/5692", "diff_url": "https://github.com/huggingface/transformers/pull/5692.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5692.patch", "merged_at": 1594634990000 }
https://api.github.com/repos/huggingface/transformers/issues/5691
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5691/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5691/comments
https://api.github.com/repos/huggingface/transformers/issues/5691/events
https://github.com/huggingface/transformers/issues/5691
655,339,554
MDU6SXNzdWU2NTUzMzk1NTQ=
5,691
Cannot import EvalPrediction from transformers
{ "login": "vanh17", "id": 10501538, "node_id": "MDQ6VXNlcjEwNTAxNTM4", "avatar_url": "https://avatars.githubusercontent.com/u/10501538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vanh17", "html_url": "https://github.com/vanh17", "followers_url": "https://api.github.com/users/vanh17/followers", "following_url": "https://api.github.com/users/vanh17/following{/other_user}", "gists_url": "https://api.github.com/users/vanh17/gists{/gist_id}", "starred_url": "https://api.github.com/users/vanh17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vanh17/subscriptions", "organizations_url": "https://api.github.com/users/vanh17/orgs", "repos_url": "https://api.github.com/users/vanh17/repos", "events_url": "https://api.github.com/users/vanh17/events{/privacy}", "received_events_url": "https://api.github.com/users/vanh17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @vanh17 , your transformers version is old, `EvalPrediction` is not available in 2.5.1. You can install transformers from source and then run the examples. \r\n" ]
1,594
1,594
1,594
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The task I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. python3 ./examples/text-classification/run_glue.py --model_name_or_path bert-base-uncased --task_name $TASK_NAME --do_train --do_eval --data_dir $GLUE_DIR/$TASK_NAME --max_seq_length 128 --per_device_eval_batch_size=2 --per_device_train_batch_size=2 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir /tmp/$TASK_NAME/cd ../.. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> It should run normally, but instead it gives the error: cannot import module EvalPrediction ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.5.1 - Platform: Linux-4.4.0-165-generic-x86_64-with-debian-stretch-sid - Python version: 3.6.5 - PyTorch version (GPU?): 1.2.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
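A minimal sketch of the fix suggested in the comment above, assuming a transformers install newer than 2.5.1 (for example, built from source); the array values are illustrative:

```python
# Minimal sketch: on transformers 2.5.1 this import raises ImportError, because
# EvalPrediction did not exist yet; on a newer/source install it succeeds.
import numpy as np
from transformers import EvalPrediction

# EvalPrediction is a simple container pairing model predictions with labels,
# which run_glue.py's compute_metrics callback consumes.
eval_pred = EvalPrediction(predictions=np.array([[0.1, 0.9]]), label_ids=np.array([1]))
print(eval_pred.predictions.argmax(-1), eval_pred.label_ids)
```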
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5691/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5691/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5690
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5690/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5690/comments
https://api.github.com/repos/huggingface/transformers/issues/5690/events
https://github.com/huggingface/transformers/issues/5690
655,310,122
MDU6SXNzdWU2NTUzMTAxMjI=
5,690
How can I predict missing letters in a sentence, like "I want to b _ _ the car because it is cheap."
{ "login": "taicaile", "id": 23215696, "node_id": "MDQ6VXNlcjIzMjE1Njk2", "avatar_url": "https://avatars.githubusercontent.com/u/23215696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/taicaile", "html_url": "https://github.com/taicaile", "followers_url": "https://api.github.com/users/taicaile/followers", "following_url": "https://api.github.com/users/taicaile/following{/other_user}", "gists_url": "https://api.github.com/users/taicaile/gists{/gist_id}", "starred_url": "https://api.github.com/users/taicaile/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/taicaile/subscriptions", "organizations_url": "https://api.github.com/users/taicaile/orgs", "repos_url": "https://api.github.com/users/taicaile/repos", "events_url": "https://api.github.com/users/taicaile/events{/privacy}", "received_events_url": "https://api.github.com/users/taicaile/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am not sure how to predict letters, but you can use BERT to predict words. ", "I'd try to train a character-level model. Some of the Reformer models are pretrained in a char-level setting, if I remember correctly: https://huggingface.co/models?search=reformer\r\n\r\nIn the future however, this question is more suited to [discuss.huggingface.co](https://discuss.huggingface.co)" ]
1,594
1,595
1,595
NONE
null
Hi, I am new to NLP, and I want to predict missing letters in a sentence. Here is an example: ```text I want to b _ _ the car because it is cheap. ```
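A minimal sketch of the word-level workaround from the first comment, assuming the fill-mask pipeline with bert-base-uncased; it fills a whole-word [MASK] rather than individual letters:

```python
# Sketch of the word-level workaround: BERT predicts whole tokens, so the
# letter blanks "b _ _" are replaced by a single [MASK] token.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("I want to [MASK] the car because it is cheap."):
    print(pred["sequence"], round(pred["score"], 3))
```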
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5690/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5690/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5689
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5689/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5689/comments
https://api.github.com/repos/huggingface/transformers/issues/5689/events
https://github.com/huggingface/transformers/issues/5689
655,291,409
MDU6SXNzdWU2NTUyOTE0MDk=
5,689
Is Write With Transformers open source?
{ "login": "DrAlta", "id": 25010423, "node_id": "MDQ6VXNlcjI1MDEwNDIz", "avatar_url": "https://avatars.githubusercontent.com/u/25010423?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrAlta", "html_url": "https://github.com/DrAlta", "followers_url": "https://api.github.com/users/DrAlta/followers", "following_url": "https://api.github.com/users/DrAlta/following{/other_user}", "gists_url": "https://api.github.com/users/DrAlta/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrAlta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrAlta/subscriptions", "organizations_url": "https://api.github.com/users/DrAlta/orgs", "repos_url": "https://api.github.com/users/DrAlta/repos", "events_url": "https://api.github.com/users/DrAlta/events{/privacy}", "received_events_url": "https://api.github.com/users/DrAlta/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "If you add your model to the hub, you'll get the inference widget & API that you can use for demos or integration into your product: https://huggingface.co/docs", "We don't currently have short-term plans to open source Write With Transformers's frontend\r\n\r\nFollowing up on what @clmnt said, we have an option (currently in private beta) for GPU acceleration of the inference API which would let you built similar (fast!) applications.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
I want to use Write With Transformers with my own model. My first thought was that I could just download it and modify the source to point to my model, but I can't find WWT anywhere in the repo. Do I have to publish my model and then make a request for you to add it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5689/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5689/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5688
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5688/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5688/comments
https://api.github.com/repos/huggingface/transformers/issues/5688/events
https://github.com/huggingface/transformers/pull/5688
655,264,552
MDExOlB1bGxSZXF1ZXN0NDQ3ODA2Njg5
5,688
doc improvements
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=h1) Report\n> Merging [#5688](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.24%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5688/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5688 +/- ##\n==========================================\n- Coverage 77.01% 76.76% -0.25% \n==========================================\n Files 128 146 +18 \n Lines 21615 25983 +4368 \n==========================================\n+ Hits 16646 19945 +3299 \n- Misses 4969 6038 +1069 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: |\n| ... 
and [109 more](https://codecov.io/gh/huggingface/transformers/pull/5688/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=footer). Last update [dc31a72...2242bb2](https://codecov.io/gh/huggingface/transformers/pull/5688?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
A few documentation improvements: one variable-name consistency rename, and the rest are small tweaks, with one clarification.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5688/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5688", "html_url": "https://github.com/huggingface/transformers/pull/5688", "diff_url": "https://github.com/huggingface/transformers/pull/5688.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5688.patch", "merged_at": 1594635017000 }
https://api.github.com/repos/huggingface/transformers/issues/5687
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5687/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5687/comments
https://api.github.com/repos/huggingface/transformers/issues/5687/events
https://github.com/huggingface/transformers/pull/5687
655,245,857
MDExOlB1bGxSZXF1ZXN0NDQ3NzkzNTg1
5,687
Making ONNX conversion directly load the model and tokenizer + adding tests
{ "login": "abelriboulot", "id": 34995848, "node_id": "MDQ6VXNlcjM0OTk1ODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/34995848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abelriboulot", "html_url": "https://github.com/abelriboulot", "followers_url": "https://api.github.com/users/abelriboulot/followers", "following_url": "https://api.github.com/users/abelriboulot/following{/other_user}", "gists_url": "https://api.github.com/users/abelriboulot/gists{/gist_id}", "starred_url": "https://api.github.com/users/abelriboulot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abelriboulot/subscriptions", "organizations_url": "https://api.github.com/users/abelriboulot/orgs", "repos_url": "https://api.github.com/users/abelriboulot/repos", "events_url": "https://api.github.com/users/abelriboulot/events{/privacy}", "received_events_url": "https://api.github.com/users/abelriboulot/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=h1) Report\n> Merging [#5687](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.89%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5687/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5687 +/- ##\n==========================================\n+ Coverage 77.01% 77.90% +0.89% \n==========================================\n Files 128 146 +18 \n Lines 21615 25983 +4368 \n==========================================\n+ Hits 16646 20243 +3597 \n- Misses 4969 5740 +771 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100.00% <ø> (+7.14%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: |\n| ... 
and [108 more](https://codecov.io/gh/huggingface/transformers/pull/5687/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=footer). Last update [dc31a72...977fc15](https://codecov.io/gh/huggingface/transformers/pull/5687?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I added a bit to this. The wrapper model is now recursive, which lets models such as T5 export properly. Additionally, thanks to the ModelOutput refactor, now output names can be automatically extracted!", "Any news on this @mfuntowicz ? I'm waiting to have this merged to attempt to make a T5 -> ONNX -> ONNX.js script.", "I'm adding bunch of people to the PR as I would like to get their feeling on these changes 😊 ", "Thanks for the review @mfuntowicz and I definitely understand the need to get a consensus on this. To sum up why I think it might be a good idea to remove the dependency on pipelines is because at the moment the pipeline is only used as a way to load a model and a tokenizer. I came into this issue while trying to export T5 (and it seems that a few people also had that issue in the thread for #5518). Doing it directly without passing by the pipelines is closer in my opinion to what the script actually does (grabbing a model and a tokenizer), makes every ONNX-compatible model exportable by default, and avoids confusions such as that choosing a different pipeline for the same model would change the output ONNX model.\r\nAs for the other elements, thanks a lot for noting them, I completely agree with the points, and didn't know about the return_tuples parameter, that should save a lot of weirdness in the code! Although I guess this might lose the named outputs part, but I'm sure there's a cleaner way to do it.", "If I may add some motivation to such PR, some tasks like MultiChoice questions are not managed by pipelines. Therefore we had to perform the conversion ourselves, and it appeared that depending of the model, there are some bugs in pytorch to onnx method (from pytorch lib) in the order input have to be provided (https://github.com/microsoft/onnxruntime/issues/4292, I know you have fixed a similar issue in this repo but for pipeline tasks only). Now, we are using some other model that have not such bug, we need to manage both cases, etc. Being able to rely on the lib code (well tested / documented) instead of having to maintain our own code would be a big improvement and may directly increase the number of teams putting Transformer based models in prod (onnx runtime providing big perf boost).", "Hey @mfuntowicz !\r\nJust wanted to check in to see whether I should adjust the PR with your comments or whether the team prefers to keep the pipelines as is.", "Hi @abelriboulot, do you think your PR can solve https://github.com/huggingface/transformers/issues/6503 as well?", "Hi @Zhen-hao, I took a look at your issue and I think it might. I haven't heard back from huggingface on this PR, so I think I might make a separate package to easily convert huggingface models to ONNX. 
If you're interested I'll keep you in the loop!", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "In case anyone is looking for it, the package is [there](https://github.com/abelriboulot/onnxt5)! Hope it helps.", "Is there any way to convert Helsinki-NLP/opus-mt-en-ROMANCE to onnx format ?", "@patil-suraj was working on a library that might help with that, he might know better!" ]
1,594
1,604
1,603
CONTRIBUTOR
null
This is a proposal for an update to the ONNX conversion: - First, I investigated issue #4906 as I was having the same issue. It is due to a dependency on this [commit](https://github.com/pytorch/pytorch/commit/96989a2a114de9b77e7dd9495d62c4a8a549b40d) from the ONNX team, available from version 1.5.0 of PyTorch; I therefore added it to the extra requirements and updated the messages to make it more obvious (along with keras2onnx for TF). - The bigger part of this PR aims to remove the script's dependency on specific pipelines. I believe this dependency is not needed, as a conversion to ONNX simply requires a model and a tokenizer. There are a few advantages to doing it this way: 1. It would solve #4788 and help greatly towards #5075. With this update, the conversion to ONNX no longer requires a given pipeline, so any model can be converted (I have tested with T5, for instance). 2. It is maybe clearer, since the elements of the pipeline are not exported to ONNX. Let me know your thoughts on that one. - I added some fast-running integration testing of the script to the existing tests. - I made the ONNX export compatible with the ModelOutput refactor (I believe previously it wasn't). A question I have is whether I should add a message, or allow the user to provide a pipeline even if it is not used, in order to keep it backward-compatible.
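A rough sketch of the pipeline-free export idea described above, not the PR's actual code; the model name, output names, and opset version are assumptions, and depending on the transformers version the model may need to return plain tuples (e.g. via return_tuple) for ONNX tracing to work:

```python
# Illustrative sketch only: exporting directly from a (model, tokenizer) pair,
# with no task pipeline involved.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

encoded = tokenizer.encode_plus("Sample input text", return_tensors="pt")
torch.onnx.export(
    model,
    (encoded["input_ids"], encoded["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],  # illustrative names
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
    },
    opset_version=11,  # assumption; the PR notes PyTorch >= 1.5.0 is required
)
```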
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5687/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5687/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5687", "html_url": "https://github.com/huggingface/transformers/pull/5687", "diff_url": "https://github.com/huggingface/transformers/pull/5687.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5687.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5686
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5686/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5686/comments
https://api.github.com/repos/huggingface/transformers/issues/5686/events
https://github.com/huggingface/transformers/pull/5686
655,222,489
MDExOlB1bGxSZXF1ZXN0NDQ3Nzc3NTA3
5,686
[Fix] github actions CI by reverting #5138
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=h1) Report\n> Merging [#5686](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `1.10%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5686/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5686 +/- ##\n==========================================\n+ Coverage 77.01% 78.11% +1.10% \n==========================================\n Files 128 146 +18 \n Lines 21615 25983 +4368 \n==========================================\n+ Hits 16646 20297 +3651 \n- Misses 4969 5686 +717 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.50% <ø> (-3.60%)` | :arrow_down: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: |\n| ... 
and [107 more](https://codecov.io/gh/huggingface/transformers/pull/5686/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=footer). Last update [dc31a72...7c84917](https://codecov.io/gh/huggingface/transformers/pull/5686?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Did you try to use `2>&1 | tee output.txt` instead of just `| tee output.txt` ? Don't know at all if this would work instead, but might be worth trying giving the first answer in this stack overflow: https://stackoverflow.com/questions/418896/how-to-redirect-output-to-a-file-and-stdout", "@patrickvonplaten I think that will just make the logfile include stderr, and the job might still succeed for the same reason it is succeeding now. Do you know how to test locally?\r\n", "Gunna try this for the 7pm run tonight and then we can be more aggressive later." ]
1,594
1,594
1,594
CONTRIBUTOR
null
On GitHub Actions, even when the tests fail the job is green, presumably because of my artifacts change (#5318). I am not clear on why this happens, but I have verified that it does not happen on CircleCI. In the screenshot below, there is a green check mark, even though 2 tests have failed: ![image](https://user-images.githubusercontent.com/6045025/87226810-6103b400-c364-11ea-875a-dcbd7c1e49ca.png)
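A toy reproduction sketch of the tee behaviour discussed in the comments; the log path and shell choice are assumptions:

```python
# Toy reproduction: the pipeline's exit status is tee's (0), masking the failure,
# unless bash's pipefail option propagates the first non-zero status.
import subprocess

masked = subprocess.run("false | tee /tmp/out.txt", shell=True,
                        executable="/bin/bash")
print(masked.returncode)  # 0: failure hidden by the pipe

strict = subprocess.run("set -o pipefail; false | tee /tmp/out.txt", shell=True,
                        executable="/bin/bash")
print(strict.returncode)  # 1: pipefail surfaces the failure
```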
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5686/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5686", "html_url": "https://github.com/huggingface/transformers/pull/5686", "diff_url": "https://github.com/huggingface/transformers/pull/5686.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5686.patch", "merged_at": 1594674739000 }
https://api.github.com/repos/huggingface/transformers/issues/5685
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5685/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5685/comments
https://api.github.com/repos/huggingface/transformers/issues/5685/events
https://github.com/huggingface/transformers/pull/5685
655,199,953
MDExOlB1bGxSZXF1ZXN0NDQ3NzYyMTMz
5,685
Fix Trainer in DataParallel setting
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=h1) Report\n> Merging [#5685](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.20%`.\n> The diff coverage is `25.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5685/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5685 +/- ##\n==========================================\n- Coverage 78.11% 77.91% -0.21% \n==========================================\n Files 146 146 \n Lines 25983 25987 +4 \n==========================================\n- Hits 20297 20247 -50 \n- Misses 5686 5740 +54 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.84% <25.00%> (-0.12%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.94% <0.00%> (-6.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5685/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=footer). Last update [7fad617...05ec8f6](https://codecov.io/gh/huggingface/transformers/pull/5685?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
COLLABORATOR
null
The new output types seem to break data parallel, FYI; see the comment on #5671. This is because of the line ``` return type(out)(map(gather_map, zip(*outputs))) ``` in `scatter_gather`, which tries to reconstruct an output of the same type as ours (and fails, since it does not provide the necessary arguments). There is no way to fix our `ModelOutput` to work with this AFAICT. However, we have the `return_tuple` argument to fix the issue :-)
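A toy illustration of the failure mode described above, using a made-up output class rather than the library's ModelOutput; it shows why rebuilding an output via type(out)(iterable) breaks a keyword-style constructor while a plain tuple survives:

```python
# Toy model of the scatter_gather failure: type(out)(<one iterable>) cannot
# satisfy a constructor that expects named fields, but tuple(<iterable>) can.
from collections import OrderedDict

import torch

class ToyModelOutput(OrderedDict):
    def __init__(self, loss, logits):
        super().__init__(loss=loss, logits=logits)

out = ToyModelOutput(loss=torch.tensor(0.5), logits=torch.zeros(2, 3))
try:
    type(out)(zip(out.values(), out.values()))  # mimics gather's rebuild
except TypeError as err:
    print("rebuild fails:", err)  # missing required argument 'logits'

as_tuple = tuple(out.values())
rebuilt = type(as_tuple)(zip(as_tuple, as_tuple))  # plain tuples rebuild fine
```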
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5685/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5685", "html_url": "https://github.com/huggingface/transformers/pull/5685", "diff_url": "https://github.com/huggingface/transformers/pull/5685.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5685.patch", "merged_at": 1594643859000 }
https://api.github.com/repos/huggingface/transformers/issues/5684
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5684/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5684/comments
https://api.github.com/repos/huggingface/transformers/issues/5684/events
https://github.com/huggingface/transformers/pull/5684
655,196,972
MDExOlB1bGxSZXF1ZXN0NDQ3NzYwMTI0
5,684
fix incorrect docstring on bart summarization example
{ "login": "SamuelCahyawijaya", "id": 2826602, "node_id": "MDQ6VXNlcjI4MjY2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelCahyawijaya", "html_url": "https://github.com/SamuelCahyawijaya", "followers_url": "https://api.github.com/users/SamuelCahyawijaya/followers", "following_url": "https://api.github.com/users/SamuelCahyawijaya/following{/other_user}", "gists_url": "https://api.github.com/users/SamuelCahyawijaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/SamuelCahyawijaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SamuelCahyawijaya/subscriptions", "organizations_url": "https://api.github.com/users/SamuelCahyawijaya/orgs", "repos_url": "https://api.github.com/users/SamuelCahyawijaya/repos", "events_url": "https://api.github.com/users/SamuelCahyawijaya/events{/privacy}", "received_events_url": "https://api.github.com/users/SamuelCahyawijaya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=h1) Report\n> Merging [#5684](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.15%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5684/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5684 +/- ##\n==========================================\n- Coverage 78.11% 77.96% -0.16% \n==========================================\n Files 146 146 \n Lines 25983 25983 \n==========================================\n- Hits 20297 20258 -39 \n- Misses 5686 5725 +39 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.80% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-3.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5684/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=footer). Last update [7fad617...05ce6f7](https://codecov.io/gh/huggingface/transformers/pull/5684?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Ah, I am sorry, it's my mistake, I mixed up in my environment with the previous release version 2.9.0 which doesn't have `__call__` function implemented on the base tokenizer class." ]
1,594
1,594
1,594
NONE
null
Change the summarization example for BART from ``` inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') ``` to ``` inputs = tokenizer.encode_plus(ARTICLE_TO_SUMMARIZE, max_length=1024, return_tensors='pt') ```
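A version-context sketch, with an illustrative checkpoint and text: on transformers 3.x both spellings produce the same encoding, while 2.9.x only has encode_plus, which is what the follow-up comment above refers to:

```python
# Sketch: tokenizer(...) (__call__) arrived in v3.x; encode_plus also works there,
# so the docstring example was valid and the confusion came from running v2.9.x.
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
text = "My friends are cool but they eat too many carbs."

via_call = tokenizer([text], max_length=1024, return_tensors="pt")
via_encode_plus = tokenizer.encode_plus(text, max_length=1024, return_tensors="pt")
assert via_call["input_ids"].tolist() == via_encode_plus["input_ids"].tolist()
```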
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5684/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5684", "html_url": "https://github.com/huggingface/transformers/pull/5684", "diff_url": "https://github.com/huggingface/transformers/pull/5684.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5684.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5683
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5683/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5683/comments
https://api.github.com/repos/huggingface/transformers/issues/5683/events
https://github.com/huggingface/transformers/pull/5683
655,196,576
MDExOlB1bGxSZXF1ZXN0NDQ3NzU5ODUy
5,683
Add Microsoft's CodeBERT
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=h1) Report\n> Merging [#5683](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.75%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5683/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5683 +/- ##\n==========================================\n- Coverage 78.11% 77.36% -0.76% \n==========================================\n Files 146 146 \n Lines 25983 25983 \n==========================================\n- Hits 20297 20102 -195 \n- Misses 5686 5881 +195 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.71% <0.00%> (-0.26%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5683/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=footer). Last update [7fad617...cbd3d7a](https://codecov.io/gh/huggingface/transformers/pull/5683?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> @guoday\r\n\r\nHi, @JetRunner .\r\nThanks a lot. It look great.\r\n\r\n\r\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
@guoday
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5683/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5683", "html_url": "https://github.com/huggingface/transformers/pull/5683", "diff_url": "https://github.com/huggingface/transformers/pull/5683.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5683.patch", "merged_at": 1594474651000 }
https://api.github.com/repos/huggingface/transformers/issues/5682
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5682/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5682/comments
https://api.github.com/repos/huggingface/transformers/issues/5682/events
https://github.com/huggingface/transformers/issues/5682
655,185,567
MDU6SXNzdWU2NTUxODU1Njc=
5,682
What is the decoder_input for encoder-decoder transformer in training time?
{ "login": "guotong1988", "id": 4702353, "node_id": "MDQ6VXNlcjQ3MDIzNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guotong1988", "html_url": "https://github.com/guotong1988", "followers_url": "https://api.github.com/users/guotong1988/followers", "following_url": "https://api.github.com/users/guotong1988/following{/other_user}", "gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions", "organizations_url": "https://api.github.com/users/guotong1988/orgs", "repos_url": "https://api.github.com/users/guotong1988/repos", "events_url": "https://api.github.com/users/guotong1988/events{/privacy}", "received_events_url": "https://api.github.com/users/guotong1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The spirit of the answer is right. There is a lot more detail in this [blogpost](https://sshleifer.github.io/blog_v2/jupyter/2020/03/12/bart.html)", "In the future, this would be a great Q for our forums: https://discuss.huggingface.co/ since it doesn't directly involve issues with the library. ", "Thank you again." ]
1,594
1,594
1,594
CONTRIBUTOR
null
https://datascience.stackexchange.com/questions/76261/whats-the-input-dimension-for-transformer-decoder-during-training Is the link's answer right? Thank you very much!
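A small sketch of the teacher-forcing shift the linked answer describes, with made-up token ids; the decoder start token id is a model-specific assumption:

```python
# Teacher forcing sketch: at training time the decoder's input is the gold
# target shifted one position to the right, not the model's own predictions.
import torch

labels = torch.tensor([[42, 7, 19, 2]])  # illustrative target ids, ending in EOS
decoder_start_token_id = 0               # assumption: value depends on the model

decoder_input_ids = torch.cat(
    [torch.full((labels.size(0), 1), decoder_start_token_id, dtype=labels.dtype),
     labels[:, :-1]],
    dim=1,
)
print(decoder_input_ids)  # tensor([[ 0, 42,  7, 19]])
```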
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5682/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5681
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5681/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5681/comments
https://api.github.com/repos/huggingface/transformers/issues/5681/events
https://github.com/huggingface/transformers/pull/5681
655,179,281
MDExOlB1bGxSZXF1ZXN0NDQ3NzQ5MDE5
5,681
[pipelines] Update fill mask pipeline to remove special tokens in the output
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=h1) Report\n> Merging [#5681](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.20%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5681/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5681 +/- ##\n==========================================\n- Coverage 78.11% 77.91% -0.21% \n==========================================\n Files 146 146 \n Lines 25983 25983 \n==========================================\n- Hits 20297 20244 -53 \n- Misses 5686 5739 +53 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.36% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `79.44% <0.00%> (-6.52%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.92% <0.00%> (-1.97%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <0.00%> (+1.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5681/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=footer). Last update [7fad617...c2fbb71](https://codecov.io/gh/huggingface/transformers/pull/5681?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi Thomas, I found the output is something like this with the latest version of `transformers`:\r\n```json\r\n[\r\n {\r\n \"sequence\": \"<s>if (x is not None) and(x>1)</s>\",\r\n \"score\": 0.7236990928649902,\r\n \"token\": 8,\r\n \"token_str\": \"Ġand\"\r\n },\r\n {\r\n \"sequence\": \"<s>if (x is not None) &(x>1)</s>\",\r\n \"score\": 0.10633797943592072,\r\n \"token\": 359,\r\n \"token_str\": \"Ġ&\"\r\n },\r\n {\r\n \"sequence\": \"<s>if (x is not None)and(x>1)</s>\",\r\n \"score\": 0.021604137495160103,\r\n \"token\": 463,\r\n \"token_str\": \"and\"\r\n },\r\n {\r\n \"sequence\": \"<s>if (x is not None) AND(x>1)</s>\",\r\n \"score\": 0.02122747339308262,\r\n \"token\": 4248,\r\n \"token_str\": \"ĠAND\"\r\n },\r\n {\r\n \"sequence\": \"<s>if (x is not None) if(x>1)</s>\",\r\n \"score\": 0.016991324722766876,\r\n \"token\": 114,\r\n \"token_str\": \"Ġif\"\r\n }\r\n]\r\n```\r\n\r\nHowever, when using `2.9.1`, I get:\r\n```python\r\n{'sequence': '<s> if (x is not None) and (x>1)</s>', 'score': 0.6049249172210693, 'token': 8}\r\n{'sequence': '<s> if (x is not None) or (x>1)</s>', 'score': 0.30680200457572937, 'token': 50}\r\n{'sequence': '<s> if (x is not None) if (x>1)</s>', 'score': 0.02133703976869583, 'token': 114}\r\n{'sequence': '<s> if (x is not None) then (x>1)</s>', 'score': 0.018607674166560173, 'token': 172}\r\n{'sequence': '<s> if (x is not None) AND (x>1)</s>', 'score': 0.007619690150022507, 'token': 4248}\r\n```\r\n\r\nThe output sequence of `2.9.1` is way much cleaner. Can this PR fix this?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,651
1,601
MEMBER
null
Small fix to remove the special tokens from the output of the fill-mask pipeline.
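For readers of this record, a minimal sketch of the decoding behavior this fix targets (illustrative only, not the PR's diff; the model name and input sentence are arbitrary choices): passing `skip_special_tokens=True` when decoding is what keeps markers such as `<s>`/`</s>` out of the text a fill-mask result reports.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

inputs = tokenizer("Paris is the <mask> of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs)[0]  # tuple indexing works across versions

# Position of the masked token, and the top prediction for it.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
filled = inputs["input_ids"].clone()
filled[0, mask_pos] = logits[0, mask_pos].argmax(dim=-1)

# skip_special_tokens=True strips <s> and </s>, matching the cleaned-up
# pipeline output this fix aims for.
print(tokenizer.decode(filled[0], skip_special_tokens=True))
```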
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5681/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5681", "html_url": "https://github.com/huggingface/transformers/pull/5681", "diff_url": "https://github.com/huggingface/transformers/pull/5681.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5681.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5680
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5680/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5680/comments
https://api.github.com/repos/huggingface/transformers/issues/5680/events
https://github.com/huggingface/transformers/issues/5680
655,169,245
MDU6SXNzdWU2NTUxNjkyNDU=
5,680
How to produce customized attention mask for BertModel?
{ "login": "empty-id", "id": 56990007, "node_id": "MDQ6VXNlcjU2OTkwMDA3", "avatar_url": "https://avatars.githubusercontent.com/u/56990007?v=4", "gravatar_id": "", "url": "https://api.github.com/users/empty-id", "html_url": "https://github.com/empty-id", "followers_url": "https://api.github.com/users/empty-id/followers", "following_url": "https://api.github.com/users/empty-id/following{/other_user}", "gists_url": "https://api.github.com/users/empty-id/gists{/gist_id}", "starred_url": "https://api.github.com/users/empty-id/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/empty-id/subscriptions", "organizations_url": "https://api.github.com/users/empty-id/orgs", "repos_url": "https://api.github.com/users/empty-id/repos", "events_url": "https://api.github.com/users/empty-id/events{/privacy}", "received_events_url": "https://api.github.com/users/empty-id/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Is there any help?", "@null-id I also want to know how to customize query-key attention mask. Did you solve the problem?" ]
1,594
1,602
1,600
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> The attention mask for BertModel needs a tensor sized (batch, seq-length). But what if I need to customize the attention for each token, just like UniLM, or some diagonal attention like GPT?
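One approach worth noting here, as a sketch under the assumption that the installed version's `get_extended_attention_mask` accepts 3D masks (recent releases do): `BertModel` can take an `attention_mask` of shape `(batch, seq, seq)`, so a per-token visibility pattern such as a GPT-style causal mask can be supplied directly instead of the usual `(batch, seq)` padding mask.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("a custom attention pattern", return_tensors="pt")
batch, seq_len = inputs["input_ids"].shape

# Lower-triangular (causal) mask: token i may attend only to tokens <= i.
# Shape (batch, seq, seq); 1 = visible, 0 = masked out.
causal_mask = torch.tril(torch.ones(seq_len, seq_len)).expand(batch, -1, -1)

outputs = model(input_ids=inputs["input_ids"], attention_mask=causal_mask)
last_hidden_state = outputs[0]
print(last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```

For UniLM-style mixed patterns, the same idea applies: build the `(seq, seq)` block structure per example before stacking into the batch dimension.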
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5680/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5679
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5679/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5679/comments
https://api.github.com/repos/huggingface/transformers/issues/5679/events
https://github.com/huggingface/transformers/pull/5679
655,163,840
MDExOlB1bGxSZXF1ZXN0NDQ3NzM5NDA0
5,679
Pipeline model type check
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=h1) Report\n> Merging [#5679](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `1.44%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5679/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5679 +/- ##\n==========================================\n- Coverage 78.11% 76.67% -1.45% \n==========================================\n Files 146 146 \n Lines 25983 25998 +15 \n==========================================\n- Hits 20297 19934 -363 \n- Misses 5686 6064 +378 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.96% <100.00%> (+0.59%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <0.00%> (-79.28%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `44.56% <0.00%> (-46.35%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `63.55% <0.00%> (-31.78%)` | :arrow_down: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.00% <0.00%> (-25.72%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `16.66% <0.00%> (-21.30%)` | :arrow_down: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.00% <0.00%> (-17.10%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `74.43% <0.00%> (-11.53%)` | :arrow_down: |\n| ... 
and [8 more](https://codecov.io/gh/huggingface/transformers/pull/5679/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=footer). Last update [7fad617...81471f4](https://codecov.io/gh/huggingface/transformers/pull/5679?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This is a reasonable change, however next time on Pipelines please wait for review from @mfuntowicz, @LysandreJik or I before merging\r\n\r\n(especially as it can impact the inference API)", "> This is a reasonable change, however next time on Pipelines please wait for review from @mfuntowicz, @LysandreJik or I before merging\n> \n> \n> \n> (especially as it can impact the inference API)\n\nOk, I wasn't aware of that. Is there some written guideline about these requirements? I'm a little confused from time to time." ]
1,594
1,594
1,594
CONTRIBUTOR
null
Adds a model type check for pipelines. See https://github.com/huggingface/transformers/issues/5678
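As a rough illustration of the idea (the names `SUPPORTED_MODEL_CLASSES` and `check_model_type` below are hypothetical, not the PR's actual code): the pipeline compares the supplied model's class against the head classes the task expects and fails fast instead of silently producing nonsense.

```python
from transformers import RobertaForMaskedLM, RobertaModel

# Hypothetical allow-list: head classes a fill-mask pipeline could accept.
SUPPORTED_MODEL_CLASSES = (RobertaForMaskedLM,)

def check_model_type(model, supported=SUPPORTED_MODEL_CLASSES):
    """Raise early instead of letting a headless model produce garbage."""
    if not isinstance(model, supported):
        raise ValueError(
            f"{model.__class__.__name__} is not supported for this task; "
            f"expected one of {[c.__name__ for c in supported]}."
        )

model = RobertaModel.from_pretrained("roberta-base")
check_model_type(model)  # ValueError: RobertaModel is not supported ...
```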
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5679/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5679", "html_url": "https://github.com/huggingface/transformers/pull/5679", "diff_url": "https://github.com/huggingface/transformers/pull/5679.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5679.patch", "merged_at": 1594528462000 }
https://api.github.com/repos/huggingface/transformers/issues/5678
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5678/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5678/comments
https://api.github.com/repos/huggingface/transformers/issues/5678/events
https://github.com/huggingface/transformers/issues/5678
655,150,803
MDU6SXNzdWU2NTUxNTA4MDM=
5,678
Weird output when using unexpected model type for pipelines
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm working on a fix now.", "This bug occurs irrespective `transformer` version I checked it for 2.8.0, 2.90 and 3.0.1\r\n\r\nPipeline returns incorrect output only when the model and tokenizer classes are used to initialize the pipeline.\r\n\r\nIf you use model and tokernizer parameters as path instead in form of string. The output is fine. Following snippet demonstrates this : \r\n\r\n```\r\nfrom transformers import RobertaModel, RobertaTokenizer, RobertaConfig\r\nfrom transformers import pipeline\r\n\r\nMODEL_PATH = 'roberta-base'\r\n\r\nmodel = RobertaModel.from_pretrained(MODEL_PATH)\r\ntokenizer = RobertaTokenizer.from_pretrained(MODEL_PATH)\r\n\r\nfill_from_path = pipeline(\r\n 'fill-mask',\r\n model=MODEL_PATH,\r\n tokenizer=MODEL_PATH\r\n)\r\n\r\nfill_from_model = pipeline(\r\n 'fill-mask',\r\n model=model,\r\n tokenizer=tokenizer\r\n)\r\nseq = 'I found a bug in <mask>'\r\nprint(fill_from_path(seq))\r\nprint(fill_from_model(seq))\r\n```\r\n\r\nThe output is the following. You can see the first output is fine where we used the model paths, but the second output where we provided the model and tokenizer classes has a problem.\r\n\r\n```\r\n[{'sequence': '<s> I found a bug in Firefox</s>', 'score': 0.051126863807439804, 'token': 30675}, {'sequence': '<s> I found a bug in Gmail</s>', 'score': 0.027283240109682083, 'token': 29004}, {'sequence': '<s> I found a bug in Photoshop</s>', 'score': 0.024683473631739616, 'token': 35197}, {'sequence': '<s> I found a bug in Java</s>', 'score': 0.021543316543102264, 'token': 24549}, {'sequence': '<s> I found a bug in Windows</s>', 'score': 0.018485287204384804, 'token': 6039}]\r\n[{'sequence': '<s> I found a bug in real</s>', 'score': 0.9705745577812195, 'token': 588}, {'sequence': '<s> I found a bug in here</s>', 'score': 0.00013350950030144304, 'token': 259}, {'sequence': '<s> I found a bug in within</s>', 'score': 6.807789031881839e-05, 'token': 624}, {'sequence': '<s> I found a bug in San</s>', 'score': 6.468965875683352e-05, 'token': 764}, {'sequence': '<s> I found a bug in 2015</s>', 'score': 6.282260437728837e-05, 'token': 570}]\r\n```", "@ashutosh-dwivedi-e3502 Try changing this line `model = RobertaModel.from_pretrained(MODEL_PATH)` into `model = AutoModelForMaskedLM.from_pretrained(MODEL_PATH)`", "@JuhaKiili That fixes it. Output with `model = AutoModelForMaskedLM.from_pretrained(MODEL_PATH)` is :\r\n\r\n```\r\nSome weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n/Users/asdwivedi/.virtualenvs/test-demo-TklxO9OB/lib/python3.8/site-packages/transformers/modeling_auto.py:796: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. 
Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.\r\n warnings.warn(\r\nSome weights of RobertaForMaskedLM were not initialized from the model checkpoint at roberta-base and are newly initialized: ['lm_head.decoder.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n[{'sequence': '<s>I found a bug in Firefox</s>', 'score': 0.05709619075059891, 'token': 30675, 'token_str': 'ĠFirefox'}, {'sequence': '<s>I found a bug in Gmail</s>', 'score': 0.03430333733558655, 'token': 29004, 'token_str': 'ĠGmail'}, {'sequence': '<s>I found a bug in WordPress</s>', 'score': 0.028388172388076782, 'token': 33398, 'token_str': 'ĠWordPress'}, {'sequence': '<s>I found a bug in Java</s>', 'score': 0.02571324072778225, 'token': 24549, 'token_str': 'ĠJava'}, {'sequence': '<s>I found a bug in Python</s>', 'score': 0.01953786611557007, 'token': 31886, 'token_str': 'ĠPython'}]\r\n[{'sequence': '<s>I found a bug in Firefox</s>', 'score': 0.05709619075059891, 'token': 30675, 'token_str': 'ĠFirefox'}, {'sequence': '<s>I found a bug in Gmail</s>', 'score': 0.03430333733558655, 'token': 29004, 'token_str': 'ĠGmail'}, {'sequence': '<s>I found a bug in WordPress</s>', 'score': 0.028388172388076782, 'token': 33398, 'token_str': 'ĠWordPress'}, {'sequence': '<s>I found a bug in Java</s>', 'score': 0.02571324072778225, 'token': 24549, 'token_str': 'ĠJava'}, {'sequence': '<s>I found a bug in Python</s>', 'score': 0.01953786611557007, 'token': 31886, 'token_str': 'ĠPython'}]\r\n```" ]
1,594
1,598
1,594
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): CodeBERT Language I am using the model on (English, Chinese ...): Code The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: This is the right code and right outputs: ```python from transformers import RobertaConfig, RobertaTokenizer, RobertaForMaskedLM, pipeline model = RobertaForMaskedLM.from_pretrained('microsoft/codebert-base-mlm') tokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base-mlm') CODE = "if (x is not None) <mask> (x>1)" fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) outputs = fill_mask(CODE) print(outputs) ``` Output: ```python [{'sequence': '<s>if (x is not None) and(x>1)</s>', 'score': 0.7236990928649902, 'token': 8, 'token_str': 'Ġand'}, {'sequence': '<s>if (x is not None) &(x>1)</s>', 'score': 0.10633797943592072, 'token': 359, 'token_str': 'Ġ&'}, {'sequence': '<s>if (x is not None)and(x>1)</s>', 'score': 0.021604137495160103, 'token': 463, 'token_str': 'and'}, {'sequence': '<s>if (x is not None) AND(x>1)</s>', 'score': 0.02122747339308262, 'token': 4248, 'token_str': 'ĠAND'}, {'sequence': '<s>if (x is not None) if(x>1)</s>', 'score': 0.016991324722766876, 'token': 114, 'token_str': 'Ġif'}] ``` But if we load the model with `RobertaModel` and proceed with the same pipeline: ```python from transformers import RobertaConfig, RobertaTokenizer, RobertaModel, pipeline model = RobertaModel.from_pretrained('microsoft/codebert-base-mlm') tokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base-mlm') CODE = "if (x is not None) <mask> (x>1)" fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer) outputs = fill_mask(CODE) print(outputs) ``` Then the output makes no sense at all: ```python [{'sequence': '<s>if (x is not None) real(x>1)</s>', 'score': 0.9961338043212891, 'token': 588, 'token_str': 'Ġreal'}, {'sequence': '<s>if (x is not None)n(x>1)</s>', 'score': 1.70519979292294e-05, 'token': 282, 'token_str': 'n'}, {'sequence': '<s>if (x is not None) security(x>1)</s>', 'score': 1.5919968063826673e-05, 'token': 573, 'token_str': 'Ġsecurity'}, {'sequence': '<s>if (x is not None) Saturday(x>1)</s>', 'score': 1.5472969607799314e-05, 'token': 378, 'token_str': 'ĠSaturday'}, {'sequence': '<s>if (x is not None) here(x>1)</s>', 'score': 1.543204598419834e-05, 'token': 259, 'token_str': 'Ġhere'}] ``` - `transformers` version: 3.0.1 - Platform: Colab - Python version: Doesn't matter - PyTorch version (GPU?): Doesn't matter - Tensorflow version (GPU?): Doesn't matter - Using GPU in script?: Doesn't matter - Using distributed or parallel set-up in script?: Doesn't matter
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5678/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5677
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5677/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5677/comments
https://api.github.com/repos/huggingface/transformers/issues/5677/events
https://github.com/huggingface/transformers/pull/5677
655,132,715
MDExOlB1bGxSZXF1ZXN0NDQ3NzE3ODE1
5,677
[WIP] Added indexes in grouped entity NER
{ "login": "prithvikannan", "id": 46332835, "node_id": "MDQ6VXNlcjQ2MzMyODM1", "avatar_url": "https://avatars.githubusercontent.com/u/46332835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prithvikannan", "html_url": "https://github.com/prithvikannan", "followers_url": "https://api.github.com/users/prithvikannan/followers", "following_url": "https://api.github.com/users/prithvikannan/following{/other_user}", "gists_url": "https://api.github.com/users/prithvikannan/gists{/gist_id}", "starred_url": "https://api.github.com/users/prithvikannan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prithvikannan/subscriptions", "organizations_url": "https://api.github.com/users/prithvikannan/orgs", "repos_url": "https://api.github.com/users/prithvikannan/repos", "events_url": "https://api.github.com/users/prithvikannan/events{/privacy}", "received_events_url": "https://api.github.com/users/prithvikannan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I think this would be a nice addition as well. Most of the tests are failing because they were not adapted to your addition. Do you mind adapting them?\r\n\r\nPS: would you mind changing `indexes` to `indices`? That's what we try to use in the repository for the plural of index :)", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=h1) Report\n> Merging [#5677](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **increase** coverage by `0.39%`.\n> The diff coverage is `80.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5677/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5677 +/- ##\n==========================================\n+ Coverage 78.11% 78.51% +0.39% \n==========================================\n Files 146 146 \n Lines 25983 26326 +343 \n==========================================\n+ Hits 20297 20669 +372 \n+ Misses 5686 5657 -29 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `65.03% <50.00%> (+3.49%)` | :arrow_up: |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <100.00%> (ø)` | |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `81.88% <100.00%> (+7.87%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `72.72% <0.00%> (-3.75%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `97.41% <0.00%> (-1.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.21% <0.00%> (-0.51%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.13% <0.00%> (-0.22%)` | :arrow_down: 
|\n| ... and [35 more](https://codecov.io/gh/huggingface/transformers/pull/5677/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=footer). Last update [7fad617...516926a](https://codecov.io/gh/huggingface/transformers/pull/5677?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Made changes suggested by @LysandreJik, then rebased. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,601
1,601
NONE
null
Based on [issue #5676](https://github.com/huggingface/transformers/issues/5676). Any application that requires users to locate grouped named entities needs some sort of index. This feature is present in the standard NER pipeline and should exist in the grouped-entity NER pipeline as well. This is a very small addition to the pipeline and covers a use case relevant to many developers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5677/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5677", "html_url": "https://github.com/huggingface/transformers/pull/5677", "diff_url": "https://github.com/huggingface/transformers/pull/5677.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5677.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5676
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5676/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5676/comments
https://api.github.com/repos/huggingface/transformers/issues/5676/events
https://github.com/huggingface/transformers/issues/5676
655,127,271
MDU6SXNzdWU2NTUxMjcyNzE=
5,676
Add indexes to grouped entity NER pipeline
{ "login": "prithvikannan", "id": 46332835, "node_id": "MDQ6VXNlcjQ2MzMyODM1", "avatar_url": "https://avatars.githubusercontent.com/u/46332835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prithvikannan", "html_url": "https://github.com/prithvikannan", "followers_url": "https://api.github.com/users/prithvikannan/followers", "following_url": "https://api.github.com/users/prithvikannan/following{/other_user}", "gists_url": "https://api.github.com/users/prithvikannan/gists{/gist_id}", "starred_url": "https://api.github.com/users/prithvikannan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prithvikannan/subscriptions", "organizations_url": "https://api.github.com/users/prithvikannan/orgs", "repos_url": "https://api.github.com/users/prithvikannan/repos", "events_url": "https://api.github.com/users/prithvikannan/events{/privacy}", "received_events_url": "https://api.github.com/users/prithvikannan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I am facing the same issue, Does this issue got fixed", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Fixed by https://github.com/huggingface/transformers/pull/8781" ]
1,594
1,607
1,606
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> **There should be indexes in the output of the grouped entity NER pipeline** The standard NER pipeline from transformers outputs entities that contain the word, score, entity type, and index. The following snippet demonstrates the normal behavior of the NER pipeline with the default `grouped_entities=False` option. ```python from transformers import pipeline nlp_without_grouping = pipeline("ner") sequence = "Hugging Face Inc. is a company based in New York City." print(nlp_without_grouping(sequence)) [ {'word': 'Hu', 'score': 0.9992662668228149, 'entity': 'I-ORG', 'index': 1}, {'word': '##gging', 'score': 0.9808881878852844, 'entity': 'I-ORG', 'index': 2}, {'word': 'Face', 'score': 0.9953625202178955, 'entity': 'I-ORG', 'index': 3}, {'word': 'Inc', 'score': 0.9993382096290588, 'entity': 'I-ORG', 'index': 4}, {'word': 'New', 'score': 0.9990268349647522, 'entity': 'I-LOC', 'index': 11}, {'word': 'York', 'score': 0.9988483190536499, 'entity': 'I-LOC', 'index': 12}, {'word': 'City', 'score': 0.9991773366928101, 'entity': 'I-LOC', 'index': 13} ] ``` However, the NER pipeline with `grouped_entities=True` outputs only word, score, and entity type. Here's the code snippet and output. There's also the problem of 'New York City' being duplicated, but I will address that in a new issue. ```python from transformers import pipeline nlp_with_grouping = pipeline("ner", grouped_entities=True) sequence = "Hugging Face Inc. is a company based in New York City." print(nlp_with_grouping(sequence)) [ {'entity_group': 'I-ORG', 'score': 0.9937137961387634, 'word': 'Hugging Face Inc'}, {'entity_group': 'I-LOC', 'score': 0.9990174969037374, 'word': 'New York City'}, {'entity_group': 'I-LOC', 'score': 0.9990174969037374, 'word': 'New York City'} ] ``` I believe that the grouped entities returned should also include the tokens of the entities. Sample output would look as such ```python [ {'entity_group': 'I-ORG', 'score': 0.9930560886859894, 'word': 'Hugging Face Inc', 'indexes': [1, 2, 3, 4]}, {'entity_group': 'I-LOC', 'score': 0.998809814453125, 'word': 'New York City', 'indexes': [11, 12, 13]}, {'entity_group': 'I-LOC', 'score': 0.998809814453125, 'word': 'New York City', 'indexes': [11, 12, 13]} ] ``` ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> **Any application that requires users to locate grouped named entities would require some sort of index.** This feature is present in the standard NER pipeline and should also exist in the grouped entity NER pipeline as well. In my case, I am trying to append the type to the text right after the named entity ("Apple" would become "Apple \<I-ORG\>") so I need to be able to locate the named entity within my phrase. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? 
Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> I have been able to fix this by adding two lines to `group_sub_entities` function https://github.com/huggingface/transformers/blob/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024/src/transformers/pipelines.py#L1042 ```python def group_sub_entities(self, entities: List[dict]) -> dict: """ Returns grouped sub entities """ # Get the first entity in the entity group entity = entities[0]["entity"] scores = np.mean([entity["score"] for entity in entities]) tokens = [entity["word"] for entity in entities] indexes = [entity["index"] for entity in entities] # my added line entity_group = { "entity_group": entity, "score": np.mean(scores), "word": self.tokenizer.convert_tokens_to_string(tokens), "indexes": indexes # my added line } return entity_group ```
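Assuming the patched `group_sub_entities` above (so each group carries an `indexes` list), here is a sketch of the motivating use case, appending the type right after each entity. The `- 1` offset is an assumption that the tokenizer prepends a single special token such as `[CLS]`, which is why pipeline indexes start at 1:

```python
from transformers import pipeline

nlp = pipeline("ner", grouped_entities=True)  # assumes the patch above is applied
sequence = "Hugging Face Inc. is a company based in New York City."

tokens = nlp.tokenizer.tokenize(sequence)
# Map token-list position -> tag to append after that token. Pipeline indexes
# count from 1 because position 0 is the [CLS] token, hence the - 1.
tags = {group["indexes"][-1] - 1: group["entity_group"] for group in nlp(sequence)}

annotated = []
for i, token in enumerate(tokens):
    annotated.append(token)
    if i in tags:
        annotated.append(f"<{tags[i]}>")

print(nlp.tokenizer.convert_tokens_to_string(annotated))
# e.g. "Hugging Face Inc <I-ORG> . is a company based in New York City <I-LOC> ."
```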
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5676/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5676/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5675
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5675/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5675/comments
https://api.github.com/repos/huggingface/transformers/issues/5675/events
https://github.com/huggingface/transformers/issues/5675
655,090,893
MDU6SXNzdWU2NTUwOTA4OTM=
5,675
Deepset model not loading using default code
{ "login": "gbanerjee01", "id": 17485108, "node_id": "MDQ6VXNlcjE3NDg1MTA4", "avatar_url": "https://avatars.githubusercontent.com/u/17485108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gbanerjee01", "html_url": "https://github.com/gbanerjee01", "followers_url": "https://api.github.com/users/gbanerjee01/followers", "following_url": "https://api.github.com/users/gbanerjee01/following{/other_user}", "gists_url": "https://api.github.com/users/gbanerjee01/gists{/gist_id}", "starred_url": "https://api.github.com/users/gbanerjee01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gbanerjee01/subscriptions", "organizations_url": "https://api.github.com/users/gbanerjee01/orgs", "repos_url": "https://api.github.com/users/gbanerjee01/repos", "events_url": "https://api.github.com/users/gbanerjee01/events{/privacy}", "received_events_url": "https://api.github.com/users/gbanerjee01/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like the inference on the model hub works if it helps: https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2?text=Where+do+I+live%3F&context=My+name+is+Wolfgang+and+I+live+in+Berlin" ]
1,594
1,594
1,594
NONE
null
# 🐛 Bug ## Information Model I am using (Bert): Bert Language I am using the model on (English): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run this script below ``` from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") model = AutoModelForQuestionAnswering.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") ``` 2. Notice this error stack: ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-29-2a5e47891fb0> in <module> ----> 1 tokenizer = AutoTokenizer.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") 2 model = AutoModelForQuestionAnswering.from_pretrained("deepset/bert-large-uncased-whole-word-masking-squad2") 3 4 reg_tokenizer = RegexpTokenizer(r'\w+') ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 107 return RobertaTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 108 elif 'bert' in pretrained_model_name_or_path: --> 109 return BertTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 110 elif 'openai-gpt' in pretrained_model_name_or_path: 111 return OpenAIGPTTokenizer.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs) 280 281 """ --> 282 return cls._from_pretrained(*inputs, **kwargs) 283 284 ~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 344 pretrained_model_name_or_path, ', '.join(s3_models), 345 pretrained_model_name_or_path, --> 346 list(cls.vocab_files_names.values()))) 347 348 # Get files from url, cache, or disk depending on the case OSError: Model name 'deepset/bert-large-uncased-whole-word-masking-squad2' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased). We assumed 'deepset/bert-large-uncased-whole-word-masking-squad2' was a path or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url. ``` ## Expected behavior I switched to a conda environment and reinstalled transformers package with conda. Before, having used just pip install, this segment of code was working. Now, it no longer even finds the relevant model. The expected behavior is to load this specific model. 
## Environment info - `transformers` version: 2.11.0 - Platform: Linux-3.10.0-1062.12.1.el7.x86_64-x86_64-with-redhat-7.8-Maipo - Python version: 3.6.8 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: n/a - Using distributed or parallel set-up in script?: no
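One likely explanation, not confirmed in this thread: the traceback comes from an old `tokenization_auto.py` that only pattern-matched a fixed list of model names, so the conda environment probably resolved an older `transformers` than the 2.11.0 reported. A quick sanity check and workaround, assuming a recent release is acceptable:

```python
import transformers

# Hub identifiers of the form "organization/model" require a reasonably
# recent release; verify what the environment actually imported.
print(transformers.__version__)

from transformers import AutoModelForQuestionAnswering, AutoTokenizer

name = "deepset/bert-large-uncased-whole-word-masking-squad2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)
```

If the printed version is old, upgrading inside the same environment (e.g. `pip install -U transformers`) should make the identifier resolve.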
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5675/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5675/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5674
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5674/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5674/comments
https://api.github.com/repos/huggingface/transformers/issues/5674/events
https://github.com/huggingface/transformers/issues/5674
655,050,714
MDU6SXNzdWU2NTUwNTA3MTQ=
5,674
Can't get BART to generate EOS token.
{ "login": "marcotcr", "id": 698010, "node_id": "MDQ6VXNlcjY5ODAxMA==", "avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcotcr", "html_url": "https://github.com/marcotcr", "followers_url": "https://api.github.com/users/marcotcr/followers", "following_url": "https://api.github.com/users/marcotcr/following{/other_user}", "gists_url": "https://api.github.com/users/marcotcr/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcotcr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcotcr/subscriptions", "organizations_url": "https://api.github.com/users/marcotcr/orgs", "repos_url": "https://api.github.com/users/marcotcr/repos", "events_url": "https://api.github.com/users/marcotcr/events{/privacy}", "received_events_url": "https://api.github.com/users/marcotcr/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Hmm, as far as I know the pretrained BART models do produce the EOS token and can actually generate pretty short summaries. For example if you look into the test `test_cnn_summarization_same_as_fairseq` you can see that the summaries are actually quite small (some are much smaller than `max_length`)\r\n\r\nAlso you could try to replace this line:\r\n```python\r\na = model.generate(**inputs, early_stopping=True, num_beams=4, max_length=100, early_stoppy=True)\r\n```\r\n\r\nwith \r\n\r\n```python\r\na = model.generate(**inputs, early_stopping=True, num_beams=4, max_length=100, length_penalty=2.0, no_repeat_ngram_size=3)\r\n```\r\n\r\nAlso pinging @sshleifer here.", "That test helped me figure it out, thanks. `min_length` is set to some default value, I'm guessing around `50`. If I set `min_length=0`, it works as expected. It sounds from the documentation that `0` should be the default value, so I guess there is a bug either in the documentation or in the code:\r\n> min_length – (optional) int The min length of the sequence to be generated. Between 0 and infinity. Default to 0.\r\n\r\nI'd close this, but I'm leaving it open just in case you guys want to fix this mismatch.\r\nThanks,", "Interesting! Usually `config.min_length` should be set to 0 by default. Not sure why it is not set to `0` in your case. Thanks for the feedback though!", "There is a bigger problem somewhere I suspect. This started happening around generation_utils.py time, and is happening in the blenderbot PR as well. Also see #5656 \r\nWe should be able to generate EOS with min_length=50. \r\nI'll try to take a look later in the week if the mystery remains unsolved. \r\n\r\nDoes early_stopping=True/False make any difference?", "No early_stopping=True/False is not making any difference.\r\nAnd setting config.min_length = 0 is still not working for my case as the fine-tuned model still producing truncated outputs.", "@tromedlov22 you also tried BART? This issue is about BART.", "@marcotcr the confusion is caused by the fact that by default we use `config.task_specific_params['summarization']`.\r\nThe way to override is to save the desired config locally.\r\n\r\n```python\r\nIn [8]: AutoConfig.from_pretrained('facebook/bart-large').task_specific_params['summarization']\r\nOut[8]:\r\n{'early_stopping': True,\r\n 'length_penalty': 2.0,\r\n 'max_length': 142,\r\n 'min_length': 56,\r\n 'no_repeat_ngram_size': 3,\r\n 'num_beams': 4}\r\n```\r\n\r\nWe should probably add a `logger.info` statement that we are using task specific params.\r\n", "Same issue here. I've been using BART-large-cnn (and -xsum) to finetune on my dataset. By setting `max_tokens=620`, and `min_tokens=0, 300, 500, and 600`, the BART still produces truncated (incomplete) sentences, which does not make sense to me. Any workaround/solution to this? @sshleifer Thanks!" ]
1,594
1,597
1,595
NONE
null
# 🐛 Bug I finetuned Bart on a few seq2seq tasks. It seems to learn the right thing, but it never seems to stop generating text unless I set `max_length`, i.e. it never generates the EOS token on its own. This seems to be the case for the pretrained model as well: if I run the example [here](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration), it always produces very long summaries even if the input text is quite small. To make the bug easy to reproduce, I set up a toy task below where I am finetuning Bart to just repeat the input. ## Information Model I am using (Bert, XLNet ...): Bart Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a dataset for seq2seq training where the target is a copy of the source, e.g. below I create a dataset of sentences with `4,000` examples in the training set. ```python import nlp d = nlp.load_dataset('snli') d = list(set(d['train']['premise'][:20000])) import os folder = '/tmp/bartz2' if os.path.exists(folder): os.system('rm -rf %s' % folder) os.mkdir(folder) N = 4000 f = open(os.path.join(folder, 'train.source'), 'w') f.write('\n'.join(d[:N])) f = open(os.path.join(folder, 'val.source'), 'w') f.write('\n'.join(d[N:])) f.close() f = open(os.path.join(folder, 'test.source'), 'w') f.write('\n'.join(d[N:])) f.close() f = open(os.path.join(folder, 'train.target'), 'w') f.write('\n'.join(d[:N])) f.close() f = open(os.path.join(folder, 'val.target'), 'w') f.write('\n'.join(d[N:])) f.close() f = open(os.path.join(folder, 'test.target'), 'w') f.write('\n'.join(d[N:])) f.close() ``` 2. Finetune bart on this dataset, i.e. ```sh ./finetune.sh --data_dir /tmp/bartz2/ --train_batch_size=4 --eval_batch_size=4 --output_dir=copybart --num_train_epochs 3 --model_name_or_path facebook/bart-large --n_val=1000 --n_test=1000 --task=translation ``` Caveat: I commented out `--fp16` in `finetune.sh` 3. Generate text with the finetuned model ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained(f'/home/marcotcr/work/transformers/examples/seq2seq/copybart//best_tfmr') tokenizer = AutoTokenizer.from_pretrained(f'/home/marcotcr/work/transformers/examples/seq2seq/copybart/best_tfmr') model.to('cuda'); for text in d[N+1:N+10]: print(text) inputs = tokenizer([text], max_length=1024, return_tensors='pt', truncation=True).to('cuda') a = model.generate(**inputs, early_stopping=True, num_beams=4, max_length=100, early_stoppy=True) dec = tokenizer.batch_decode(a, skip_special_tokens=True, clean_up_tokenization_spaces=False) print(dec[0]) print() ``` Output (first few lines): ``` A black man walks away with a large basket full of items on his head. A black man walks away with a large basket full of items on his head of a woman-on-shoreline-a-chosie-centre-of items in a basket full-items on their head.A fellow-side of the basket.- A person is doing a bicycle trick over the rocks while a bystander gives a thumbs up. A person is doing a bicycle trick over the rocks while a bystander gives a thumbs up.A couple of people are afoot in a bicycle trunk over the Rocks while a passerer gives an thumbs up to the bystander is a thumb up.T-up. 
Female with long black hair and wearing eyeglasses, a blue shirt, sitting next to a male in a striped shirt, at a table, cutting into a chocolate cake with a single red candle in it. Female with long black hair and wearing eyeglasses, a blue shirt, sitting next to a male in a striped shirt, at a table, cutting into a chocolate cake with a single red candle in it.The cake is a blue shirts, a purple-colored tissue- ``` ## Expected behavior I would expect it to emit the EOS token after copying the sentence. It seems to learn the right behavior up to that point (it copies the sentence), and I saw this in my real seq2seq tasks as well (i.e. it did the right thing but failed to stop). This behavior is also present in `test_generations.txt`. I also tried training on a dataset where the target sentences ended in `</s>`, but that didn't change it. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 (pulled from master this morning) - Platform: Linux-5.3.0-61-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.8 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
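Pulling the resolution from the comment thread into one sketch: `facebook/bart-large` ships `config.task_specific_params['summarization']` with `min_length: 56`, which forbids EOS before 56 tokens unless overridden, and passing `min_length=0` at generation time restores short outputs. (The example sentence is taken from the report above; the stray `early_stoppy=True` in the original snippet was presumably a typo for `early_stopping`.)

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# The silent culprit: generation defaults pulled in for summarization.
print(model.config.task_specific_params["summarization"])
# {'early_stopping': True, 'length_penalty': 2.0, 'max_length': 142,
#  'min_length': 56, 'no_repeat_ngram_size': 3, 'num_beams': 4}

text = "A black man walks away with a large basket full of items on his head."
inputs = tokenizer([text], return_tensors="pt", truncation=True, max_length=1024)

# min_length=0 lets the decoder emit EOS as soon as it is ready to stop.
ids = model.generate(
    **inputs, num_beams=4, max_length=100, min_length=0, early_stopping=True
)
print(tokenizer.batch_decode(ids, skip_special_tokens=True)[0])
```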
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5674/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5673
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5673/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5673/comments
https://api.github.com/repos/huggingface/transformers/issues/5673/events
https://github.com/huggingface/transformers/pull/5673
655,038,116
MDExOlB1bGxSZXF1ZXN0NDQ3NjQ1Njc0
5,673
Document model outputs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=h1) Report\n> Merging [#5673](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/201d23f2854c7a13d3c32df4947af9fd7365c2cd&el=desc) will **decrease** coverage by `0.48%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5673/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5673 +/- ##\n==========================================\n- Coverage 77.34% 76.85% -0.49% \n==========================================\n Files 146 146 \n Lines 25948 25949 +1 \n==========================================\n- Hits 20070 19944 -126 \n- Misses 5878 6005 +127 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <ø> (ø)` | |\n| [src/transformers/modeling\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.45% <ø> (ø)` | |\n| [src/transformers/modeling\\_outputs.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vdXRwdXRzLnB5) | `100.00% <ø> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.19% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `19.81% <0.00%> (-79.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |\n| ... 
and [12 more](https://codecov.io/gh/huggingface/transformers/pull/5673/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=footer). Last update [223084e...858c827](https://codecov.io/gh/huggingface/transformers/pull/5673?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I am so glad you noticed since this is the bit I spent the most time on :-)" ]
1,594
1,594
1,594
COLLABORATOR
null
This PR adds proper documentation for all model outputs introduced in #5226. There are a few fixes to some docstrings for proper sphinx formatting, and a tiny change in the function that generates docstrings so that it references `ModelOutput` types by their full names (since they are not in the init).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5673/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5673/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5673", "html_url": "https://github.com/huggingface/transformers/pull/5673", "diff_url": "https://github.com/huggingface/transformers/pull/5673.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5673.patch", "merged_at": 1594416663000 }
https://api.github.com/repos/huggingface/transformers/issues/5672
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5672/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5672/comments
https://api.github.com/repos/huggingface/transformers/issues/5672/events
https://github.com/huggingface/transformers/pull/5672
654,967,961
MDExOlB1bGxSZXF1ZXN0NDQ3NTg5Mzg0
5,672
Added first description of the model
{ "login": "onepointconsulting", "id": 35300398, "node_id": "MDQ6VXNlcjM1MzAwMzk4", "avatar_url": "https://avatars.githubusercontent.com/u/35300398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/onepointconsulting", "html_url": "https://github.com/onepointconsulting", "followers_url": "https://api.github.com/users/onepointconsulting/followers", "following_url": "https://api.github.com/users/onepointconsulting/following{/other_user}", "gists_url": "https://api.github.com/users/onepointconsulting/gists{/gist_id}", "starred_url": "https://api.github.com/users/onepointconsulting/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/onepointconsulting/subscriptions", "organizations_url": "https://api.github.com/users/onepointconsulting/orgs", "repos_url": "https://api.github.com/users/onepointconsulting/repos", "events_url": "https://api.github.com/users/onepointconsulting/events{/privacy}", "received_events_url": "https://api.github.com/users/onepointconsulting/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=h1) Report\n> Merging [#5672](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/201d23f2854c7a13d3c32df4947af9fd7365c2cd&el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5672/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5672 +/- ##\n==========================================\n- Coverage 77.34% 77.22% -0.13% \n==========================================\n Files 146 146 \n Lines 25948 25948 \n==========================================\n- Hits 20070 20038 -32 \n- Misses 5878 5910 +32 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5672/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=footer). Last update [223084e...4c2abf8](https://codecov.io/gh/huggingface/transformers/pull/5672?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Added a general description, information about the tags, and some example usage code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5672/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5672/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5672", "html_url": "https://github.com/huggingface/transformers/pull/5672", "diff_url": "https://github.com/huggingface/transformers/pull/5672.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5672.patch", "merged_at": 1594623229000 }
https://api.github.com/repos/huggingface/transformers/issues/5671
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5671/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5671/comments
https://api.github.com/repos/huggingface/transformers/issues/5671/events
https://github.com/huggingface/transformers/pull/5671
654,947,771
MDExOlB1bGxSZXF1ZXN0NDQ3NTcyOTU1
5,671
Deprecate old past arguments
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=h1) Report\n> Merging [#5671](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/201d23f2854c7a13d3c32df4947af9fd7365c2cd&el=desc) will **decrease** coverage by `0.14%`.\n> The diff coverage is `68.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5671/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5671 +/- ##\n==========================================\n- Coverage 77.34% 77.20% -0.15% \n==========================================\n Files 146 146 \n Lines 25948 25981 +33 \n==========================================\n- Hits 20070 20059 -11 \n- Misses 5878 5922 +44 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `82.04% <ø> (ø)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.39% <ø> (ø)` | |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <ø> (ø)` | |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `81.55% <ø> (ø)` | |\n| [src/transformers/modeling\\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `89.71% <ø> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.31% <ø> (ø)` | |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.78% <ø> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.80% <60.00%> (-0.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.11% <60.00%> (-0.55%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.37% <71.42%> (-1.54%)` | :arrow_down: |\n| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/5671/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=footer). Last update [201d23f...f51c43e](https://codecov.io/gh/huggingface/transformers/pull/5671?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "i was able to use gpt2 using trainer 10-12 hours ago but i am getting an error now, i think the \"past\" variable replacing is not consistent with the trainer.py class that's why i am getting this error(now i am working with 3.0.1)\r\n`\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-15-3435b262f1ae> in <module>\r\n----> 1 trainer.train()\r\n\r\n~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path)\r\n 497 continue\r\n 498 \r\n--> 499 tr_loss += self._training_step(model, inputs, optimizer)\r\n 500 \r\n 501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or (\r\n\r\n~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer)\r\n 620 inputs[\"mems\"] = self._past\r\n 621 \r\n--> 622 outputs = model(**inputs)\r\n 623 loss = outputs[0] # model outputs are always tuple in transformers (see doc)\r\n 624 \r\n\r\n~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 530 result = self._slow_forward(*input, **kwargs)\r\n 531 else:\r\n--> 532 result = self.forward(*input, **kwargs)\r\n 533 for hook in self._forward_hooks.values():\r\n 534 hook_result = hook(self, input, result)\r\n\r\n~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)\r\n 151 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])\r\n 152 outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n--> 153 return self.gather(outputs, self.output_device)\r\n 154 \r\n 155 def replicate(self, module, device_ids):\r\n\r\n~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in gather(self, outputs, output_device)\r\n 163 \r\n 164 def gather(self, outputs, output_device):\r\n--> 165 return gather(outputs, output_device, dim=self.dim)\r\n 166 \r\n 167 \r\n\r\n~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in gather(outputs, target_device, dim)\r\n 66 # Setting the function to None clears the refcycle.\r\n 67 try:\r\n---> 68 res = gather_map(outputs)\r\n 69 finally:\r\n 70 gather_map = None\r\n\r\n~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py in gather_map(outputs)\r\n 61 return type(out)(((k, gather_map([d[k] for d in outputs]))\r\n 62 for k in out))\r\n---> 63 return type(out)(map(gather_map, zip(*outputs)))\r\n 64 \r\n 65 # Recursive function calls like this create reference cycles.\r\n\r\nTypeError: __init__() missing 1 required positional argument: 'logits'\r\n`", "This is not linked to this PR, as the Trainer never uses past. It seems linked to the model output PR (#5226). 
You need to instantiate your model by passing `return_tuple=True` to avoid the new behavior, or by adding it to your config like this:\r\n```\r\nconfig.return_tuple = True\r\n```\r\n", "Then why the trainer that works in 3.0.1 does not work after this PR merge.\nI am new to this library and trying to understand it would be very helpful\nif you explained a bit.\nThanks\n\nOn Sat, Jul 11, 2020, 6:10 PM Sylvain Gugger <[email protected]>\nwrote:\n\n> This is not linked to this PR, as the Trainer never uses past. It seems\n> linked to the model output PR (#5226\n> <https://github.com/huggingface/transformers/issues/5226>) will push a\n> quick fix soon.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/5671#issuecomment-657054389>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AHYJOHP4KBK2FJA2JC2J7V3R3BJEDANCNFSM4OWZ67WQ>\n> .\n>\n", "I think you mistake the PR that caused the problem, #5226 was merged just a bit before." ]
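A minimal sketch of the workaround quoted in the comment above, assuming transformers 3.0.x where model outputs default to `ModelOutput` objects; the `gpt2` checkpoint here is only an illustrative choice:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Force tuple outputs (the pre-3.0 behavior) so older training loops and
# torch.nn.DataParallel's gather step keep working.
config = GPT2Config.from_pretrained("gpt2")
config.return_tuple = True
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config)

# Alternatively, pass the flag directly, as the comment suggests:
# model = GPT2LMHeadModel.from_pretrained("gpt2", return_tuple=True)
```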
1,594
1,594
1,594
COLLABORATOR
null
As discussed internally, the previous arguments `past`, `decoder_cached_states` and `decoder_past_key_value_states` are deprecated and replaced by either `past_key_values` or `decoder_past_key_values`. This also fixes the mentions of those arguments in the input docstrings; the output docstrings already refer to the correct arg (this was done in #5226). In passing, this replaces `DeprecationWarning` with `FutureWarning` for the other deprecated args, since that is the appropriate warning class for end users.
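For readers skimming the thread, here is a self-contained sketch of the deprecation pattern the PR describes. The helper name is hypothetical; only the `past`/`past_key_values` argument names come from the PR:

```python
import warnings

def resolve_past_argument(past_key_values=None, **deprecated_kwargs):
    # Hypothetical helper illustrating the FutureWarning pattern: accept the
    # old `past` keyword, warn once, and forward its value to the new name.
    if "past" in deprecated_kwargs:
        warnings.warn(
            "The `past` argument is deprecated and will be removed in a future "
            "version, use `past_key_values` instead.",
            FutureWarning,
        )
        past_key_values = deprecated_kwargs.pop("past")
    return past_key_values

print(resolve_past_argument(past=[("k", "v")]))  # warns, prints [('k', 'v')]
```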
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5671/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5671/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5671", "html_url": "https://github.com/huggingface/transformers/pull/5671", "diff_url": "https://github.com/huggingface/transformers/pull/5671.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5671.patch", "merged_at": 1594416353000 }
https://api.github.com/repos/huggingface/transformers/issues/5670
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5670/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5670/comments
https://api.github.com/repos/huggingface/transformers/issues/5670/events
https://github.com/huggingface/transformers/pull/5670
654,942,191
MDExOlB1bGxSZXF1ZXN0NDQ3NTY4NjI0
5,670
[WIP] add DeFormer (ACL 2020) example
{ "login": "csarron", "id": 8440740, "node_id": "MDQ6VXNlcjg0NDA3NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8440740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/csarron", "html_url": "https://github.com/csarron", "followers_url": "https://api.github.com/users/csarron/followers", "following_url": "https://api.github.com/users/csarron/following{/other_user}", "gists_url": "https://api.github.com/users/csarron/gists{/gist_id}", "starred_url": "https://api.github.com/users/csarron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/csarron/subscriptions", "organizations_url": "https://api.github.com/users/csarron/orgs", "repos_url": "https://api.github.com/users/csarron/repos", "events_url": "https://api.github.com/users/csarron/events{/privacy}", "received_events_url": "https://api.github.com/users/csarron/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi Qingqing, good to see you here! Yeah, I checked your official code repo and found it isn't based on 🤗's Transformers. It'll be nice if you can reimplement it and add Deformer in our library! \r\n\r\n~~@thomwolf Re. `🤗nlp`, should we use `🤗nlp` in `transformers` right away? Since it would make `🤗nlp` a dependency of `transformers/examples`.~~\r\n\r\nNever mind, it already is.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
CONTRIBUTOR
null
Hi there, I'm one of the authors of the [DeFormer paper](https://www.aclweb.org/anthology/2020.acl-main.411/), and I'd like to adapt the [DeFormer codebase](https://github.com/StonyBrookNLP/deformer) to this awesome transformers library. To get the adaptation done, I put a few high-level todos in the README (also listed here). I plan to have a working example by August but don't have a precise timeline yet. Let me know what you think. Thanks. - [ ] use HF preprocessing (use HF nlp library) - [ ] convert original TF DeFormer to HF version - [ ] convert pre-trained checkpoints - [ ] compare and test accuracy for SQuAD, RACE, and BoolQ - [ ] prepare demo and upload to model cards
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5670/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 6, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5670/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5670", "html_url": "https://github.com/huggingface/transformers/pull/5670", "diff_url": "https://github.com/huggingface/transformers/pull/5670.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5670.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5669
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5669/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5669/comments
https://api.github.com/repos/huggingface/transformers/issues/5669/events
https://github.com/huggingface/transformers/pull/5669
654,933,180
MDExOlB1bGxSZXF1ZXN0NDQ3NTYxNDU1
5,669
[squad] add version tag to squad cache
{ "login": "lazovich", "id": 678679, "node_id": "MDQ6VXNlcjY3ODY3OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/678679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lazovich", "html_url": "https://github.com/lazovich", "followers_url": "https://api.github.com/users/lazovich/followers", "following_url": "https://api.github.com/users/lazovich/following{/other_user}", "gists_url": "https://api.github.com/users/lazovich/gists{/gist_id}", "starred_url": "https://api.github.com/users/lazovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lazovich/subscriptions", "organizations_url": "https://api.github.com/users/lazovich/orgs", "repos_url": "https://api.github.com/users/lazovich/repos", "events_url": "https://api.github.com/users/lazovich/events{/privacy}", "received_events_url": "https://api.github.com/users/lazovich/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=h1) Report\n> Merging [#5669](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.52%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5669/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5669 +/- ##\n==========================================\n+ Coverage 77.01% 77.53% +0.52% \n==========================================\n Files 128 145 +17 \n Lines 21615 25367 +3752 \n==========================================\n+ Hits 16646 19668 +3022 \n- Misses 4969 5699 +730 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <ø> (-3.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: |\n| ... 
and [118 more](https://codecov.io/gh/huggingface/transformers/pull/5669/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=footer). Last update [bfacb2e...77a74f7](https://codecov.io/gh/huggingface/transformers/pull/5669?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
This diff adds a version number to the SQuAD cache file name so that cached SQuAD v1.1 features are not mistakenly read when you request SQuAD v2. Addresses #5668
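A rough sketch of the idea, assuming the cache name is built from the tokenizer name and sequence length as in the SQuAD dataset code; the exact fields and helper name are illustrative, not the PR's actual implementation:

```python
import os

def squad_cache_filename(cache_dir, tokenizer_name, max_seq_length, version_2_with_negative):
    # Encode the dataset version in the file name so SQuAD v1.1 and v2.0
    # features can never collide in the cache directory.
    version_tag = "v2" if version_2_with_negative else "v1"
    return os.path.join(cache_dir, f"cached_{version_tag}_{tokenizer_name}_{max_seq_length}")

print(squad_cache_filename("/tmp/squad", "bert-base-uncased", 384, True))
# -> /tmp/squad/cached_v2_bert-base-uncased_384
```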
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5669/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5669/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5669", "html_url": "https://github.com/huggingface/transformers/pull/5669", "diff_url": "https://github.com/huggingface/transformers/pull/5669.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5669.patch", "merged_at": 1594413262000 }
https://api.github.com/repos/huggingface/transformers/issues/5668
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5668/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5668/comments
https://api.github.com/repos/huggingface/transformers/issues/5668/events
https://github.com/huggingface/transformers/issues/5668
654,929,579
MDU6SXNzdWU2NTQ5Mjk1Nzk=
5,668
SquadDataset should use version number in cache file name
{ "login": "lazovich", "id": 678679, "node_id": "MDQ6VXNlcjY3ODY3OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/678679?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lazovich", "html_url": "https://github.com/lazovich", "followers_url": "https://api.github.com/users/lazovich/followers", "following_url": "https://api.github.com/users/lazovich/following{/other_user}", "gists_url": "https://api.github.com/users/lazovich/gists{/gist_id}", "starred_url": "https://api.github.com/users/lazovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lazovich/subscriptions", "organizations_url": "https://api.github.com/users/lazovich/orgs", "repos_url": "https://api.github.com/users/lazovich/repos", "events_url": "https://api.github.com/users/lazovich/events{/privacy}", "received_events_url": "https://api.github.com/users/lazovich/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "closes by your own #5669 Thanks for your contribution :)" ]
1,594
1,594
1,594
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): N/A Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: * [x] my own modified scripts: simple example code given below The task I am working on is: * [x] an official GLUE/SQuAD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce 1. Load `SquadDataset` with `args.version_2_with_negative = False`. You will see the progress bars for creating cached features. 2. Load `SquadDataset` with `args.version_2_with_negative = True`. Rather than seeing it create a new cache for the v2 dataset, you will see it silently reuse the already-created cache file for v1. Example code: ``` import os from transformers import AutoTokenizer from transformers import SquadDataset from transformers import SquadDataTrainingArguments tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') args = SquadDataTrainingArguments() # FIXME: Change this path to your local SQuAD dataset path args.data_dir = os.path.expanduser("~/.torch/nlp/SQuAD") args.version_2_with_negative = False squadv1 = SquadDataset(args, tokenizer) args.version_2_with_negative = True squadv2 = SquadDataset(args, tokenizer) ``` ## Expected behavior Separate cache files should be created for the v1.1 and v2 versions of SQuAD. ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.0.0-1035-azure-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: N/A - Using distributed or parallel set-up in script?: no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5668/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5668/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5667
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5667/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5667/comments
https://api.github.com/repos/huggingface/transformers/issues/5667/events
https://github.com/huggingface/transformers/issues/5667
654,928,851
MDU6SXNzdWU2NTQ5Mjg4NTE=
5,667
pytorch_model.bin file is different after uploading to HuggingFace Models
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Very strange, I just tried the code and it returns `0.8289950489997864`. Maybe you try to re-download the model with:\r\n\r\n```python\r\n# Load the model\r\ntokenizer = AutoTokenizer.from_pretrained(\"johngiorgi/declutr-small\", force_download=True)\r\nmodel = AutoModel.from_pretrained(\"johngiorgi/declutr-small\", force_download=True)\r\n```\r\n\r\nHopefully this helps :)", "Hi @stefan-it,\r\n\r\nI did try with `force_download=True` a bunch of times but it never worked. I updated the example to include that.\r\n\r\nHmm, mind sending the exact code you used to get the `0.8289950489997864`? I had a collegue run the code on their own system and like me they got the wrong answer of: `0.9928748607635498`. I tried on two machines (linux and mac), both produce `0.9928748607635498`.\r\n\r\nJust to be sure, I deleted my conda environment, made a new one and reinstalled `transformers`. Then I deleted the default cache dir (for me this was at `~/.cache/torch/transformers/`). Finally, I tried the code again this time with `force_download=True`. No beans, exactly the same issue and the `semantic_sim` is ~0.99:\r\n\r\n\r\n```bash\r\nIn [1]: import torch\r\n ...: from scipy.spatial.distance import cosine\r\n ...:\r\n ...: from transformers import AutoModel, AutoTokenizer\r\n\r\nIn [2]: tokenizer = AutoTokenizer.from_pretrained(\"johngiorgi/declutr-small\", force_download=True, cache_dir=\"./declutr-small\")\r\n ...: model = AutoModel.from_pretrained(\"johngiorgi/declutr-small\", force_download=True, cache_dir=\"./declutr-small\")\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 547/547 [00:00<00:00, 277kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 798k/798k [00:00<00:00, 4.71MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 3.83MB/s]\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 239/239 [00:00<00:00, 157kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 54.0/54.0 [00:00<00:00, 35.4kB/s]\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 547/547 [00:00<00:00, 403kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 331M/331M [00:14<00:00, 23.2MB/s]\r\n\r\nIn [3]: # Prepare some text to embed\r\n ...: 
text = [\r\n ...: \"A smiling costumed woman is holding an umbrella.\",\r\n ...: \"A happy woman in a fairy costume holds an umbrella.\",\r\n ...: ]\r\n ...: inputs = tokenizer(text, padding=True, truncation=True, return_tensors=\"pt\")\r\n ...:\r\n ...: # Embed the text\r\n ...: with torch.no_grad():\r\n ...: sequence_output, _ = model(**inputs, output_hidden_states=False)\r\n ...:\r\n ...: # Mean pool the token-level embeddings to get sentence-level embeddings\r\n ...: embeddings = torch.sum(\r\n ...: sequence_output * inputs[\"attention_mask\"].unsqueeze(-1), dim=1\r\n ...: ) / torch.clamp(torch.sum(inputs[\"attention_mask\"], dim=1, keepdims=True), min=1e-9)\r\n ...:\r\n ...: # Compute a semantic similarity via the cosine distance\r\n ...: semantic_sim = 1 - cosine(embeddings[0], embeddings[1])\r\n ...: print(semantic_sim) # => ~0.99, NOT the same as the local model!\r\n0.992874801158905\r\n```", "@JohnGiorgi I just ran:\r\n\r\n```python\r\nimport torch\r\nfrom scipy.spatial.distance import cosine\r\n\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\n# Load the model\r\ntokenizer = AutoTokenizer.from_pretrained(\"johngiorgi/declutr-small\", force_download=True)\r\nmodel = AutoModel.from_pretrained(\"johngiorgi/declutr-small\", force_download=True)\r\n\r\n# Prepare some text to embed\r\ntext = [\r\n \"A smiling costumed woman is holding an umbrella.\",\r\n \"A happy woman in a fairy costume holds an umbrella.\",\r\n]\r\ninputs = tokenizer(text, padding=True, truncation=True, return_tensors=\"pt\")\r\n\r\n# Embed the text\r\nwith torch.no_grad():\r\n sequence_output, _ = model(**inputs, output_hidden_states=False)\r\n\r\n# Mean pool the token-level embeddings to get sentence-level embeddings\r\nembeddings = torch.sum(\r\n sequence_output * inputs[\"attention_mask\"].unsqueeze(-1), dim=1\r\n) / torch.clamp(torch.sum(inputs[\"attention_mask\"], dim=1, keepdims=True), min=1e-9)\r\n\r\n# Compute a semantic similarity via the cosine distance\r\nsemantic_sim = 1 - cosine(embeddings[0], embeddings[1])\r\nprint(semantic_sim)\r\n```\r\n\r\nI re-downloaded the model and it still returns 0.8289950489997864 😅", "That is maddening, I literally copy-pasted that code and I get `0.992874801158905` 😢 Thanks anyways for confirming it works somewhere at least!\r\n\r\n<img width=\"1440\" alt=\"image\" src=\"https://user-images.githubusercontent.com/8917831/87212687-d1232300-c2ed-11ea-9039-3b05077be964.png\">\r\n" ]
1,594
1,596
1,596
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): `distilroberta-base` Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behaviour: 1. Save my model with `.save_pretrained()` 2. Download the models `pytorch_model.bin` 3. Check the `diff` between `pytorch_model.bin` _before_ uploading and _after_ downloading, it is not the same. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> I first noticed that a model I have trained produced different outputs when I loaded it from a local directory compared to uploading it to https://huggingface.co/models and downloading it. Unfortunately, the error is a little hard to reproduce as you need access to my saved model. I have uploaded it [here](https://drive.google.com/file/d/1C8okSoS4tJHtZllQ8qIJbmXRLb8ITL6N/view?usp=sharing). With that model downloaded, I compare its outputs before/after uploading: _Before uploading_ (e.g. loading the model from a local directory) ```python import torch from scipy.spatial.distance import cosine from transformers import AutoModel, AutoTokenizer # Load the model tokenizer = AutoTokenizer.from_pretrained("declutr-small") model = AutoModel.from_pretrained("declutr-small") # Prepare some text to embed text = [ "A smiling costumed woman is holding an umbrella.", "A happy woman in a fairy costume holds an umbrella.", ] inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") # Embed the text with torch.no_grad(): sequence_output, _ = model(**inputs, output_hidden_states=False) # Mean pool the token-level embeddings to get sentence-level embeddings embeddings = torch.sum( sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1 ) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9) # Compute a semantic similarity via the cosine distance semantic_sim = 1 - cosine(embeddings[0], embeddings[1]) print(semantic_sim) # => ~0.83 ``` _After uploading_ (e.g. 
loading the model from https://huggingface.co/models) ```python # Load the model tokenizer = AutoTokenizer.from_pretrained("johngiorgi/declutr-small", force_download=True) model = AutoModel.from_pretrained("johngiorgi/declutr-small", force_download=True) # Prepare some text to embed text = [ "A smiling costumed woman is holding an umbrella.", "A happy woman in a fairy costume holds an umbrella.", ] inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") # Embed the text with torch.no_grad(): sequence_output, _ = model(**inputs, output_hidden_states=False) # Mean pool the token-level embeddings to get sentence-level embeddings embeddings = torch.sum( sequence_output * inputs["attention_mask"].unsqueeze(-1), dim=1 ) / torch.clamp(torch.sum(inputs["attention_mask"], dim=1, keepdims=True), min=1e-9) # Compute a semantic similarity via the cosine distance semantic_sim = 1 - cosine(embeddings[0], embeddings[1]) print(semantic_sim) # => ~0.99, NOT the same as the local model! ``` The embeddings must be different, as their semantic similarity is. After some more digging, I realized that the `pytorch_model.bin` of the local model and the uploaded then downloaded model are not the same, which I checked with `diff`. I tried everything I could think of, deleting my `transformers` cache folder, deleting the model from https://huggingface.co/models and re-uploading. I also tried uploading it from both macOS/Linux. The error persists. Does anyone have any clue how this could happen? It's such a frustrating error b/c it essentially passes silently until you look at your models outputs. ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expect that the outputs of my model to be identical when I load it from a local directory, and when I upload it and then download it from https://huggingface.co/models. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.2 - Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No.
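One way to make the "files differ" check from this report reproducible across machines is to compare content hashes instead of raw `diff` output. This is a generic stdlib sketch; the file paths are placeholders, not paths from the issue:

```python
import hashlib

def file_sha256(path):
    # Hash in 1 MiB chunks so multi-hundred-MB checkpoints
    # do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder paths: the local checkpoint vs. the re-downloaded one.
# print(file_sha256("declutr-small/pytorch_model.bin"))
# print(file_sha256("downloaded/pytorch_model.bin"))
```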
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5667/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5667/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5666
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5666/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5666/comments
https://api.github.com/repos/huggingface/transformers/issues/5666/events
https://github.com/huggingface/transformers/issues/5666
654,920,260
MDU6SXNzdWU2NTQ5MjAyNjA=
5,666
How do you connect Convolutional layers to Transformers?
{ "login": "leersaam", "id": 5494037, "node_id": "MDQ6VXNlcjU0OTQwMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/5494037?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leersaam", "html_url": "https://github.com/leersaam", "followers_url": "https://api.github.com/users/leersaam/followers", "following_url": "https://api.github.com/users/leersaam/following{/other_user}", "gists_url": "https://api.github.com/users/leersaam/gists{/gist_id}", "starred_url": "https://api.github.com/users/leersaam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leersaam/subscriptions", "organizations_url": "https://api.github.com/users/leersaam/orgs", "repos_url": "https://api.github.com/users/leersaam/repos", "events_url": "https://api.github.com/users/leersaam/events{/privacy}", "received_events_url": "https://api.github.com/users/leersaam/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# ❓ Questions & Help Hi everyone! Where can I find code example that makes clear how to connect Convolutional layers to Transformers and how it needs to be shaped in order to make such a connection? I have a bit of a hard time figuring it out. Thank you for your support.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5666/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5666/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5665
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5665/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5665/comments
https://api.github.com/repos/huggingface/transformers/issues/5665/events
https://github.com/huggingface/transformers/pull/5665
654,871,191
MDExOlB1bGxSZXF1ZXN0NDQ3NTEyNTEz
5,665
[AutoModels] Fix config params handling of all PT and TF AutoModels
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Isn't the canonical way:\r\n```\r\nconfig, kwargs = AutoConfig.from_pretrained(pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs)\r\n```\r\nin the test?", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=h1) Report\n> Merging [#5665](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce374ba87767d551f720242d5e64bfa976531079&el=desc) will **decrease** coverage by `1.11%`.\n> The diff coverage is `55.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5665/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5665 +/- ##\n==========================================\n- Coverage 78.43% 77.32% -1.12% \n==========================================\n Files 146 146 \n Lines 26002 26002 \n==========================================\n- Hits 20395 20105 -290 \n- Misses 5607 5897 +290 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `63.03% <50.00%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <60.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5665/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=footer). Last update [ce374ba...3bb7966](https://codecov.io/gh/huggingface/transformers/pull/5665?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "> Isn't the canonical way:\r\n> \r\n> ```\r\n> config, kwargs = AutoConfig.from_pretrained(pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs)\r\n> ```\r\n> \r\n> in the test?\r\n\r\nOh yeah, that's much cleaner. 
We should probably update all AutoModels in PT and TF with this then, no?", "I think that's correct, and the way it was always meant to be :raised_eyebrow: ", "> We should probably update all AutoModels in PT and TF with this then, no?\r\n\r\nYes, I agree." ]
1,594
1,594
1,594
MEMBER
null
As shown in #5474, currently, a command like: ```python from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained('bert-base-uncased', is_decoder=True) ``` fails because `is_decoder` is carried on as a model init argument even though it should *only* be used as a config init argument. This PR fixes one `AutoModelFor...` class for this, but the fix still has to be applied to the other `AutoModelFor...` classes. Pinging @LysandreJik @sgugger @thomwolf - are you guys ok with this change (bug fix) in general?
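A sketch of the canonical pattern raised in the review comments, which this PR applies across the AutoModels; `is_decoder` is the config-only kwarg from the example above, and the checkpoint name is just the one used there:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Let AutoConfig consume config-only kwargs; anything it does not recognize
# comes back in `unused_kwargs` and is safe to pass to the model init.
config, unused_kwargs = AutoConfig.from_pretrained(
    "bert-base-uncased", is_decoder=True, return_unused_kwargs=True
)
model = AutoModelForCausalLM.from_pretrained(
    "bert-base-uncased", config=config, **unused_kwargs
)
```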
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5665/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5665/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5665", "html_url": "https://github.com/huggingface/transformers/pull/5665", "diff_url": "https://github.com/huggingface/transformers/pull/5665.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5665.patch", "merged_at": 1594799474000 }
https://api.github.com/repos/huggingface/transformers/issues/5664
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5664/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5664/comments
https://api.github.com/repos/huggingface/transformers/issues/5664/events
https://github.com/huggingface/transformers/issues/5664
654,865,134
MDU6SXNzdWU2NTQ4NjUxMzQ=
5,664
[PyTorch] Load and run a model on CPU which was traced and saved on GPU
{ "login": "vdantu", "id": 36211508, "node_id": "MDQ6VXNlcjM2MjExNTA4", "avatar_url": "https://avatars.githubusercontent.com/u/36211508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vdantu", "html_url": "https://github.com/vdantu", "followers_url": "https://api.github.com/users/vdantu/followers", "following_url": "https://api.github.com/users/vdantu/following{/other_user}", "gists_url": "https://api.github.com/users/vdantu/gists{/gist_id}", "starred_url": "https://api.github.com/users/vdantu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vdantu/subscriptions", "organizations_url": "https://api.github.com/users/vdantu/orgs", "repos_url": "https://api.github.com/users/vdantu/repos", "events_url": "https://api.github.com/users/vdantu/events{/privacy}", "received_events_url": "https://api.github.com/users/vdantu/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[ { "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false } ]
[ "@vdantu Thanks for reporting the issue. \r\n\r\nThe problem arises in `modeling_openai.py`when the user do not provide the `position_ids` function argument thus leading to the inner `position_ids` being created during the forward call. This is fine in classic PyTorch because `forward` is actually evaluated at each call. When it comes to tracing, this is an issue, because the device specified in the forward to actually create the tensor will be hardcoded and you can actually see it in the generated graph: \r\n\r\n```python\r\n %input.1 : Tensor = aten::view(%input_ids.1, %64)\r\n %140 : Device = prim::Constant[value=\"cuda:0\"]()\r\n %position_ids.1 : Tensor = aten::arange(%59, %67, %45, %140, %70)\r\n %73 : Tensor = aten::unsqueeze(%position_ids.1, %45)\r\n```\r\n\r\nAbove you can see `%140` is a constant which value is actually set to `\"cuda:0\"` and then, it is reused to create the `%position_ids.1` tensor through `aten::arange(..., %140, ...)` which of course leads to the error you're seeing.\r\n\r\nI'll have a fix to generate the `position_ids` buffer correctly registered at the Module initialisation and not during forward, so it should be correctly handled by the `map_location` parameter while exporting.", "The above PR should fix the issue, below is the output of the code you provided. If you want to give it a try, let us know if it works on your end too 👍 \r\n\r\n```python\r\n(pytorch) mfuntowicz@brutasse:~/Workspace/transformers$ python test.py \r\nftfy or spacy is not installed using BERT BasicTokenizer instead of SpaCy & ftfy.\r\nSome weights of OpenAIGPTModel were not initialized from the model checkpoint at openai-gpt and are newly initialized: ['position_ids']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\ncpu cpu\r\n(tensor([[[ 7.3001e-02, -1.2431e+00, 7.9122e-01, ..., 1.6806e+00,\r\n -4.3945e-01, 1.1449e+00],\r\n [-3.6239e-01, -8.3647e-01, 1.2019e+00, ..., 1.5575e+00,\r\n -8.4237e-04, 1.0779e+00],\r\n [-1.0138e+00, -7.1014e-01, 6.3509e-01, ..., 1.6684e+00,\r\n -4.6458e-01, 1.5093e+00],\r\n [-6.1989e-01, -2.9500e-01, 9.9504e-01, ..., 2.0421e+00,\r\n 4.2680e-01, 2.1920e+00],\r\n [-5.2932e-01, -1.7606e-02, 7.4836e-01, ..., 2.2980e+00,\r\n 3.4807e-01, 2.7045e+00],\r\n [-1.4679e-01, -9.8566e-02, 1.3909e+00, ..., 1.9108e+00,\r\n 6.0797e-01, 2.1617e+00]]], grad_fn=<ViewBackward>),)\r\nTo CUDA:\r\n/home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:176: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n w = w / math.sqrt(v.size(-1))\r\n/home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:179: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n b = self.bias[:, :, : w.size(-2), : w.size(-1)]\r\ngraph(%self.1 : __torch__.transformers.modeling_openai.OpenAIGPTModel,\r\n %input_ids : Long(1:6, 6:1)):\r\n %4489 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4490 : __torch__.transformers.modeling_openai.___torch_mangle_139.Block = prim::GetAttr[name=\"11\"](%4489)\r\n %4463 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4464 : __torch__.transformers.modeling_openai.___torch_mangle_127.Block = prim::GetAttr[name=\"10\"](%4463)\r\n %4437 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4438 : __torch__.transformers.modeling_openai.___torch_mangle_115.Block = prim::GetAttr[name=\"9\"](%4437)\r\n %4411 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4412 : __torch__.transformers.modeling_openai.___torch_mangle_103.Block = prim::GetAttr[name=\"8\"](%4411)\r\n %4385 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4386 : __torch__.transformers.modeling_openai.___torch_mangle_91.Block = prim::GetAttr[name=\"7\"](%4385)\r\n %4359 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4360 : __torch__.transformers.modeling_openai.___torch_mangle_79.Block = prim::GetAttr[name=\"6\"](%4359)\r\n %4333 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4334 : __torch__.transformers.modeling_openai.___torch_mangle_67.Block = prim::GetAttr[name=\"5\"](%4333)\r\n %4307 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4308 : __torch__.transformers.modeling_openai.___torch_mangle_55.Block = prim::GetAttr[name=\"4\"](%4307)\r\n %4281 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4282 : __torch__.transformers.modeling_openai.___torch_mangle_43.Block = prim::GetAttr[name=\"3\"](%4281)\r\n %4255 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4256 : __torch__.transformers.modeling_openai.___torch_mangle_31.Block = prim::GetAttr[name=\"2\"](%4255)\r\n %4229 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4230 : __torch__.transformers.modeling_openai.___torch_mangle_19.Block = prim::GetAttr[name=\"1\"](%4229)\r\n %4203 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4204 : __torch__.transformers.modeling_openai.Block = prim::GetAttr[name=\"0\"](%4203)\r\n %4178 : __torch__.torch.nn.modules.dropout.Dropout = prim::GetAttr[name=\"drop\"](%self.1)\r\n %4177 : __torch__.torch.nn.modules.sparse.___torch_mangle_0.Embedding = prim::GetAttr[name=\"positions_embed\"](%self.1)\r\n %4175 : __torch__.torch.nn.modules.sparse.Embedding = prim::GetAttr[name=\"tokens_embed\"](%self.1)\r\n %4173 : Tensor = prim::GetAttr[name=\"position_ids\"](%self.1)\r\n %458 : int = prim::Constant[value=0]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %459 : int = aten::size(%input_ids, %458) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %460 : Long() = prim::NumToTensor(%459)\r\n %4020 : int = aten::Int(%460)\r\n %461 : int = prim::Constant[value=1]() # 
/home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %462 : int = aten::size(%input_ids, %461) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %463 : Long() = prim::NumToTensor(%462)\r\n %4021 : int = aten::Int(%463)\r\n %470 : int = aten::Int(%463)\r\n %464 : int = aten::Int(%463)\r\n %465 : int = prim::Constant[value=-1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0\r\n %466 : int[] = prim::ListConstruct(%465, %464)\r\n %input.1 : Long(1:6, 6:1) = aten::view(%input_ids, %466) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0\r\n %468 : int = prim::Constant[value=0]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0\r\n %469 : Long(1:512, 512:1) = aten::unsqueeze(%4173, %468) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0\r\n %471 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0\r\n %input.2 : Long(1:512) = aten::select(%469, %471, %470) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0\r\n %4638 : Tensor = prim::CallMethod[name=\"forward\"](%4175, %input.1)\r\n %4639 : Tensor = prim::CallMethod[name=\"forward\"](%4177, %input.2)\r\n %481 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %482 : Float(1:4608, 6:768, 768:1) = aten::add(%4638, %4639, %481) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %483 : Long() = prim::Constant[value={0}]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %484 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %input.3 : Float(1:4608, 6:768, 768:1) = aten::add(%482, %483, %484) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %4640 : Tensor = prim::CallMethod[name=\"forward\"](%4178, %input.3)\r\n %489 : int = prim::Constant[value=-1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:482:0\r\n %490 : int = aten::size(%4640, %489) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:482:0\r\n %491 : Long() = prim::NumToTensor(%490)\r\n %4022 : int = aten::Int(%491)\r\n %4641 : Tensor = prim::CallMethod[name=\"forward\"](%4204, %4640)\r\n %4642 : Tensor = prim::CallMethod[name=\"forward\"](%4230, %4641)\r\n %4643 : Tensor = prim::CallMethod[name=\"forward\"](%4256, %4642)\r\n %4644 : Tensor = prim::CallMethod[name=\"forward\"](%4282, %4643)\r\n %4645 : Tensor = prim::CallMethod[name=\"forward\"](%4308, %4644)\r\n %4646 : Tensor = prim::CallMethod[name=\"forward\"](%4334, %4645)\r\n %4647 : Tensor = prim::CallMethod[name=\"forward\"](%4360, %4646)\r\n %4648 : Tensor = prim::CallMethod[name=\"forward\"](%4386, %4647)\r\n %4649 : Tensor = prim::CallMethod[name=\"forward\"](%4412, %4648)\r\n %4650 : Tensor = prim::CallMethod[name=\"forward\"](%4438, %4649)\r\n %4651 : Tensor = prim::CallMethod[name=\"forward\"](%4464, %4650)\r\n %4652 : Tensor = prim::CallMethod[name=\"forward\"](%4490, %4651)\r\n %4023 : int[] = prim::ListConstruct(%4020, %4021, %4022)\r\n %4024 : Float(1:4608, 6:768, 768:1) = aten::view(%4652, %4023) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:495:0\r\n %4025 : (Float(1:4608, 
6:768, 768:1)) = prim::TupleConstruct(%4024)\r\n return (%4025)\r\n\r\n\r\n\r\nLoad model onto CPU\r\n\r\n\r\ngraph(%self.1 : __torch__.transformers.modeling_openai.OpenAIGPTModel,\r\n %input_ids.1 : Tensor):\r\n %78 : Tensor = prim::Constant[value={0}]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %47 : int = prim::Constant[value=0]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %53 : int = prim::Constant[value=1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %61 : int = prim::Constant[value=-1]() # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0\r\n %3 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %4 : __torch__.transformers.modeling_openai.___torch_mangle_139.Block = prim::GetAttr[name=\"11\"](%3)\r\n %6 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %7 : __torch__.transformers.modeling_openai.___torch_mangle_127.Block = prim::GetAttr[name=\"10\"](%6)\r\n %9 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %10 : __torch__.transformers.modeling_openai.___torch_mangle_115.Block = prim::GetAttr[name=\"9\"](%9)\r\n %12 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %13 : __torch__.transformers.modeling_openai.___torch_mangle_103.Block = prim::GetAttr[name=\"8\"](%12)\r\n %15 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %16 : __torch__.transformers.modeling_openai.___torch_mangle_91.Block = prim::GetAttr[name=\"7\"](%15)\r\n %18 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %19 : __torch__.transformers.modeling_openai.___torch_mangle_79.Block = prim::GetAttr[name=\"6\"](%18)\r\n %21 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %22 : __torch__.transformers.modeling_openai.___torch_mangle_67.Block = prim::GetAttr[name=\"5\"](%21)\r\n %24 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %25 : __torch__.transformers.modeling_openai.___torch_mangle_55.Block = prim::GetAttr[name=\"4\"](%24)\r\n %27 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %28 : __torch__.transformers.modeling_openai.___torch_mangle_43.Block = prim::GetAttr[name=\"3\"](%27)\r\n %30 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %31 : __torch__.transformers.modeling_openai.___torch_mangle_31.Block = prim::GetAttr[name=\"2\"](%30)\r\n %33 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %34 : __torch__.transformers.modeling_openai.___torch_mangle_19.Block = prim::GetAttr[name=\"1\"](%33)\r\n %36 : __torch__.torch.nn.modules.container.ModuleList = prim::GetAttr[name=\"h\"](%self.1)\r\n %37 : __torch__.transformers.modeling_openai.Block = prim::GetAttr[name=\"0\"](%36)\r\n %39 : __torch__.torch.nn.modules.dropout.Dropout = prim::GetAttr[name=\"drop\"](%self.1)\r\n %41 : __torch__.torch.nn.modules.sparse.___torch_mangle_0.Embedding = prim::GetAttr[name=\"positions_embed\"](%self.1)\r\n %43 : __torch__.torch.nn.modules.sparse.Embedding = prim::GetAttr[name=\"tokens_embed\"](%self.1)\r\n %45 : Tensor = prim::GetAttr[name=\"position_ids\"](%self.1)\r\n %48 : int = aten::size(%input_ids.1, %47) # 
/home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %49 : Tensor = prim::NumToTensor(%48) # :0:0\r\n %51 : int = aten::Int(%49)\r\n %54 : int = aten::size(%input_ids.1, %53) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:438:0\r\n %55 : Tensor = prim::NumToTensor(%54) # :0:0\r\n %57 : int = aten::Int(%55)\r\n %59 : int = aten::Int(%55)\r\n %63 : int = aten::Int(%55)\r\n %64 : int[] = prim::ListConstruct(%61, %63)\r\n %input.1 : Tensor = aten::view(%input_ids.1, %64) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:439:0\r\n %67 : Tensor = aten::unsqueeze(%45, %47) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0\r\n %input0.1 : Tensor = aten::select(%67, %53, %59) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:447:0\r\n %72 : Tensor = prim::CallMethod[name=\"forward\"](%43, %input.1) # :0:0\r\n %75 : Tensor = prim::CallMethod[name=\"forward\"](%41, %input0.1) # :0:0\r\n %76 : Tensor = aten::add(%72, %75, %53) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %input1.1 : Tensor = aten::add(%76, %78, %53) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:479:0\r\n %82 : Tensor = prim::CallMethod[name=\"forward\"](%39, %input1.1) # :0:0\r\n %84 : int = aten::size(%82, %61) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:482:0\r\n %85 : Tensor = prim::NumToTensor(%84) # :0:0\r\n %87 : int = aten::Int(%85)\r\n %91 : Tensor = prim::CallMethod[name=\"forward\"](%37, %82) # :0:0\r\n %92 : Tensor = prim::CallMethod[name=\"forward\"](%34, %91) # :0:0\r\n %97 : Tensor = prim::CallMethod[name=\"forward\"](%31, %92) # :0:0\r\n %98 : Tensor = prim::CallMethod[name=\"forward\"](%28, %97) # :0:0\r\n %99 : Tensor = prim::CallMethod[name=\"forward\"](%25, %98) # :0:0\r\n %104 : Tensor = prim::CallMethod[name=\"forward\"](%22, %99) # :0:0\r\n %105 : Tensor = prim::CallMethod[name=\"forward\"](%19, %104) # :0:0\r\n %106 : Tensor = prim::CallMethod[name=\"forward\"](%16, %105) # :0:0\r\n %111 : Tensor = prim::CallMethod[name=\"forward\"](%13, %106) # :0:0\r\n %112 : Tensor = prim::CallMethod[name=\"forward\"](%10, %111) # :0:0\r\n %113 : Tensor = prim::CallMethod[name=\"forward\"](%7, %112) # :0:0\r\n %116 : Tensor = prim::CallMethod[name=\"forward\"](%4, %113) # :0:0\r\n %120 : int[] = prim::ListConstruct(%51, %57, %87)\r\n %121 : Tensor = aten::view(%116, %120) # /home/mfuntowicz/Workspace/transformers/src/transformers/modeling_openai.py:495:0\r\n %123 : (Tensor) = prim::TupleConstruct(%121)\r\n return (%123)\r\n\r\n(tensor([[[ 7.3003e-02, -1.2431e+00, 7.9122e-01, ..., 1.6806e+00,\r\n -4.3945e-01, 1.1449e+00],\r\n [-3.6239e-01, -8.3647e-01, 1.2019e+00, ..., 1.5575e+00,\r\n -8.4937e-04, 1.0779e+00],\r\n [-1.0138e+00, -7.1013e-01, 6.3510e-01, ..., 1.6684e+00,\r\n -4.6459e-01, 1.5093e+00],\r\n [-6.1989e-01, -2.9499e-01, 9.9504e-01, ..., 2.0421e+00,\r\n 4.2680e-01, 2.1920e+00],\r\n [-5.2932e-01, -1.7599e-02, 7.4836e-01, ..., 2.2980e+00,\r\n 3.4806e-01, 2.7045e+00],\r\n [-1.4679e-01, -9.8562e-02, 1.3909e+00, ..., 1.9108e+00,\r\n 6.0796e-01, 2.1617e+00]]], grad_fn=<ViewBackward>),)\r\n```", "That's very interesting @mfuntowicz ! I think we will probably have multiple of such failures - I would guess for all models that use `position_ids`. Also as a rule, should one never create a tensor on the fly, but always register a buffer for that? 
@mfuntowicz @sshleifer ", "Awesome, thanks for fixing this. I will test this fix. What release of transformers will this change be reflected in? I was testing with transformers 3.0.2. \r\n\r\n@patrickvonplaten : I think you are right. I remember seeing this with bert base uncased as well. It would definitely be useful to have this fix across all models. ", "Looking for easier solutions than changing all the code:\r\n\r\n1) Can we just trace the thing correctly by passing `traced_model = torch.jit.trace(model, (inputs,position_ids))`? and then document the correct way to trace (maybe we can add `jit_inputs` or reuse `dummy_inputs`?)\r\n\r\n2) how much faster is the model afterwards?\r\n\r\n3) We should add a save/load test to test_modeling_common.py if we want to support this. The current test_torch_script just traces and then runs forward, and can clearly pass with many unregistered buffers.", "@sshleifer : Thanks for the response. In the example script I pasted above, do you see any errors in the way I am tracing and using the traced model? Please let me know if that needs to be changed.\r\n\r\n", "@vdantu \r\nI don't know a ton about jit, but you could try:\r\n```python\r\ntraced_model = torch.jit.trace(model, (inputs,position_ids))\r\n```\r\nand see if that fixes the error.", "@mfuntowicz : Which version of transformers (pypi package) will these changes be available with? I am used to testing the models through the pypi package or through torch.hub. What is the recommended way to get and test these fixes?", "We'll release a new version in the coming weeks, in the meantime you can install from source: `pip install git+https://github.com/huggingface/transformers`", "PyTorch doesn't currently support tracing passed devices correctly: https://github.com/pytorch/pytorch/issues/31141#issuecomment-675506630\r\n\r\nI stumbled on two problematic lines while tracing GPT-2:\r\n1. https://github.com/huggingface/transformers/blob/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af/src/transformers/modeling_gpt2.py#L554\r\n2. https://github.com/huggingface/transformers/blob/1d6e71e1167dea9e026391ec5a1a2d7ec33d22af/src/transformers/modeling_gpt2.py#L582\r\n\r\nBecause of this, tracing the model on one device and then using it on another device doesn't work", "I'm also seeing this issue when trying to trace DistilBert - https://github.com/huggingface/transformers/blob/eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d/src/transformers/modeling_distilbert.py#L105\r\n\r\nLooks to be the same issue.", "Observing the same issue while trying to load traced DistilBERT on cpu.", "@eugeneware @kavinsabharwal I managed to get DistilBert to work by making changes similar to the PR listed in this issue. However, there is a torch script warning saying seq_length is set as a constant after doing the trace.\r\n\r\nAdd this in the constructor of the embedding:\r\n```\r\n    # position_ids (1, len position emb) is contiguous in memory and exported when serialized\r\n    self.register_buffer(\"position_ids\", torch.arange(config.max_position_embeddings).expand((1, -1)))\r\n```\r\n\r\nand change `position_ids` as follows:\r\n\r\n```\r\nposition_ids = self.position_ids[:, :seq_length]\r\n```\r\n\r\nFor the sake of accuracy, I finally decided to add `position_ids` as one of the inputs passed to the model. And everything seems to be working now. 
Just a workaround to this problem.\r\n\r\nThis change is verified working on:\r\n```\r\nmodel = DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', return_dict=False, torchscript=True)\r\n```\r\n\r\n@vdantu I ran some training tests on the model and it seemed to perform fine. Making position_ids an input should be a safe bet to get away from the warning.", "This issue has been stale for 1 month." ]
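Editor's note: a minimal, self-contained sketch of the buffer-registration fix the comments above converge on. The toy module, sizes, and file name below are illustrative assumptions, not code from `transformers`.

```python
import torch
from torch import nn

# Registering position_ids as a buffer means torch.jit.trace records it as
# module state instead of baking a device constant ("cuda:0") into an
# aten::arange node, so map_location behaves as expected when the trace is
# loaded on a different device.
class ToyEmbeddings(nn.Module):
    def __init__(self, vocab_size=100, max_positions=512, dim=16):
        super().__init__()
        self.tokens_embed = nn.Embedding(vocab_size, dim)
        self.positions_embed = nn.Embedding(max_positions, dim)
        # Buffers move with .to(device) and are serialized with the module.
        self.register_buffer(
            "position_ids", torch.arange(max_positions).expand((1, -1))
        )

    def forward(self, input_ids):
        seq_length = input_ids.size(1)
        position_ids = self.position_ids[:, :seq_length]
        return self.tokens_embed(input_ids) + self.positions_embed(position_ids)

model = ToyEmbeddings()
traced = torch.jit.trace(model, torch.randint(0, 100, (1, 6)))
torch.jit.save(traced, "toy_traced.pt")
loaded = torch.jit.load("toy_traced.pt", map_location="cpu")
print(loaded(torch.randint(0, 100, (1, 6))).shape)
```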
1,594
1,618
1,618
NONE
null
# ❓ Questions & Help ## Details I am trying to trace/save openai_gpt on GPU and use that model on a CPU and facing issues. Is this possible to do? I have attached the link to the question posted on the discuss forum as well. ### Sample Script ```python from transformers import OpenAIGPTTokenizer, OpenAIGPTModel import torch tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt') model = OpenAIGPTModel.from_pretrained('openai-gpt') inputs = torch.tensor([tokenizer.encode("Hello, my dog is cute")]) outputs = model(inputs) print(outputs) print("To CUDA:") inputs = inputs.to("cuda") model = model.to("cuda") traced_model = torch.jit.trace(model, (inputs,)) torch.jit.save(traced_model, "openai_gpt_cuda.pt") print(traced_model.graph) print("\n") print("Load model onto CPU") loaded = torch.jit.load("openai_gpt_cuda.pt", map_location=torch.device("cpu")) inputs = inputs.to("cpu") print("\n") print(loaded.graph) outputs = loaded(inputs) print(outputs) ``` ### Error seen ``` Traceback (most recent call last): File "gpt.py", line 23, in <module> outputs = loaded(inputs) File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__ result = self.forward(*input, **kwargs) RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select The above operation failed in interpreter. Traceback (most recent call last): Serialized File "code/__torch__/torch/nn/modules/module/___torch_mangle_147.py", line 35 position_ids = torch.arange(_20, dtype=4, layout=0, device=torch.device("cuda:0"), pin_memory=False) input0 = torch.view(torch.unsqueeze(position_ids, 0), [-1, _19]) _21 = torch.add((_14).forward(input, ), (_13).forward(input0, ), alpha=1) ~~~~~~~~~~~~ <--- HERE input1 = torch.add(_21, CONSTANTS.c0, alpha=1) _22 = (_12).forward(input1, ) /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/functional.py(1484): embedding /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/sparse.py(114): forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__ /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_openai.py(433): forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(516): _slow_forward /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py(530): __call__ /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(1034): trace_module /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/jit/__init__.py(882): trace gpt.py(14): <module> Serialized File "code/__torch__/torch/nn/modules/module/___torch_mangle_0.py", line 8, in forward def forward(self: __torch__.torch.nn.modules.module.___torch_mangle_0.Module, input: Tensor) -> Tensor: position_embeds = torch.embedding(self.weight, input, -1, False, False) ~~~~~~~~~~~~~~~ <--- HERE return position_embeds The above operation failed in interpreter. Traceback (most recent call last): ``` https://discuss.huggingface.co/t/pytorch-trace-on-cpu-and-use-on-gpu/181/3
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5664/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5664/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5663
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5663/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5663/comments
https://api.github.com/repos/huggingface/transformers/issues/5663/events
https://github.com/huggingface/transformers/issues/5663
654,864,544
MDU6SXNzdWU2NTQ4NjQ1NDQ=
5,663
Request for Support to Adapt a Model (Human Dignity Observatory: Non-Profit Project)
{ "login": "KnowmadInstitut", "id": 67144701, "node_id": "MDQ6VXNlcjY3MTQ0NzAx", "avatar_url": "https://avatars.githubusercontent.com/u/67144701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KnowmadInstitut", "html_url": "https://github.com/KnowmadInstitut", "followers_url": "https://api.github.com/users/KnowmadInstitut/followers", "following_url": "https://api.github.com/users/KnowmadInstitut/following{/other_user}", "gists_url": "https://api.github.com/users/KnowmadInstitut/gists{/gist_id}", "starred_url": "https://api.github.com/users/KnowmadInstitut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KnowmadInstitut/subscriptions", "organizations_url": "https://api.github.com/users/KnowmadInstitut/orgs", "repos_url": "https://api.github.com/users/KnowmadInstitut/repos", "events_url": "https://api.github.com/users/KnowmadInstitut/events{/privacy}", "received_events_url": "https://api.github.com/users/KnowmadInstitut/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @KnowmadInstitut you should post this on the forum as well at https://discuss.huggingface.co/", "> Hi @KnowmadInstitut you should post this on the forum as well at https://discuss.huggingface.co/\r\n\r\nThank you so much for the guidance, I'll get right on it. :)\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
Dear @huggingface community, from the Knowmad Institut we need your contribution and support for the Observatory of Human Dignity that we have developed. You can access the observatory here: (https://bit.ly/MQHHRRES) We need your support to adapt the Hugging Face tools. We need to extract the sentiment of the tweets' content and also (based on the data we have already filtered one by one) identify the tweets that are really human rights violations, classifying them from 1 to 5, with 1 looking the least like a human rights violation and 5 matching the data we already have most closely. We thank the community and the @huggingface team for their support in disseminating this request. |<center>[![FB18.png](https://knowmadinstitut.org/wp-content/uploads/2020/04/Black-and-Red-Geometric-Technology-Keynote-Presentation-1.png)](https://bit.ly/MQHHRRES) </center>| <div class="center"> <blockquote class="twitter-tweet"><p lang="en" dir="ltr">Dear <a href="https://twitter.com/huggingface?ref_src=twsrc%5Etfw">@huggingface</a> community from the Knowmad Institut we need your contribution and support for the Observatory of Human Dignity that we have developed.<br><br>You can access to the observatory here: <a href="https://t.co/3TgKwjs3PJ">https://t.co/3TgKwjs3PJ</a><br><br>1/3 <a href="https://t.co/ly4Z139qHb">pic.twitter.com/ly4Z139qHb</a></p>&mdash; Knowmad Institut (@KnowmadInstitut) <a href="https://twitter.com/KnowmadInstitut/status/1280904300831612935?ref_src=twsrc%5Etfw">July 8, 2020</a></blockquote><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> </div>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5663/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5663/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5662
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5662/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5662/comments
https://api.github.com/repos/huggingface/transformers/issues/5662/events
https://github.com/huggingface/transformers/pull/5662
654,854,465
MDExOlB1bGxSZXF1ZXN0NDQ3NDk5MDUx
5,662
[WIP - don't merge][TF generate] Make tf generate compatible with tf.function
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "is this still being worked on?", "I won't be able to take a look in the next ~2 weeks. Feel free to continue the PR :-) ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@patrickvonplaten can I ask whether this change is still working on? Will we be able to get one example for greedy search with tf function compatible? Thanks.", "Thank you for all the amazing work! This library is too good to be true and this would be a really good feature to have if possible and when possible! " ]
1,594
1,622
1,605
MEMBER
null
The TF generate function should be cleaned up so that it can be used with `tf.function`.
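Editor's note: to make the goal concrete, here is a toy greedy decoding loop written so that `tf.function` can trace it. The dummy logits function stands in for a real model call, and the vocabulary size and generation budget are made-up assumptions; this is not the library's actual `generate` implementation.

```python
import tensorflow as tf

VOCAB_SIZE = 8       # hypothetical vocabulary size
MAX_NEW_TOKENS = 4   # hypothetical generation budget

def dummy_logits(ids):
    # Stand-in for a model forward pass: "predict" (last_token + 1) % vocab.
    return tf.one_hot((ids[:, -1] + 1) % VOCAB_SIZE, VOCAB_SIZE)

@tf.function
def greedy_generate(input_ids):
    def cond(step, ids):
        return step < MAX_NEW_TOKENS

    def body(step, ids):
        next_token = tf.argmax(dummy_logits(ids), axis=-1, output_type=tf.int32)
        return [step + 1, tf.concat([ids, next_token[:, None]], axis=1)]

    # shape_invariants lets the sequence dimension grow inside the graph,
    # which is the key thing an eager-style Python loop cannot express.
    _, ids = tf.while_loop(
        cond,
        body,
        loop_vars=[tf.constant(0), input_ids],
        shape_invariants=[tf.TensorShape([]), tf.TensorShape([None, None])],
    )
    return ids

print(greedy_generate(tf.constant([[0]], dtype=tf.int32)))
```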
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5662/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5662/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5662", "html_url": "https://github.com/huggingface/transformers/pull/5662", "diff_url": "https://github.com/huggingface/transformers/pull/5662.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5662.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5661
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5661/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5661/comments
https://api.github.com/repos/huggingface/transformers/issues/5661/events
https://github.com/huggingface/transformers/pull/5661
654,814,986
MDExOlB1bGxSZXF1ZXN0NDQ3NDY2Njc3
5,661
Create Model card for RoBERTa-hindi-guj-san
{ "login": "parmarsuraj99", "id": 9317265, "node_id": "MDQ6VXNlcjkzMTcyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parmarsuraj99", "html_url": "https://github.com/parmarsuraj99", "followers_url": "https://api.github.com/users/parmarsuraj99/followers", "following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}", "gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}", "starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions", "organizations_url": "https://api.github.com/users/parmarsuraj99/orgs", "repos_url": "https://api.github.com/users/parmarsuraj99/repos", "events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}", "received_events_url": "https://api.github.com/users/parmarsuraj99/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,594
1,594
1,594
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5661/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5661/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5661", "html_url": "https://github.com/huggingface/transformers/pull/5661", "diff_url": "https://github.com/huggingface/transformers/pull/5661.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5661.patch", "merged_at": 1594395264000 }
https://api.github.com/repos/huggingface/transformers/issues/5660
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5660/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5660/comments
https://api.github.com/repos/huggingface/transformers/issues/5660/events
https://github.com/huggingface/transformers/issues/5660
654,796,844
MDU6SXNzdWU2NTQ3OTY4NDQ=
5,660
"How to train a new language model from scratch" colab stuck at training
{ "login": "iggygeek", "id": 24476737, "node_id": "MDQ6VXNlcjI0NDc2NzM3", "avatar_url": "https://avatars.githubusercontent.com/u/24476737?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iggygeek", "html_url": "https://github.com/iggygeek", "followers_url": "https://api.github.com/users/iggygeek/followers", "following_url": "https://api.github.com/users/iggygeek/following{/other_user}", "gists_url": "https://api.github.com/users/iggygeek/gists{/gist_id}", "starred_url": "https://api.github.com/users/iggygeek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iggygeek/subscriptions", "organizations_url": "https://api.github.com/users/iggygeek/orgs", "repos_url": "https://api.github.com/users/iggygeek/repos", "events_url": "https://api.github.com/users/iggygeek/events{/privacy}", "received_events_url": "https://api.github.com/users/iggygeek/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi @iggygeek not sure what the exact problem is, can you provide exact details, env info, transformers and torch version and probably your code (script or colab).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
Hello, I am following the tutorial: https://huggingface.co/blog/how-to-train At the command trainer.train() it gets stuck (nothing is displayed except "Using deprecated `--per_gpu_train_batch_size` argument"). Any idea?
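Editor's note: not a fix, but a quick way to tell a hung run from a silently working one. These are generic debugging lines, not part of the tutorial.

```python
import logging

import torch
import transformers

# Surface the Trainer's INFO-level progress messages in the notebook.
logging.basicConfig(level=logging.INFO)

# Version and device info, worth including in any report like this one.
print(transformers.__version__, torch.__version__, torch.cuda.is_available())
```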
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5660/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5659
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5659/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5659/comments
https://api.github.com/repos/huggingface/transformers/issues/5659/events
https://github.com/huggingface/transformers/pull/5659
654,785,305
MDExOlB1bGxSZXF1ZXN0NDQ3NDQyMzEx
5,659
[Longformer] fix longformer global attention output
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=h1) Report\n> Merging [#5659](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.13%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5659/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5659 +/- ##\n==========================================\n- Coverage 77.01% 76.87% -0.14% \n==========================================\n Files 128 145 +17 \n Lines 21615 25369 +3754 \n==========================================\n+ Hits 16646 19502 +2856 \n- Misses 4969 5867 +898 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (+0.11%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrLnB5) | `74.01% <ø> (+5.16%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3MucHk=) | `86.04% <ø> (+0.68%)` | :arrow_up: |\n| [src/transformers/benchmark/benchmark\\_args\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdGYucHk=) | `87.50% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_args\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX2FyZ3NfdXRpbHMucHk=) | `89.13% <ø> (-7.75%)` | :arrow_down: |\n| [src/transformers/benchmark/benchmark\\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3RmLnB5) | `61.53% <ø> (ø)` | |\n| [src/transformers/benchmark/benchmark\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9iZW5jaG1hcmsvYmVuY2htYXJrX3V0aWxzLnB5) | `69.84% <ø> (-3.26%)` | :arrow_down: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (+0.32%)` | :arrow_up: |\n| [src/transformers/configuration\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (+0.41%)` | :arrow_up: |\n| ... 
and [118 more](https://codecov.io/gh/huggingface/transformers/pull/5659/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=footer). Last update [bfacb2e...ee88c2f](https://codecov.io/gh/huggingface/transformers/pull/5659?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Global and local attention probs now always have the same output shape. This is both more consistent in terms of the output signature for the user and solves the multi-gpu issue.", "Pinging @thomwolf @sshleifer @LysandreJik @sgugger for notification -> more details can be found in issue: https://github.com/huggingface/transformers/issues/5646. " ]
1,594
1,594
1,594
MEMBER
null
This PR fixes the attention probs that are output when Longformer uses global attention and sets `output_attentions=True`. Thanks a million to @k141303 for a very clean issue + a perfect proposed solution in https://github.com/huggingface/transformers/issues/5646 .
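Editor's note: a short sketch of how one might inspect the attention tensors this PR touches. The checkpoint name is a real one, but the output tuple layout and the availability of the `global_attention_mask` kwarg depend on the installed version, so treat those details as assumptions.

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
# Give the first token global attention (assumed kwarg in this version).
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    output_attentions=True,
)
attentions = outputs[-1]  # assumed: tuple with one tensor per layer
for layer_attention in attentions:
    # After this fix, local and global attention probs share one shape.
    print(layer_attention.shape)
```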
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5659/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5659", "html_url": "https://github.com/huggingface/transformers/pull/5659", "diff_url": "https://github.com/huggingface/transformers/pull/5659.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5659.patch", "merged_at": 1594653803000 }
https://api.github.com/repos/huggingface/transformers/issues/5658
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5658/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5658/comments
https://api.github.com/repos/huggingface/transformers/issues/5658/events
https://github.com/huggingface/transformers/pull/5658
654,725,797
MDExOlB1bGxSZXF1ZXN0NDQ3Mzk1Njg0
5,658
Create README.md - Model card
{ "login": "nreimers", "id": 10706961, "node_id": "MDQ6VXNlcjEwNzA2OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/10706961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nreimers", "html_url": "https://github.com/nreimers", "followers_url": "https://api.github.com/users/nreimers/followers", "following_url": "https://api.github.com/users/nreimers/following{/other_user}", "gists_url": "https://api.github.com/users/nreimers/gists{/gist_id}", "starred_url": "https://api.github.com/users/nreimers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nreimers/subscriptions", "organizations_url": "https://api.github.com/users/nreimers/orgs", "repos_url": "https://api.github.com/users/nreimers/repos", "events_url": "https://api.github.com/users/nreimers/events{/privacy}", "received_events_url": "https://api.github.com/users/nreimers/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=h1) Report\n> Merging [#5658](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e6bb0e9c37655a03adaa3238dd6d4645fba8dc1&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5658/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5658 +/- ##\n=======================================\n Coverage 78.26% 78.26% \n=======================================\n Files 145 145 \n Lines 25366 25366 \n=======================================\n Hits 19852 19852 \n Misses 5514 5514 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=footer). Last update [2e6bb0e...8bdceb7](https://codecov.io/gh/huggingface/transformers/pull/5658?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Model card for sentence-transformers/bert-base-nli-max-tokens
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5658/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5658", "html_url": "https://github.com/huggingface/transformers/pull/5658", "diff_url": "https://github.com/huggingface/transformers/pull/5658.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5658.patch", "merged_at": 1594395476000 }
https://api.github.com/repos/huggingface/transformers/issues/5657
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5657/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5657/comments
https://api.github.com/repos/huggingface/transformers/issues/5657/events
https://github.com/huggingface/transformers/pull/5657
654,719,030
MDExOlB1bGxSZXF1ZXN0NDQ3MzkwMzQx
5,657
Create README.md - Model card
{ "login": "nreimers", "id": 10706961, "node_id": "MDQ6VXNlcjEwNzA2OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/10706961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nreimers", "html_url": "https://github.com/nreimers", "followers_url": "https://api.github.com/users/nreimers/followers", "following_url": "https://api.github.com/users/nreimers/following{/other_user}", "gists_url": "https://api.github.com/users/nreimers/gists{/gist_id}", "starred_url": "https://api.github.com/users/nreimers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nreimers/subscriptions", "organizations_url": "https://api.github.com/users/nreimers/orgs", "repos_url": "https://api.github.com/users/nreimers/repos", "events_url": "https://api.github.com/users/nreimers/events{/privacy}", "received_events_url": "https://api.github.com/users/nreimers/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=h1) Report\n> Merging [#5657](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e6bb0e9c37655a03adaa3238dd6d4645fba8dc1&el=desc) will **decrease** coverage by `1.37%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5657/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5657 +/- ##\n==========================================\n- Coverage 78.26% 76.88% -1.38% \n==========================================\n Files 145 145 \n Lines 25366 25366 \n==========================================\n- Hits 19852 19503 -349 \n- Misses 5514 5863 +349 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.75%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+31.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5657/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=footer). Last update [2e6bb0e...e839f19](https://codecov.io/gh/huggingface/transformers/pull/5657?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Model card for sentence-transformers/bert-base-nli-cls-token
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5657/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5657/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5657", "html_url": "https://github.com/huggingface/transformers/pull/5657", "diff_url": "https://github.com/huggingface/transformers/pull/5657.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5657.patch", "merged_at": 1594395484000 }
https://api.github.com/repos/huggingface/transformers/issues/5656
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5656/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5656/comments
https://api.github.com/repos/huggingface/transformers/issues/5656/events
https://github.com/huggingface/transformers/issues/5656
654,694,591
MDU6SXNzdWU2NTQ2OTQ1OTE=
5,656
Truncated Outputs by t5 fine-tuned models
{ "login": "manojpreveen", "id": 64023526, "node_id": "MDQ6VXNlcjY0MDIzNTI2", "avatar_url": "https://avatars.githubusercontent.com/u/64023526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manojpreveen", "html_url": "https://github.com/manojpreveen", "followers_url": "https://api.github.com/users/manojpreveen/followers", "following_url": "https://api.github.com/users/manojpreveen/following{/other_user}", "gists_url": "https://api.github.com/users/manojpreveen/gists{/gist_id}", "starred_url": "https://api.github.com/users/manojpreveen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manojpreveen/subscriptions", "organizations_url": "https://api.github.com/users/manojpreveen/orgs", "repos_url": "https://api.github.com/users/manojpreveen/repos", "events_url": "https://api.github.com/users/manojpreveen/events{/privacy}", "received_events_url": "https://api.github.com/users/manojpreveen/received_events", "type": "User", "site_admin": false }
[ { "id": 2197722692, "node_id": "MDU6TGFiZWwyMTk3NzIyNjky", "url": "https://api.github.com/repos/huggingface/transformers/labels/t5", "name": "t5", "color": "509fc4", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Hi, what are the arguments for `.generate` method ? you can control the generation length using `max_length` and `min_length` parameter. And if you want to see if there's something wrong with the fine-tuning code, then take the default t5-small model (it's already trained for summerization) and generate summaries using it and compare with your model. This should give you some idea.", "The arguments for the .generate method are (input_ids=input_ids, attention_mask=attention_mask, early_stopping= True, length_penalty = 2.0, max_length = 142, min_length = 56, no_repeat_ngram_size = 3, num_beams = 4), where input_ids and attention_mask are the corresponding tensors obtained through tokenizer.encode(). The config.json file of my saved t5-small model fine-tuned on cnn/dm is same as the default t5-small model(I cross-checked that).\r\n\r\nI set the parameters I'm passing while running run_eval.py the same for both fine-tuned t5-small and default t5-small but the former produces truncated outputs whereas the later produces complete outputs.\r\n\r\nYeah I can control the generation length by the min_length and max_length parameters but in default t5-small model whatever be the above two parameters it always produced complete sentences whose length are within that range but in case of the fine-tuned model its giving truncated outputs for all combinations of these two parameters.\r\n\r\nThat's why I strongly felt that there was some problem with the fine-tuning code.", "cc @sshleifer ", "Since the default --max_source_length is 1024 and some articles in CNN are bigger than that, thought that the truncation of the input sentences was messing up the fine-tuned model and tried fine-tuning t5-small over xsum.\r\n\r\nThe xsum articles are relatively smaller and none of them exceeds 1024 tokens.\r\nUsed --max_target_length=60 -- val_max_target_length=60 --test_max_target_length=100 in finetune.py as they are mentioned as reasonable setting for XSUM.\r\n\r\nRan the script finetune_t5.sh for xsum, i.e.,\r\npython finetune.py \\\r\n--data_dir=xsum \\\r\n--model_name_or_path=t5-small \\\r\n--learning_rate=3e-5 \\\r\n--train_batch_size=8 \\\r\n--eval_batch_size=4 \\\r\n--output_dir=xsum_results \\\r\n--max_source_length=1024 \\\r\n--val_check_interval=0.1 --n_val=200 \\\r\n--do_train --do_predict \\\r\n $@\r\n\r\nThe outputs produced by the best_tfmr model for the test.souce dataset of xsum is still truncated as given by the test_generations.txt\r\n\r\nEg.\r\n\r\nArticle : (1st article of test.source dataset of xsum)\r\n\r\nThe London trio are up for best UK act and best album, as well as getting two nominations in the best song category.\"We got told like this morning 'Oh I think you're nominated'\", said Dappy.\"And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!\"Bandmate Fazer added: \"We thought it's best of us to come down and mingle with everyone and say hello to the cameras. 
And now we find we've got four nominations.\"The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around.\"At the end of the day we're grateful to be where we are in our careers.\"If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans.\"Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border.\"We just done Edinburgh the other day,\" said Dappy.\"We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!\"\r\n\r\nOutput: (output produced by the best_tfmr for the 1st article of test.source dataset of xsum)\r\n\r\nN-Dubz have announced they have been nominated for the UK's best song prize. They have been told 'Oh yeah, which one?' - and now they've got four nominations. \"We're going to be the best newcomer\r\n\r\n\r\nSimilarly, all the other outputs are truncated at the end too, similar to the previous case (fine-tuned over CNN/DM).\r\nOnce the default model undergoes fine-tuning, it's unable to finish the summary with an EOS token; it gets cut off abruptly. ", "I think part of the problem may be that the t5 tokenizer is not adding the EOS token.\r\n@patrickvonplaten \r\n\r\n```python\r\nipdb> tok_bart = AutoTokenizer.from_pretrained('facebook/bart-large-cnn')\r\nipdb> tok_bart('sentence')\r\n{'input_ids': [0, 19530, 4086, 2], 'attention_mask': [1, 1, 1, 1]}\r\nipdb> tok_t5 = AutoTokenizer.from_pretrained('t5-small')\r\nipdb> tok_t5('sentence')\r\n{'input_ids': [7142], 'attention_mask': [1]}\r\n```\r\nSo maybe the model is training on targets without EOS, and eventually learns to stop generating it?", "Thanks a lot for checking this @sshleifer! Yeah, I agree - I think T5 should add the EOS token to the end.\r\nIs there a reason why T5 does not add the EOS token? @thomwolf @mfuntowicz @n1t0 ?", "Yes, in the case of T5 we manually need to add ` </s>` at the end of the text. I think this same issue is causing [this](https://discuss.huggingface.co/t/generate-very-short-summaries/277/5) ", "@patil-suraj /others Have you run clean experiments with and without adding `<s>`?\r\n\r\nI don't want to merge #5866 without more evidence that it is helpful, and [my first experiment](https://github.com/huggingface/transformers/pull/5866) did not result in any change. \r\n\r\nTo those of you on many of these related issues, sorry for spamming.\r\n", "Hi @sshleifer, in all of my T5 experiments I didn't use the bos token `<s>` at all, and all of those experiments gave expected results (even better in some cases). But `</s>` is very important; without it the model generates really weird text, and it's very easy to forget. 
So adding `</s>` automatically is really important. `<s>` won't matter.", "Ok, I'll merge the change. You won't need to add it anymore.", "@tromedlov22 Did you ever figure out what the issue was? I have the same problem; it doesn't seem to be an issue with the tokenizer adding EOS, since it's doing that. ", "#5866 solved my issue. If you are still facing the issue, post a sample output, maybe along with the input and the hyper-params you are using.", "Hey guys, I am facing the same issue with truncation. My input:\r\nWhen I first entered high school I was very nervous as it was a new school for me and it was a big adjustment</s>. I was overwhelmed with work and mentally wasn't staying optimistic as I found it hard to manage my time and make friends. I felt like I wasn't good enough, and this caused me to treat myself like I wasn't worthy of being at such a place</s>. In terms of behavior to others, I would say it made me more shy while still adapting to the new environment</s>.\r\n\r\nOutput:\r\n\r\nwhen I first entered high school I was very nervous as it was a new school for me. I felt like I wasn't good enough to manage my time and make friends. it made me more shy while still adapting to\r\n\r\n\r\nGenerate arguments:\r\ntokens_input,\r\n min_length=0,\r\n max_length=50,\r\n num_beams=4,\r\n early_stopping=True,\r\n no_repeat_ngram_size=3,\r\n num_return_sequences=2\r\n" ]
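The fix referenced in this thread (#5866) makes the T5 tokenizer append EOS automatically; on versions from before that change, a minimal sketch of the manual workaround looks like this (the helper name `encode_target` is mine, not from the thread):

```python
# Minimal sketch, assuming a transformers version from before #5866
# (where the T5 tokenizer did NOT append EOS to encoded text).
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")

def encode_target(text, max_length=142):
    # Append "</s>" so every training target ends with EOS and the
    # fine-tuned model learns to terminate summaries cleanly.
    return tokenizer.encode(text + " </s>", max_length=max_length, truncation=True)

ids = encode_target("a short summary.")
print(ids[-1] == tokenizer.eos_token_id)  # True, unless the target was truncated
```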
1,594
1,604
1,598
NONE
null
I fine-tuned t5-small over the CNN/DM dataset using the finetune_t5.sh script. The outputs produced by the saved fine-tuned model are okayish, but they're getting cut off, i.e., producing an incomplete sentence at the end. Example: Article: (CNN)The only thing crazier than a guy in snowbound Massachusetts boxing up the powdery white stuff and offering it for sale online? People are actually buying it. For $89, self-styled entrepreneur Kyle Waring will ship you 6 pounds of Boston-area snow in an insulated Styrofoam box -- enough for 10 to 15 snowballs, he says. But not if you live in New England or surrounding states. "We will not ship snow to any states in the northeast!" says Waring's website, ShipSnowYo.com. "We're in the business of expunging snow!" His website and social media accounts claim to have filled more than 133 orders for snow -- more than 30 on Tuesday alone, his busiest day yet. With more than 45 total inches, Boston has set a record this winter for the snowiest month in its history. Most residents see the huge piles of snow choking their yards and sidewalks as a nuisance, but Waring saw an opportunity. According to Boston.com, it all started a few weeks ago, when Waring and his wife were shoveling deep snow from their yard in Manchester-by-the-Sea, a coastal suburb north of Boston. He joked about shipping the stuff to friends and family in warmer states, and an idea was born. His business slogan: "Our nightmare is your dream!" At first, ShipSnowYo sold snow packed into empty 16.9-ounce water bottles for $19.99, but the snow usually melted before it reached its destination. So this week, Waring began shipping larger amounts in the Styrofoam cubes, which he promises will arrive anywhere in the U.S. in less than 20 hours. He also has begun selling a 10-pound box of snow for $119. Many of his customers appear to be companies in warm-weather states who are buying the snow as a gag, he said. Whether Waring can sustain his gimmicky venture into the spring remains to be seen. But he has no shortage of product. "At this rate, it's going to be July until the snow melts," he told Boston.com. "But I've thought about taking this idea and running with it for other seasonal items. Maybe I'll ship some fall foliage." Summary produced by t5-small fine-tuned over CNN/DM: Kyle Waring will ship you 6 pounds of snow in an insulated Styrofoam box for $89 . The self-styled entrepreneur says he will not ship snow to any states in the northeast . Waring's website and social media accounts claim to have filled more than 133 orders for snow . "We're in the business of expunging snow!" Waring says . He has begun selling a 10-pound box of snow for $119 . His business slogan: "Our nightmare is your At first I thought this might be because the model hadn't converged, as I just ran for 1 epoch, but it produces similar truncated outputs even for t5-small fine-tuned over cnn/dm for 5 epochs. Also, I don't think this problem is related to the min_length or max_length parameters, as it produced similar outputs for all combinations of those two parameters. I also tried changing the values of --max_source_length, --max_target_length, --val_max_target_length, and --test_max_target_length (these 4 parameters are present in finetune.py) from their defaults before fine-tuning, but to no avail. What might be the reason for this truncation? Is this a problem with the fine-tuning code used to fine-tune pretrained models, given that the pre-trained models don't produce this kind of output?
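A quick way to check the EOS hypothesis that the comments above converge on is to inspect what the tokenizer emits for a target; a small diagnostic sketch:

```python
# Diagnostic sketch: does the T5 tokenizer append EOS to encoded targets?
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
ids = tokenizer.encode("a test summary.")
# On affected versions this prints False: targets end without </s>, so the
# fine-tuned model never learns to emit EOS and generations get cut off
# at max_length instead of ending with a complete sentence.
print(ids[-1] == tokenizer.eos_token_id)
```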
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5656/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5655
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5655/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5655/comments
https://api.github.com/repos/huggingface/transformers/issues/5655/events
https://github.com/huggingface/transformers/pull/5655
654,686,830
MDExOlB1bGxSZXF1ZXN0NDQ3MzYzNzE0
5,655
Create model card
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=h1) Report\n> Merging [#5655](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2e6bb0e9c37655a03adaa3238dd6d4645fba8dc1&el=desc) will **decrease** coverage by `0.46%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5655/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5655 +/- ##\n==========================================\n- Coverage 78.26% 77.80% -0.47% \n==========================================\n Files 145 145 \n Lines 25366 25366 \n==========================================\n- Hits 19852 19735 -117 \n- Misses 5514 5631 +117 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5655/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+31.77%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=footer). Last update [2e6bb0e...98c16fd](https://codecov.io/gh/huggingface/transformers/pull/5655?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Create model card for T5-small fine-tuned on SQuAD v2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5655/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5655", "html_url": "https://github.com/huggingface/transformers/pull/5655", "diff_url": "https://github.com/huggingface/transformers/pull/5655.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5655.patch", "merged_at": 1594395492000 }
https://api.github.com/repos/huggingface/transformers/issues/5654
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5654/comments
https://api.github.com/repos/huggingface/transformers/issues/5654/events
https://github.com/huggingface/transformers/issues/5654
654,685,451
MDU6SXNzdWU2NTQ2ODU0NTE=
5,654
❓ Difficulty reproducing BART results on CNN/DM by fine-tuning bart-large
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Are the outputs produced by your best-checkpoint after fine-tuning producing proper outputs? or are the truncated at the end?\r\nI did fine-tune t5-small on CNN/DM but the best-checkpoint was producing outputs which were truncated in the end(for sample output, I just raised an issue, refer to that) and this was leading to reduced R1 scores too. Just wanted to know if you faced the same issue or if not what might be the reason for it, as I couldn't find why.\r\n\r\nThanks.", "@cola I haven't tried finetuning bart-large. Could take a pass if you have a command you are running that I can reproduce. Without code, I can speculate on ideas but I can't check if you are already doing them, so sorry if this is useless:\r\n\r\n(1)\r\n@tromedlov22 's idea reminds me that you should make sure you set config.task_specific_params\r\n```python\r\ndef use_task_specific_params(model, task):\r\n # update config with summarization specific params\r\n task_specific_params = model.config.task_specific_params\r\n if task_specific_params is not None:\r\n model.config.update(task_specific_params.get(task, {}))\r\nuse_task_specific_params(model, 'summarization')\r\n```\r\n(2)\r\nAnother idea, I suspect the authors checked rouge every epoch and stopped at the best validation rouge, (roughly what `finetune.py`) and that might help results.\r\n\r\nFor reference, the params I see are:\r\n```\r\n {'early_stopping': True,\r\n 'length_penalty': 2.0,\r\n 'max_length': 142,\r\n 'min_length': 56,\r\n 'no_repeat_ngram_size': 3,\r\n 'num_beams': 4}}\r\n\r\n```\r\n\r\n(3) IIRC, authors use `label_smoothing_cross_entropy` do you? \r\n(4) for cnn, truncation parameters matter on the target side.\r\n(5) if you are purely interested in reproducing finetuning performance, I would experiment with xsum since it trains 30% faster than cnn (shorter targets). (and make sure to use `AutoConfig.from_pretrained('facebook/bart-large-xsum')` params) You could also use wandb and then share your logs, which would allow me to give better advice.", "@tromedlov22 Thanks for the answer. I checked but the answer seems fine, not truncated at the end. I guess we are having different problem.\r\n\r\n@sshleifer Thanks for the very detailed answer !\r\nI can't give you a one-command for reproducing, I modified the example code to add missing details from the Fairseq repo, such as `label-smoothing` !\r\n\r\n---\r\n\r\n> (3) IIRC, authors use label_smoothing_cross_entropy do you?\r\n\r\nYes I do\r\n\r\n> Another idea, I suspect the authors checked rouge every epoch and stopped at the best validation rouge, (roughly what finetune.py) and that might help results.\r\n\r\nIndeed I'm saving only at the end of training. I will try that.\r\n\r\n> (5) if you are purely interested in reproducing finetuning performance, I would experiment with xsum since it trains 30% faster than cnn (shorter targets). (and make sure to use AutoConfig.from_pretrained('facebook/bart-large-xsum') params) You could also use wandb and then share your logs, which would allow me to give better advice.\r\n\r\nThanks for the advice !\r\n\r\n> (4) for cnn, truncation parameters matter on the target side.\r\n\r\nWhat do you mean ?", "That would be a very useful PR @cola ! ", "I could improve a my results by using early-stopping, thank you very much for the idea @sshleifer !\r\n\r\nNow I have **43.68** as R1. 
Almost the 44.16 from the paper!\r\n\r\nI'm trying to find what could cause this small difference, and I would love to hear your opinion about this:\r\n\r\nI'm training with batch-size 1 (I can't fit more in my 16GB memory). The authors fine-tuned it with batch-size 2 (with 32GB memory).\r\n\r\nCould it come from here? Does layer normalization act differently with single-sample batches, for example?", "I'm in a similar place with machine translation.\r\nThe things I know to be different from fairseq are:\r\n\r\n- [ ] (probably only matters for MT) their dataloader creates 1 batch for every N tokens.\r\n- [ ] dropout, attention_dropout (need to be set through config)\r\n- [ ] weight_decay = 0.1\r\n- [ ] adam_betas\r\n- [ ] lr_scheduler=polynomial_decay\r\n- [ ] warmup_updates\r\n- [ ] Did you figure out whether update_freq is the same as `gradient_accumulation_steps`?\r\n\r\nIf you have all those squared away, the only other thing I can think of is that the embeddings (we use `model.model.shared`, they don't) somehow become untied or get different gradients.\r\n\r\nLet me know if any of these have mattered, because I'm trying to prioritize what to implement in `transformers`", "Here is what I did so far:\r\n\r\n- [ ] (probably only matters for MT) their dataloader creates 1 batch for every N tokens.\r\n- [x] dropout, attention_dropout (need to be set through config)\r\n- [x] weight_decay = 0.1\r\n- [ ] adam_betas\r\n- [x] lr_scheduler=polynomial_decay\r\n- [x] warmup_updates\r\n- [ ] Did you figure out whether update_freq is the same as gradient_accumulation_steps?\r\n\r\nImplementing the first one seems complicated, so I didn't try.\r\n\r\nThanks for the help, the detailed list of things to try is awesome!\r\n\r\nSo far I'm satisfied with the results; they're really close to the paper's. Maybe some tiny difference in the code is responsible for the gap? If I have more time I will try the other things I didn't try so far :)", "I am having similar problems with this myself. @Colanim do you know which of your above changes had the largest impact, so I can begin with those?\r\n\r\n@sshleifer I think there is a bug with `label_smoothed_nll_loss`. I have tried using it with current master and I am getting infinite losses because the `bs` term is zero and this is the denominator in line 45 (`return loss / bs, nll_loss / bs`). ", "Wow, great catch. This line I wrote is broken in so many ways:\r\n\r\n\r\n```python\r\nbs = pad_mask.long().sum()  # pad mask has 1 where labels.eq(pad_token_id). This is num pad tokens in the batch....\r\n```\r\n\r\nI would delete the denominator if I were you.\r\n\r\nIn my experience: warmup_updates can help a lot, as well as playing with gradient_accumulation_batches. (more for MT, lower -> better). But interested in @Colanim 's experience.\r\n\r\nBTW, thanks to @stas00 you can now pass `--dropout`, `--attention_dropout`, `--decoder_layerdrop`, and `--encoder_layerdrop` through the command line.\r\n\r\n", "@Colanim can you rerun evaluation on your 43.68 R1 model?\r\nI hope that #6526 might have helped close the gap!\r\nIt doesn't help for bart-large-cnn, but it does help bart-large-xsum.", "Will try as soon as I can! I have to find my checkpoint... ^^", "What command are you using @Colanim? I get OOM even with BS=1 on a 32GB v100 GPU. 
@sshleifer \r\n\r\n```\r\npython finetune.py \\\r\n --data_dir=data/cnn_dm/ \\\r\n --output_dir=${RESULTS_DIR} \\\r\n --learning_rate=3e-5 \\\r\n --fp16 \\\r\n --gpus 8 \\\r\n --do_train \\\r\n --do_predict \\\r\n --n_val 1000 \\\r\n --val_check_interval 0.1 \\\r\n --train_batch_size=1 --gradient_accumulation_steps=4 \\\r\n --eval_batch_size=1 \\\r\n --max_steps 20000 --warmup_steps=500 \\\r\n --eval_max_gen_length=142 --max_source_length=1042 --max_target_length=56 \\\r\n --sortish_sampler \\\r\n --lr_scheduler polynomial \\\r\n --label_smoothing 0.1 \\\r\n --weight_decay 0.01 \\\r\n --dropout 0.1 --attention_dropout 0.1 --gradient_clip_val=0.1 --early_stop_callback=1\r\n```\r\n\r\nand initializing the model without AutoConfig as\r\n\r\n```\r\n    config = BartConfig(**json.load(open(args.config_path, \"r\")))\r\n    model = BartForConditionalGeneration(config)\r\n    tokenizer = BartTokenizer.from_pretrained(\r\n        'facebook/bart-large-cnn')  # Downloads vocab and merges file automatically\r\n```", "+ Try `--num_sanity_val_steps=0 --eval_beams 2`\r\n+ Cola is starting with `model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')`; this will do better than random init.", "That works initially but fails after ~15k steps. What eval_max_gen_length are you using? Not sure if you froze embeds as mentioned in #6711 for BART CNN/DM as well. \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 446, in <module>\r\n main(args)\r\n File \"finetune.py\", line 421, in main\r\n logger=logger,\r\n File \"/workspace/bart/lightning_base.py\", line 369, in generic_train\r\n trainer.fit(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/states.py\", line 48, in wrapped_fn\r\n result = fn(self, *args, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py\", line 1058, in fit\r\n results = self.accelerator_backend.spawn_ddp_children(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_backend.py\", line 123, in spawn_ddp_children\r\n results = self.ddp_train(local_rank, mp_queue=None, model=model, is_master=True)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_backend.py\", line 224, in ddp_train\r\n results = self.trainer.run_pretrain_routine(model)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py\", line 1239, in run_pretrain_routine\r\n self.train()\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py\", line 394, in train\r\n self.run_training_epoch()\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py\", line 516, in run_training_epoch\r\n self.run_evaluation(test_mode=False)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 582, in run_evaluation\r\n eval_results = self._evaluate(self.model, dataloaders, max_batches, test_mode)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 331, in _evaluate\r\n output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)\r\n File \"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 661, in evaluation_forward\r\n output = model(*args)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 577, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File 
\"/opt/conda/lib/python3.6/site-packages/pytorch_lightning/overrides/data_parallel.py\", line 174, in forward\r\n output = self.module.validation_step(*inputs[0], **kwargs[0])\r\n File \"finetune.py\", line 175, in validation_step\r\n return self._generative_step(batch)\r\n File \"finetune.py\", line 218, in _generative_step\r\n max_length=self.eval_max_length,\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/autograd/grad_mode.py\", line 15, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/workspace/bart/generation_utils.py\", line 469, in generate\r\n model_specific_kwargs=model_specific_kwargs,\r\n File \"/workspace/bart/generation_utils.py\", line 648, in _generate_beam_search\r\n outputs = self(**model_inputs) # (batch_size * num_beams, cur_len, vocab_size)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 577, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/workspace/bart/modeling_bart.py\", line 1037, in forward\r\n return_dict=return_dict,\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 577, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/workspace/bart/modeling_bart.py\", line 909, in forward\r\n return_dict=return_dict,\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 577, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/workspace/bart/modeling_bart.py\", line 570, in forward\r\n output_attentions=output_attentions,\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 577, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/workspace/bart/modeling_bart.py\", line 443, in forward\r\n x = self.activation_fn(self.fc1(x))\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 577, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/modules/linear.py\", line 87, in forward\r\n return F.linear(input, self.weight, self.bias)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/functional.py\", line 1676, in linear\r\n output = input.matmul(weight.t())\r\nRuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 15.78 GiB total capacity; 14.56 GiB already allocated; 11.44 MiB free; 14.79 GiB reserved in total by PyTorch)\r\n```", "Definitely use `--freeze_embeds`. I have never seen it hurt metrics. I have actually never tried to finetune on cnn_dm, but interested to hear your results!", "Still OOMs even with eval_beams=1. #7004 works for me, will confirm when I have it working e2e ", "Unfortunately I'm not working with BART anymore these days... I didn't try more experiments", "Hi, @Colanim , could you share you eval script that get a score of 44.09 with facebook/bart-large-cnn? Thanks!", "Basically I use `nlp` package to get the `cnn_dm` data, then run generation with :\r\n\r\n```\r\npreds = model.generate(samples['article'],\r\n num_beams=4, length_penalty=2,\r\n max_length=142, min_length=56,\r\n early_stopping=True,\r\n no_repeat_ngram_size=3)\r\n```\r\n\r\nand save the predictions and gold in text files. Then use the `files2rouge` package to get ROUGE scores.\r\n\r\nAlso don't forget to tokenize the predictions and gold with `StanFord CoreNLP` !", "Hi, @Colanim I tried to reproduce the paper's results from the checkpoint facebook/bart-large-cnn, but somehow my rouge1 score is only 42.62. 
I tried the following steps; could you help me find out what's wrong? Thanks!\r\n**infer:**\r\n```\r\nfrom transformers import BartTokenizer, BartForConditionalGeneration\r\ntokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')\r\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')\r\nsource_pwd = './test.source'\r\ninput_sents = open(source_pwd, 'r', encoding='utf8').readlines()\r\nwith open('./test.pred', 'w', encoding='utf8') as out:\r\n    inputs = tokenizer(input_sents, max_length=1024, return_tensors='pt', truncation=True, padding=True)\r\n    summary_ids = model.generate(inputs['input_ids'], num_beams=4, length_penalty=2, max_length=142, min_length=56, early_stopping=True, no_repeat_ngram_size=3)\r\n    for summary_id in summary_ids:\r\n        out.write(tokenizer.decode(summary_id, skip_special_tokens=True, clean_up_tokenization_spaces=False).strip() + '\\n')\r\n```\r\n**eval:**\r\ncat test.target | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.target.tokenized\r\ncat test.pred | java edu.stanford.nlp.process.PTBTokenizer -ioFileList -preserveLines > test.pred.tokenized\r\nfiles2rouge test.pred.tokenized test.target.tokenized\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "> @Cola I haven't tried finetuning bart-large. Could take a pass if you have a command you are running that I can reproduce. Without code, I can speculate on ideas but I can't check if you are already doing them, so sorry if this is useless:\r\n> \r\n> (1)\r\n> @tromedlov22 's idea reminds me that you should make sure you set config.task_specific_params\r\n> \r\n> ```python\r\n> def use_task_specific_params(model, task):\r\n>     # update config with summarization specific params\r\n>     task_specific_params = model.config.task_specific_params\r\n>     if task_specific_params is not None:\r\n>         model.config.update(task_specific_params.get(task, {}))\r\n> use_task_specific_params(model, 'summarization')\r\n> ```\r\n> \r\n> (2)\r\n> Another idea, I suspect the authors checked rouge every epoch and stopped at the best validation rouge, (roughly what `finetune.py`) and that might help results.\r\n> \r\n> For reference, the params I see are:\r\n> \r\n> ```\r\n> {'early_stopping': True,\r\n> 'length_penalty': 2.0,\r\n> 'max_length': 142,\r\n> 'min_length': 56,\r\n> 'no_repeat_ngram_size': 3,\r\n> 'num_beams': 4}}\r\n> ```\r\n> \r\n> (3) IIRC, authors use `label_smoothing_cross_entropy` do you?\r\n> (4) for cnn, truncation parameters matter on the target side.\r\n> (5) if you are purely interested in reproducing finetuning performance, I would experiment with xsum since it trains 30% faster than cnn (shorter targets). (and make sure to use `AutoConfig.from_pretrained('facebook/bart-large-xsum')` params) You could also use wandb and then share your logs, which would allow me to give better advice.\r\n\r\nHi @sshleifer, I'm trying to test the best fine-tuned SUMM model on the CNNDM dataset. But it seems like I need to use args.use_task_specific_params, and I can't use it by simply adding --task_specific_params. Is there a solution for that? " ]
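Following up on the `label_smoothed_nll_loss` bug identified in the thread above (the denominator counted pad tokens, which can be zero), here is a corrected sketch that normalizes by the number of non-pad tokens instead; the shapes and the masking strategy are my assumptions, not taken verbatim from the repo:

```python
# Corrected label-smoothed NLL sketch. Assumes lprobs has shape
# (n_tokens, vocab_size) and target has shape (n_tokens, 1).
import torch

def label_smoothed_nll_loss(lprobs, target, epsilon, ignore_index):
    pad_mask = target.eq(ignore_index)
    # Replace pad positions with a valid index before gather, then zero them out.
    safe_target = target.masked_fill(pad_mask, 0)
    nll_loss = -lprobs.gather(dim=-1, index=safe_target).masked_fill(pad_mask, 0.0)
    smooth_loss = -lprobs.sum(dim=-1, keepdim=True).masked_fill(pad_mask, 0.0)
    n_tokens = (~pad_mask).sum()  # normalize by real tokens, not pad tokens
    nll_loss = nll_loss.sum() / n_tokens
    smooth_loss = smooth_loss.sum() / n_tokens
    eps_i = epsilon / lprobs.size(-1)
    return (1.0 - epsilon) * nll_loss + eps_i * smooth_loss, nll_loss
```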
1,594
1,614
1,608
CONTRIBUTOR
null
# ❓ Help I'm trying to fine-tune BART on CNN/DM by myself (so, starting from the `facebook/bart-large` checkpoint). However, I can't reproduce the results so far... The BART authors report an R1 score of `44.16` in their paper, but my best checkpoint so far only reaches `42.53`. It's not an issue with the eval script, as I can reproduce the authors' results from the checkpoint `facebook/bart-large-cnn`: I get a score of `44.09` using this checkpoint. I tried several hyper-parameters: the ones provided in the example folder, but also the ones used in the fairseq repo. It doesn't change anything... --- I'm a bit at a loss on how to reproduce these fine-tuning scores... Has anyone fine-tuned BART successfully using the `transformers` repo? If yes, can you share your parameters? Any help would be greatly appreciated! @sshleifer
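For the evaluation side, here is a sketch of applying the checkpoint's summarization-specific generation parameters (the `use_task_specific_params` pattern from the comments above) before calling `generate`; the article string is a placeholder:

```python
# Sketch: evaluate with the generation params stored in the checkpoint's config.
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")

# Pull num_beams=4, length_penalty=2.0, min/max_length, etc. into the config
# so generate() uses the same settings the paper's numbers were produced with.
task_params = (model.config.task_specific_params or {}).get("summarization", {})
model.config.update(task_params)

batch = tokenizer(["A long news article ..."], truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(batch["input_ids"], attention_mask=batch["attention_mask"])
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```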
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5654/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5653
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5653/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5653/comments
https://api.github.com/repos/huggingface/transformers/issues/5653/events
https://github.com/huggingface/transformers/issues/5653
654,673,239
MDU6SXNzdWU2NTQ2NzMyMzk=
5,653
AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
{ "login": "Single430", "id": 7894408, "node_id": "MDQ6VXNlcjc4OTQ0MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/7894408?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Single430", "html_url": "https://github.com/Single430", "followers_url": "https://api.github.com/users/Single430/followers", "following_url": "https://api.github.com/users/Single430/following{/other_user}", "gists_url": "https://api.github.com/users/Single430/gists{/gist_id}", "starred_url": "https://api.github.com/users/Single430/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Single430/subscriptions", "organizations_url": "https://api.github.com/users/Single430/orgs", "repos_url": "https://api.github.com/users/Single430/repos", "events_url": "https://api.github.com/users/Single430/events{/privacy}", "received_events_url": "https://api.github.com/users/Single430/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I also got the same issue.\r\nMaybe you can try `BertTokenizer.from_pretrained(\"hfl/chinese-roberta-wwm-ext\")`\r\nIt works for me.\r\n", "> I also got the same issue.\r\n> Maybe you can try `BertTokenizer.from_pretrained(\"hfl/chinese-roberta-wwm-ext\")`\r\n> It works for me.\r\n\r\nYes!! I succeeded, thank you very much for your help!" ]
1,594
1,594
1,594
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details ``` > from transformers import AutoTokenizer, AutoModelWithLMHead > tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext") I0710 17:52:53.548153 139925919450880 tokenization_utils_base.py:1167] Model name 'hfl/chinese-roberta-wwm-ext' not found in model shortcut name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). Assuming 'hfl/chinese-roberta-wwm-ext' is a path, a model identifier, or url to a directory containing tokenizer files. I0710 17:52:59.942922 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/vocab.json from cache at None I0710 17:52:59.943219 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/merges.txt from cache at None I0710 17:52:59.943420 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/added_tokens.json from cache at /home/ubuntu/.cache/torch/transformers/23740a16768d945f44a24590dc8f5e572773b1b2868c5e58f7ff4fae2a721c49.3889713104075cfee9e96090bcdd0dc753733b3db9da20d1dd8b2cd1030536a2 I0710 17:52:59.943602 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/special_tokens_map.json from cache at /home/ubuntu/.cache/torch/transformers/6f13f9fe28f96dd7be36b84708332115ef90b3b310918502c13a8f719a225de2.275045728fbf41c11d3dae08b8742c054377e18d92cc7b72b6351152a99b64e4 I0710 17:52:59.943761 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/tokenizer_config.json from cache at /home/ubuntu/.cache/torch/transformers/5bb5761fdb6c8f42bf7705c27c48cffd8b40afa8278fa035bc81bf288f108af9.1ade4e0ac224a06d83f2cb9821a6656b6b59974d6552e8c728f2657e4ba445d9 I0710 17:52:59.943786 139925919450880 tokenization_utils_base.py:1254] loading file https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-roberta-wwm-ext/tokenizer.json from cache at None Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_auto.py", line 217, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1140, in from_pretrained return cls._from_pretrained(*inputs, **kwargs) File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1288, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File 
"/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_roberta.py", line 171, in __init__ **kwargs, File "/home/ubuntu/anaconda3/envs/deeplearning/lib/python3.6/site-packages/transformers/tokenization_gpt2.py", line 167, in __init__ with open(vocab_file, encoding="utf-8") as vocab_handle: TypeError: expected str, bytes or os.PathLike object, not NoneType ``` Does it support `hfl/chinese-roberta-wwm-ext` now? Or what should i do. Hope for help, thx! @julien-c <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5653/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5652
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5652/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5652/comments
https://api.github.com/repos/huggingface/transformers/issues/5652/events
https://github.com/huggingface/transformers/pull/5652
654,644,380
MDExOlB1bGxSZXF1ZXN0NDQ3MzI5MDgy
5,652
Create README.md - Model card for sentence-transformers/bert-base-nli-mean-tokens
{ "login": "nreimers", "id": 10706961, "node_id": "MDQ6VXNlcjEwNzA2OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/10706961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nreimers", "html_url": "https://github.com/nreimers", "followers_url": "https://api.github.com/users/nreimers/followers", "following_url": "https://api.github.com/users/nreimers/following{/other_user}", "gists_url": "https://api.github.com/users/nreimers/gists{/gist_id}", "starred_url": "https://api.github.com/users/nreimers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nreimers/subscriptions", "organizations_url": "https://api.github.com/users/nreimers/orgs", "repos_url": "https://api.github.com/users/nreimers/repos", "events_url": "https://api.github.com/users/nreimers/events{/privacy}", "received_events_url": "https://api.github.com/users/nreimers/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks for sharing! Note that we don't currently have automated deployment on ExBERT (cc @bhoov)\r\n\r\n➡️ [model page](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens)" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Model card for https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5652/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5652", "html_url": "https://github.com/huggingface/transformers/pull/5652", "diff_url": "https://github.com/huggingface/transformers/pull/5652.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5652.patch", "merged_at": 1594374071000 }
https://api.github.com/repos/huggingface/transformers/issues/5651
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5651/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5651/comments
https://api.github.com/repos/huggingface/transformers/issues/5651/events
https://github.com/huggingface/transformers/issues/5651
654,565,297
MDU6SXNzdWU2NTQ1NjUyOTc=
5,651
T5 fp16 overflow in forward (T5DenseReluDense)
{ "login": "lior1990", "id": 20380399, "node_id": "MDQ6VXNlcjIwMzgwMzk5", "avatar_url": "https://avatars.githubusercontent.com/u/20380399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lior1990", "html_url": "https://github.com/lior1990", "followers_url": "https://api.github.com/users/lior1990/followers", "following_url": "https://api.github.com/users/lior1990/following{/other_user}", "gists_url": "https://api.github.com/users/lior1990/gists{/gist_id}", "starred_url": "https://api.github.com/users/lior1990/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lior1990/subscriptions", "organizations_url": "https://api.github.com/users/lior1990/orgs", "repos_url": "https://api.github.com/users/lior1990/repos", "events_url": "https://api.github.com/users/lior1990/events{/privacy}", "received_events_url": "https://api.github.com/users/lior1990/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "See: #4586" ]
1,594
1,594
1,594
NONE
null
# 🐛 Bug Using `AutoModelWithLMHead.from_pretrained("t5-base")` for fine-tuning, after 34 iterations I get a nan loss from the forward method. After debugging it, I found that the source of the nan is an overflow that happens in `T5DenseReluDense` when running `h = self.wo(h)`. The result of this forward is a tensor that has `inf` in one of its values, which later on causes the nan loss. I looked into this calculation in fp32 and saw that this `inf` is caused by a value of 66246.3906, which is over the maximum value of 65504 representable in fp16. This issue only happens with fp16 (opt_level="O1"); with opt_level="O0" everything is fine. ## Information Model I am using (Bert, XLNet ...): T5 Language I am using the model on (English, Chinese ...): English The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce I don't have step-by-step instructions, because I would need to upload my entire dataset for that. I have a pickle of the vector `h` and the weights of `self.wo` that cause the overflow in `T5DenseReluDense`; I can upload it if it might help. ## Expected behavior Get a numeric (finite) loss. ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-1030-aws-x86_64-with-debian-buster-sid - Python version: 3.6.10 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
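For reference, a sketch of one mitigation (the clamping approach later adopted in `transformers` for T5, stated here as my understanding rather than a quote of the merged fix): cap fp16 hidden states just below the representable range so a single large activation cannot become `inf`:

```python
# Sketch: clamp fp16 activations below the fp16 max (65504) to avoid inf.
import torch

def clamp_to_fp16_range(hidden_states):
    if hidden_states.dtype == torch.float16:
        clamp_value = torch.finfo(torch.float16).max - 1000  # 64504.0
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states

h = torch.tensor([66246.3906, 1.0]).half()  # the overflowing value from above -> inf
print(clamp_to_fp16_range(h))               # finite values instead of inf
```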
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5651/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5651/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5650
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5650/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5650/comments
https://api.github.com/repos/huggingface/transformers/issues/5650/events
https://github.com/huggingface/transformers/issues/5650
654,510,012
MDU6SXNzdWU2NTQ1MTAwMTI=
5,650
Wrong answers from Longformer model even on simple questions
{ "login": "danishpruthi", "id": 4627113, "node_id": "MDQ6VXNlcjQ2MjcxMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4627113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danishpruthi", "html_url": "https://github.com/danishpruthi", "followers_url": "https://api.github.com/users/danishpruthi/followers", "following_url": "https://api.github.com/users/danishpruthi/following{/other_user}", "gists_url": "https://api.github.com/users/danishpruthi/gists{/gist_id}", "starred_url": "https://api.github.com/users/danishpruthi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danishpruthi/subscriptions", "organizations_url": "https://api.github.com/users/danishpruthi/orgs", "repos_url": "https://api.github.com/users/danishpruthi/repos", "events_url": "https://api.github.com/users/danishpruthi/events{/privacy}", "received_events_url": "https://api.github.com/users/danishpruthi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Yeah this is related to a bug, see: https://github.com/huggingface/transformers/pull/4615\r\n\r\ncc @mfuntowicz @julien-c - we should refactor the squad preprocessing in pipelines to make longformer work.", "Hi @patrickvonplaten: are there are any updates with respect to this?", "We will probably start working on a fix in ~2 weeks.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,602
1,602
NONE
null
I am using the pretrained model `allenai/longformer-large-4096-finetuned-triviaqa`, however upon inspecting it on my system and the demo on the huggingface website, the outputs seem off even for very simple examples and samples from the dataset. 1. [Example 1](https://www.dropbox.com/s/9h3dcqpwq0n1b05/download%20%283%29.png?dl=0) 2. [Example 2](https://www.dropbox.com/s/40e93m2odix8x1p/download%20%284%29.png?dl=0) 3. [Example 3](https://www.dropbox.com/s/s5t1k6jluyzfs33/download%20%286%29.png?dl=0) 4. [Example 4](https://www.dropbox.com/s/oyps5a5gr2e4c25/download%20%287%29.png?dl=0) Other models for QA (like `bert-large-uncased-whole-word-masking`) get such simple examples right.
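For comparison, a sketch of the decoding pattern the library docs show for this checkpoint (TriviaQA-style extraction from start/end logits); if answers are still wrong with this exact pattern, the problem lies in the checkpoint or the pipeline preprocessing rather than the caller:

```python
# Sketch following the documented LongformerForQuestionAnswering usage.
import torch
from transformers import LongformerTokenizer, LongformerForQuestionAnswering

name = "allenai/longformer-large-4096-finetuned-triviaqa"
tokenizer = LongformerTokenizer.from_pretrained(name)
model = LongformerForQuestionAnswering.from_pretrained(name)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet."
encoding = tokenizer(question, text, return_tensors="pt")
start_logits, end_logits = model(**encoding)[:2]

all_tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
answer_tokens = all_tokens[torch.argmax(start_logits): torch.argmax(end_logits) + 1]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens)))
```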
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5650/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5649
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5649/comments
https://api.github.com/repos/huggingface/transformers/issues/5649/events
https://github.com/huggingface/transformers/issues/5649
654,488,767
MDU6SXNzdWU2NTQ0ODg3Njc=
5,649
Bugs due to design choices in LongformerTokenizer
{ "login": "danishpruthi", "id": 4627113, "node_id": "MDQ6VXNlcjQ2MjcxMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4627113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danishpruthi", "html_url": "https://github.com/danishpruthi", "followers_url": "https://api.github.com/users/danishpruthi/followers", "following_url": "https://api.github.com/users/danishpruthi/following{/other_user}", "gists_url": "https://api.github.com/users/danishpruthi/gists{/gist_id}", "starred_url": "https://api.github.com/users/danishpruthi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danishpruthi/subscriptions", "organizations_url": "https://api.github.com/users/danishpruthi/orgs", "repos_url": "https://api.github.com/users/danishpruthi/repos", "events_url": "https://api.github.com/users/danishpruthi/events{/privacy}", "received_events_url": "https://api.github.com/users/danishpruthi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for this issue. Longformer was trained on trivia_qa by default and not squad, so the model is not by default compatible with `squad` and needs some special post processing as shown in the example of this model, here:\r\nhttps://huggingface.co/transformers/model_doc/longformer.html#longformerforquestionanswering\r\n\r\nThis is also related to: https://github.com/huggingface/transformers/pull/4615 ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
I am using the transformers library from source (version: 3.0.2). ```python import transformers from transformers import * longformer_tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") longformer_tokenizer.tokenize("This is a sample sentence for the tokenizer.") ``` The output I get is ``` ['This', 'Ġis', 'Ġa', 'Ġsample', 'Ġsentence', 'Ġfor', 'Ġthe', 'Ġtoken', 'izer']``` The design choice here is to prefix every new word with `Ġ` (except for the first word). This is in contrast with WordPiece-style tokenizers, which instead insert `##` markers on the suffixes of split words. Due to this tokenization quirk, many things could break; one of them is the following piece of code in `squad.py`: https://github.com/huggingface/transformers/blob/02a0b43014ac333a169e99d76aaba023a316e384/src/transformers/data/processors/squad.py#L106-L112 The `doc_tokens` from the processor are whitespace-separated tokens, which are to be further tokenized using this code. But since each word is tokenized individually, and `LongformerTokenizer` doesn't insert `Ġ` for the first token of a word, there is a problem: the resulting `all_doc_tokens` cannot be correctly converted back to the original string using `tokenizer.convert_tokens_to_string`, because each word's first sub-token is missing the `Ġ` at the start.
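A sketch illustrating the quirk and one way around it; note that the `add_prefix_space` keyword is assumed to be accepted here, as it is by the GPT-2/RoBERTa byte-level BPE tokenizers this one derives from:

```python
# Sketch: the first word gets no "Ġ", so per-word tokenization doesn't round-trip.
from transformers import LongformerTokenizer

tok = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
print(tok.tokenize("sentence"))                         # ['sentence']  - no leading 'Ġ'
print(tok.tokenize("sentence", add_prefix_space=True))  # ['Ġsentence'] - joins back with a space
```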
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5649/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5648
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5648/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5648/comments
https://api.github.com/repos/huggingface/transformers/issues/5648/events
https://github.com/huggingface/transformers/issues/5648
654,477,198
MDU6SXNzdWU2NTQ0NzcxOTg=
5,648
Classification accuracy on validation set didn't improve while fine-tuning BERT
{ "login": "rxlian", "id": 35382484, "node_id": "MDQ6VXNlcjM1MzgyNDg0", "avatar_url": "https://avatars.githubusercontent.com/u/35382484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rxlian", "html_url": "https://github.com/rxlian", "followers_url": "https://api.github.com/users/rxlian/followers", "following_url": "https://api.github.com/users/rxlian/following{/other_user}", "gists_url": "https://api.github.com/users/rxlian/gists{/gist_id}", "starred_url": "https://api.github.com/users/rxlian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rxlian/subscriptions", "organizations_url": "https://api.github.com/users/rxlian/orgs", "repos_url": "https://api.github.com/users/rxlian/repos", "events_url": "https://api.github.com/users/rxlian/events{/privacy}", "received_events_url": "https://api.github.com/users/rxlian/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello! This is an interesting question, but is kind of out of scope for the Github issues. We just opened a forum at [discuss.huggingface.co](https://discuss.huggingface.co). Do you think you could ask your question over there?\r\n\r\nThank you.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> I was fine-tuning BERT with BertForSequenceClassification on my own dataset. 80% of the dataset was used as the training set and the rest as the validation set. The training loss decreased during training. However, the accuracy on the validation set stayed around 0.5, which is no better than random guessing, and it barely improved with fine-tuning: from epochs 1-3, accuracy ranged from 0.48 to 0.52. So I was wondering whether this problem is caused by my dataset itself, or whether I did something wrong while fine-tuning. Does anybody have any ideas on this? By the way, before this I fine-tuned BERT on another dataset and it did improve the classification accuracy a lot. <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5648/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5647
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5647/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5647/comments
https://api.github.com/repos/huggingface/transformers/issues/5647/events
https://github.com/huggingface/transformers/issues/5647
654,476,312
MDU6SXNzdWU2NTQ0NzYzMTI=
5,647
T5 TorchScript (Trace) Conversion
{ "login": "gyin94", "id": 67664443, "node_id": "MDQ6VXNlcjY3NjY0NDQz", "avatar_url": "https://avatars.githubusercontent.com/u/67664443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gyin94", "html_url": "https://github.com/gyin94", "followers_url": "https://api.github.com/users/gyin94/followers", "following_url": "https://api.github.com/users/gyin94/following{/other_user}", "gists_url": "https://api.github.com/users/gyin94/gists{/gist_id}", "starred_url": "https://api.github.com/users/gyin94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gyin94/subscriptions", "organizations_url": "https://api.github.com/users/gyin94/orgs", "repos_url": "https://api.github.com/users/gyin94/repos", "events_url": "https://api.github.com/users/gyin94/events{/privacy}", "received_events_url": "https://api.github.com/users/gyin94/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @gyin-ai - can you specify your version?\r\n\r\nI cannot reproduce the error on master.", "In master, the above example works for me but it doesn't work for T5ForConditionalGeneration\r\n\r\n```\r\nfrom transformers import T5ForConditionalGeneration\r\nimport torch\r\n\r\ntokens_tensor = torch.ones(1, 10, dtype=torch.long)\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-small\", torchscript=True)\r\nmodel.eval()\r\nscripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor))\r\n```\r\nIt fails with the same error\r\n```ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds```", "Sadly, I don't have a good answer here :-/ \r\n\r\nThe problem is that `decoder_input_ids` is not the second argument -> so that's why your function does not work. \r\nThis PR would make it possible to run your code: #6268 , but it does not really solve the problem because one might want to use `input_embeds` instead of `input_ids` and she/he would run into the same problem. It would allow for torchtrace for the most general case though...\r\n\r\ni guess since usually one passes `input_ids` and `decoder_input_ids`, we could merge the PR...What do you think? @LysandreJik ", "```\r\nimport torch\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"google/flan-t5-large\", torchscript=True)\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"google/flan-t5-large\", torchscript=True)\r\n\r\ntokenized_dict = tokenizer(\r\n [\"please answer the following question: what is the boiling point of nitrogen\",], [\"-320.4F\",], \r\n return_tensors=\"pt\"\r\n)\r\ninput_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])\r\n\r\ntraced_model = torch.jit.trace(model, input_tuple)\r\ntorch.jit.save(traced_model, \"flan-t5-large.pt\")\r\n```\r\n\r\nI was trying to trace `google/flan-t5-large` model in torchScript. 
But I'm facing following exception:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nInput In [29], in <cell line: 13>()\r\n 7 tokenized_dict = tokenizer(\r\n 8 [\"please answer the following question: what is the boiling point of nitrogen\",], [\"-320.4F\",], \r\n 9 return_tensors=\"pt\"\r\n 10 )\r\n 11 input_tuple = (tokenized_dict['input_ids'], tokenized_dict['attention_mask'])\r\n---> 13 traced_model = torch.jit.trace(model, input_tuple)\r\n 14 torch.jit.save(traced_model, \"flan-t5-large.pt\")\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:759, in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)\r\n 756 return func\r\n 758 if isinstance(func, torch.nn.Module):\r\n--> 759 return trace_module(\r\n 760 func,\r\n 761 {\"forward\": example_inputs},\r\n 762 None,\r\n 763 check_trace,\r\n 764 wrap_check_inputs(check_inputs),\r\n 765 check_tolerance,\r\n 766 strict,\r\n 767 _force_outplace,\r\n 768 _module_class,\r\n 769 )\r\n 771 if (\r\n 772 hasattr(func, \"__self__\")\r\n 773 and isinstance(func.__self__, torch.nn.Module)\r\n 774 and func.__name__ == \"forward\"\r\n 775 ):\r\n 776 return trace_module(\r\n 777 func.__self__,\r\n 778 {\"forward\": example_inputs},\r\n (...)\r\n 785 _module_class,\r\n 786 )\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/torch/jit/_trace.py:976, in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit)\r\n 972 argument_names = get_callable_argument_names(func)\r\n 974 example_inputs = make_tuple(example_inputs)\r\n--> 976 module._c._create_method_from_trace(\r\n 977 method_name,\r\n 978 func,\r\n 979 example_inputs,\r\n 980 var_lookup_fn,\r\n 981 strict,\r\n 982 _force_outplace,\r\n 983 argument_names,\r\n 984 )\r\n 985 check_trace_method = module._c._get_method(method_name)\r\n 987 # Check the trace against new traces created from user-specified inputs\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs)\r\n 1180 recording_scopes = False\r\n 1181 try:\r\n-> 1182 result = self.forward(*input, **kwargs)\r\n 1183 finally:\r\n 1184 if recording_scopes:\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:1660, in T5ForConditionalGeneration.forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1657 decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device)\r\n 1659 # Decode\r\n-> 1660 
decoder_outputs = self.decoder(\r\n 1661 input_ids=decoder_input_ids,\r\n 1662 attention_mask=decoder_attention_mask,\r\n 1663 inputs_embeds=decoder_inputs_embeds,\r\n 1664 past_key_values=past_key_values,\r\n 1665 encoder_hidden_states=hidden_states,\r\n 1666 encoder_attention_mask=attention_mask,\r\n 1667 head_mask=decoder_head_mask,\r\n 1668 cross_attn_head_mask=cross_attn_head_mask,\r\n 1669 use_cache=use_cache,\r\n 1670 output_attentions=output_attentions,\r\n 1671 output_hidden_states=output_hidden_states,\r\n 1672 return_dict=return_dict,\r\n 1673 )\r\n 1675 sequence_output = decoder_outputs[0]\r\n 1677 # Set device for model parallelism\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1194, in Module._call_impl(self, *input, **kwargs)\r\n 1190 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1191 # this function, and just call forward.\r\n 1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\r\n 1193 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1194 return forward_call(*input, **kwargs)\r\n 1195 # Do not call functions when jit is used\r\n 1196 full_backward_hooks, non_full_backward_hooks = [], []\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1182, in Module._slow_forward(self, *input, **kwargs)\r\n 1180 recording_scopes = False\r\n 1181 try:\r\n-> 1182 result = self.forward(*input, **kwargs)\r\n 1183 finally:\r\n 1184 if recording_scopes:\r\n\r\nFile ~/Library/Python/3.9/lib/python/site-packages/transformers/models/t5/modeling_t5.py:949, in T5Stack.forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 947 else:\r\n 948 err_msg_prefix = \"decoder_\" if self.is_decoder else \"\"\r\n--> 949 raise ValueError(f\"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds\")\r\n 951 if inputs_embeds is None:\r\n 952 assert self.embed_tokens is not None, \"You have to initialize the model with valid token embeddings\"\r\n\r\nValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds\r\n```\r\n\r\nHow should I trace the T5 model? Can you provide an example? Thanks", "@dhrubo-os have you fixed it? I am also seeing the same issue.", "@dhrubo-os it can be fixed; we just need to pass the inputs as below:\r\n```\r\ntraced_token_predictor = torch.jit.trace(model,\r\n [\r\n input_ids[\"input_ids\"],\r\n input_ids[\"attention_mask\"],\r\n decoder_input_ids[\"input_ids\"]\r\n ])\r\n```\r\nSince the model's second argument is attention_mask, it takes decoder_input_ids as None unless it is passed explicitly." ]
1,594
1,685
1,601
NONE
null
# ❓ Questions & Help How can we correctly set the inputs to trace T5 with TorchScript? ## Details <!-- Description of your issue --> ```python from transformers import T5Model import torch tokens_tensor = torch.ones(1, 10, dtype=torch.long) model = T5Model.from_pretrained("t5-small", torchscript=True) model.eval() scripted_model = torch.jit.trace(model, (tokens_tensor, tokens_tensor)) ``` Error: ``` ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ```
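Based on the resolution in the comments above, a sketch of a trace call that should work: pass `decoder_input_ids` explicitly, minding the positional order of `forward`. The argument order is an assumption that depends on the installed version (in recent versions the second positional argument is `attention_mask`, so `decoder_input_ids` comes third; check the signature for your version):

```python
import torch
from transformers import T5Model

model = T5Model.from_pretrained("t5-small", torchscript=True)
model.eval()

input_ids = torch.ones(1, 10, dtype=torch.long)
attention_mask = torch.ones(1, 10, dtype=torch.long)
decoder_input_ids = torch.ones(1, 10, dtype=torch.long)

# Without an explicit decoder_input_ids, the decoder receives None and
# raises the ValueError shown above.
traced = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
```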
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5647/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5647/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5646
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5646/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5646/comments
https://api.github.com/repos/huggingface/transformers/issues/5646/events
https://github.com/huggingface/transformers/issues/5646
654,458,686
MDU6SXNzdWU2NTQ0NTg2ODY=
5,646
Can't get (global) attention probs using Longformer
{ "login": "k141303", "id": 25025195, "node_id": "MDQ6VXNlcjI1MDI1MTk1", "avatar_url": "https://avatars.githubusercontent.com/u/25025195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/k141303", "html_url": "https://github.com/k141303", "followers_url": "https://api.github.com/users/k141303/followers", "following_url": "https://api.github.com/users/k141303/following{/other_user}", "gists_url": "https://api.github.com/users/k141303/gists{/gist_id}", "starred_url": "https://api.github.com/users/k141303/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/k141303/subscriptions", "organizations_url": "https://api.github.com/users/k141303/orgs", "repos_url": "https://api.github.com/users/k141303/repos", "events_url": "https://api.github.com/users/k141303/events{/privacy}", "received_events_url": "https://api.github.com/users/k141303/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @k141303, \r\n\r\nThanks a lot for the issue - I can reproduce!", "Thanks a lot for your very clean issue + proposed solution. It makes it very easy to find the error and fix it :-) \r\n\r\nBTW, in cases like this issue when you see a clear fix to the bug, Pull Requests are very welcome as well!", "Hi, @patrickvonplaten\r\n\r\nI also thought this was the solution, but it turned out to create a new bug. \r\n\r\n## To reproduce\r\nSteps to reproduce the behavior:\r\n\r\n1. Set config.output_attentions=True\r\n1. Use global attention (sum(global_attention_mask)>0)\r\n1. **Use multiple GPUs**\r\n1. **`max_num_global_attn_indices` is different in the batch**\r\n\r\nI confirmed it with the following code. (Apply the above solution by overriding.)\r\n\r\n~~~python\r\nimport math\r\nimport torch\r\nfrom torch.nn import functional as F\r\nfrom transformers import LongformerModel, AutoTokenizer, AutoConfig\r\nfrom transformers.modeling_longformer import LongformerSelfAttention\r\n\r\nclass MyLongformerSelfAttention(LongformerSelfAttention):\r\n def forward(\r\n self, hidden_states, attention_mask=None, output_attentions=False,\r\n ):\r\n\r\n attention_mask = attention_mask.squeeze(dim=2).squeeze(dim=1)\r\n\r\n # is index masked or global attention\r\n is_index_masked = attention_mask < 0\r\n is_index_global_attn = attention_mask > 0\r\n is_global_attn = any(is_index_global_attn.flatten())\r\n\r\n hidden_states = hidden_states.transpose(0, 1)\r\n\r\n # project hidden states\r\n query_vectors = self.query(hidden_states)\r\n key_vectors = self.key(hidden_states)\r\n value_vectors = self.value(hidden_states)\r\n\r\n seq_len, batch_size, embed_dim = hidden_states.size()\r\n assert (\r\n embed_dim == self.embed_dim\r\n ), f\"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}\"\r\n\r\n # normalize query\r\n query_vectors /= math.sqrt(self.head_dim)\r\n\r\n query_vectors = query_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)\r\n key_vectors = key_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)\r\n\r\n # attn_probs = (batch_size, seq_len, num_heads, window*2+1)\r\n attn_scores = self._sliding_chunks_query_key_matmul(\r\n query_vectors, key_vectors, self.one_sided_attn_window_size\r\n )\r\n\r\n # values to pad for attention probs\r\n remove_from_windowed_attention_mask = (attention_mask != 0).unsqueeze(dim=-1).unsqueeze(dim=-1)\r\n\r\n # cast to fp32/fp16 then replace 1's with -inf\r\n float_mask = remove_from_windowed_attention_mask.type_as(query_vectors).masked_fill(\r\n remove_from_windowed_attention_mask, -10000.0\r\n )\r\n # diagonal mask with zeros everywhere and -inf inplace of padding\r\n diagonal_mask = self._sliding_chunks_query_key_matmul(\r\n float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size\r\n )\r\n\r\n # pad local attention probs\r\n attn_scores += diagonal_mask\r\n\r\n assert list(attn_scores.size()) == [\r\n batch_size,\r\n seq_len,\r\n self.num_heads,\r\n self.one_sided_attn_window_size * 2 + 1,\r\n ], f\"attn_probs should be of size ({batch_size}, {seq_len}, {self.num_heads}, {self.one_sided_attn_window_size * 2 + 1}), but is of size {attn_scores.size()}\"\r\n\r\n # compute local attention probs from global attention keys and contact over window dim\r\n if is_global_attn:\r\n # compute global attn indices required through out forward fn\r\n (\r\n max_num_global_attn_indices,\r\n is_index_global_attn_nonzero,\r\n is_local_index_global_attn_nonzero,\r\n 
is_local_index_no_global_attn_nonzero,\r\n ) = self._get_global_attn_indices(is_index_global_attn)\r\n # calculate global attn probs from global key\r\n global_key_attn_scores = self._concat_with_global_key_attn_probs(\r\n query_vectors=query_vectors,\r\n key_vectors=key_vectors,\r\n max_num_global_attn_indices=max_num_global_attn_indices,\r\n is_index_global_attn_nonzero=is_index_global_attn_nonzero,\r\n is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,\r\n is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero,\r\n )\r\n # concat to attn_probs\r\n # (batch_size, seq_len, num_heads, extra attention count + 2*window+1)\r\n attn_scores = torch.cat((global_key_attn_scores, attn_scores), dim=-1)\r\n\r\n # free memory\r\n del global_key_attn_scores\r\n\r\n attn_probs_fp32 = F.softmax(attn_scores, dim=-1, dtype=torch.float32) # use fp32 for numerical stability\r\n attn_probs = attn_probs_fp32.type_as(attn_scores)\r\n\r\n # free memory\r\n del attn_probs_fp32\r\n\r\n # softmax sometimes inserts NaN if all positions are masked, replace them with 0\r\n attn_probs = torch.masked_fill(attn_probs, is_index_masked.unsqueeze(-1).unsqueeze(-1), 0.0)\r\n\r\n # apply dropout\r\n attn_probs = F.dropout(attn_probs, p=self.dropout, training=self.training)\r\n\r\n value_vectors = value_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)\r\n\r\n # compute local attention output with global attention value and add\r\n if is_global_attn:\r\n # compute sum of global and local attn\r\n attn_output = self._compute_attn_output_with_global_indices(\r\n value_vectors=value_vectors,\r\n attn_probs=attn_probs,\r\n max_num_global_attn_indices=max_num_global_attn_indices,\r\n is_index_global_attn_nonzero=is_index_global_attn_nonzero,\r\n is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,\r\n )\r\n else:\r\n # compute local attn only\r\n attn_output = self._sliding_chunks_matmul_attn_probs_value(\r\n attn_probs, value_vectors, self.one_sided_attn_window_size\r\n )\r\n\r\n assert attn_output.size() == (batch_size, seq_len, self.num_heads, self.head_dim), \"Unexpected size\"\r\n attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous()\r\n\r\n # compute value for global attention and overwrite to attention output\r\n # TODO: remove the redundant computation\r\n if is_global_attn:\r\n global_attn_output = self._compute_global_attn_output_from_hidden(\r\n hidden_states=hidden_states,\r\n max_num_global_attn_indices=max_num_global_attn_indices,\r\n is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,\r\n is_index_global_attn_nonzero=is_index_global_attn_nonzero,\r\n is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero,\r\n is_index_masked=is_index_masked,\r\n )\r\n\r\n # get only non zero global attn output\r\n nonzero_global_attn_output = global_attn_output[\r\n is_local_index_global_attn_nonzero[0], :, is_local_index_global_attn_nonzero[1]\r\n ]\r\n # overwrite values with global attention\r\n attn_output[is_index_global_attn_nonzero[::-1]] = nonzero_global_attn_output.view(\r\n len(is_local_index_global_attn_nonzero[0]), -1\r\n )\r\n\r\n attn_output = attn_output.transpose(0, 1)\r\n\r\n if output_attentions:\r\n if is_global_attn:\r\n # With global attention, return global attention probabilities only\r\n # batch_size x num_heads x max_num_global_attention_tokens x sequence_length\r\n # which is the attention weights from tokens with global attention to all 
tokens\r\n # It doesn't not return local attention\r\n # In case of variable number of global attantion in the rows of a batch,\r\n # attn_probs are padded with -10000.0 attention scores\r\n\r\n #attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len)\r\n attn_probs = attn_probs[:,:,:,:max_num_global_attn_indices]\r\n attn_probs = attn_probs.permute(0, 2, 1, 3)\r\n else:\r\n # without global attention, return local attention probabilities\r\n # batch_size x num_heads x sequence_length x window_size\r\n # which is the attention weights of every token attending to its neighbours\r\n attn_probs = attn_probs.permute(0, 2, 1, 3)\r\n\r\n outputs = (attn_output, attn_probs) if output_attentions else (attn_output,)\r\n return outputs\r\n\r\nclass MyLongformerModel(LongformerModel):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n for i, layer in enumerate(self.encoder.layer):\r\n layer.attention.self = MyLongformerSelfAttention(config, i)\r\n self.init_weights()\r\n\r\nif __name__ == '__main__':\r\n config = AutoConfig.from_pretrained(\"allenai/longformer-base-4096\", output_attentions=True)\r\n model = MyLongformerModel.from_pretrained(\"allenai/longformer-base-4096\", config=config)\r\n tokenizer = AutoTokenizer.from_pretrained(\"allenai/longformer-base-4096\")\r\n\r\n token_ids = [[\r\n tokenizer.cls_token_id, 10, 11, 12,\r\n tokenizer.sep_token_id, 21, 22, 23,\r\n tokenizer.sep_token_id\r\n ]]*2\r\n global_attention_mask = [[1,1,1,1,1,0,0,0,0], [1,1,1,1,1,1,1,0,0]]\r\n\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n n_gpu = torch.cuda.device_count()\r\n model.to(device)\r\n if n_gpu > 1:\r\n model = torch.nn.DataParallel(model)\r\n print(f\"DEVICE:{device} N_GPU:{n_gpu}\")\r\n\r\n logit, *_, attention_probs = model(\r\n torch.LongTensor(token_ids),\r\n global_attention_mask=torch.LongTensor(global_attention_mask)\r\n )\r\n\r\n print(attention_probs[0].size())\r\n~~~\r\n\r\n~~~bash\r\nusername@34dcdd033731:~/Python/temp$ python3 test_longformer.py\r\nDEVICE:cuda N_GPU:4\r\nTraceback (most recent call last):\r\n File \"test_longformer.py\", line 194, in <module>\r\n global_attention_mask=torch.LongTensor(global_attention_mask)\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 550, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py\", line 156, in forward\r\n return self.gather(outputs, self.output_device)\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py\", line 168, in gather\r\n return gather(outputs, output_device, dim=self.dim)\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py\", line 68, in gather\r\n res = gather_map(outputs)\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py\", line 63, in gather_map\r\n return type(out)(map(gather_map, zip(*outputs)))\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py\", line 63, in gather_map\r\n return type(out)(map(gather_map, zip(*outputs)))\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py\", line 55, in gather_map\r\n return Gather.apply(target_device, dim, *outputs)\r\n File 
\"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/parallel/_functions.py\", line 68, in forward\r\n return comm.gather(inputs, ctx.dim, ctx.target_device)\r\n File \"/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/cuda/comm.py\", line 165, in gather\r\n return torch._C._gather(tensors, dim, destination)\r\nRuntimeError: Gather got an input of invalid size: got [1, 12, 512, 7], but expected [1, 12, 512, 5]\r\n~~~\r\n\r\nI think there are some solutions. \r\n\r\nFor example:\r\n- Share `max_num_global_attn_indices` between GPUs.\r\n- Define `max_num_global_attn_indices` in config. \r\n\r\nI'm sorry I can't suggest a specific solution.", "Thanks for the notification - will take a look next week :-) ", "Sorry, I forgot that today is Friday. \r\nHave a good weekend :-)\r\n\r\n## For those facing the same problem.\r\n\r\nThe following is an idea for a temporary solution to the problem. \r\nIt might be helpful.\r\n\r\nhttps://github.com/huggingface/transformers/blob/02a0b43014ac333a169e99d76aaba023a316e384/src/transformers/modeling_longformer.py#L435\r\n↓↓↓\r\n~~~python\r\n #attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len)\r\n attn_probs = attn_probs[:,:,:,:max_num_global_attn_indices]\r\n attn_probs = F.pad(\r\n attn_probs,\r\n (0, seq_len-max_num_global_attn_indices),\r\n \"constant\",\r\n 0.0,\r\n )\r\n attn_probs = attn_probs.permute(0, 2, 1, 3)\r\n~~~\r\n~~~bash\r\n$ python3 test.py\r\nDEVICE:cuda N_GPU:4\r\ntorch.Size([2, 12, 512, 512])\r\n~~~", "@k141303 - thanks a lot for your proposed solution. Padding to the sequence length is actually a very clean solution. \r\nSince we are only returning global attention probs, I think logically it makes also sense to pad the other values with 0.0 since they weren't attended to for global attention => so we'll go for this here.\r\nInstead of `seq_len` we will pad to `window_size` so that local and global attention always have the same output dimension. I think this has a slight advantage in that the output signature is more consistent. ", "So in this case the output would be:\r\n\r\n```python\r\ntorch.Size([1, 12, 512, 513])\r\n```\r\nwhich is the same as if only local attention would have been used.", "@patrickvonplaten It seems that the code causing the error in commit 02a0b43 (fixed by commit 7096e47) was reintroduced at some point. The code of current commit df53643 looks like 02a0b43 instead of 7096e47.", "Also, I wonder if the output is correct. Add the following lines right after the minimum code of @k141303.\r\n\r\n print(attention_probs[0][0,0,:5,:].sum(dim=1))\r\n print(attention_probs[0][0,0,:,:5].sum(dim=0))\r\n\r\nThis shows that:\r\n1. For each head (showing only for the first), all the rows with global attention do not sum to 1. \r\n1. For each head (showing only for the first), all the columns with global attention do not sum to 1. \r\n\r\nTherefore neither the rows nor the column of the attention matrices can be `the attention weights from tokens with global attention to all tokens`. As far as I understand from the code, the columns are actually the attention weights from all tokens to the tokens with global attention, but this is not really useful, is it? For instance, it would be more useful to know where `CLS` puts attention instead of knowing which tokens pay attention to `CLS`.\r\n\r\n", "@patrickvonplaten I think that the global attention that should be returned is a computation intermediate of the function `_compute_global_attn_output_from_hidden`. 
It is called `global_attn_probs` (or `global_attn_probs_float` before the dropouts are applied).\r\n\r\nIf only global attention is to be returned, you could consider returning this intermediate together with the attention output of `_compute_global_attn_output_from_hidden`. If you assign it to `attn_probs` in the function `forward` then you are almost done (otherwise you have to recompute it).\r\n\r\nThe dimensions of this intermediate are `(H,G,L)` where `H` is the number of attention heads, `G` is the number of tokens with global attention and `L` is the text length (a multiple of `attention_window`, which I will write `W` for short). If you want the output to have dimensions `(H,L,W)` to be congruent with the local attention, you would have to transpose it before padding. This may be very confusing because the rows of the local attention would sum to 1, whereas the first `G` columns of the global attention would sum to 1 and all the others would sum to 0.\r\n\r\nSince the dimensions of global attention are intrinsically different from those of local attention, it's probably better to leave them as `(H,G,L)`. You could output a tuple with local attention `(H,L,W)` and global attention `(H,G,L)` instead of a single tensor. Unfortunately reconstituting full attention matrices `(H,L,L)` is a no-go: you need Longformers precisely because this does not fit in memory.", "Hey @gui11aume , good point! \r\n\r\nI guess, either way we do it, it's not perfect for Longformer... I think the cleanest solution would actually be to add a new output type called `global_attentions` and output both `attentions` and `global_attentions`. This is more or less the same idea as outputting the two tuples that you proposed. \r\n\r\nOpened an issue about it here: -> Feel free to open a PR if you want :-) It's not of very high prio for me at the moment - so I thought it might be a good issue to tackle for people that work with Longformer. If no one is interested in opening a PR, I'll eventually do it :-) ", "I didn't want to do a PR earlier because I wasn't sure about the interface you want. Having a separate field `global_attentions` is much cleaner. I should be able to propose something soon and I'll continue the discussion on issue #7514." ]
1,594
1,601
1,594
NONE
null
# 🐛 Bug ## Information Model I am using **Longformer**: Language I am using the model on Japanese: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Set config.output_attentions=True 2. Use global attention (sum(global_attention_mask)>0) The following is the minimum code to reproduce the error. ~~~python3:test.py import torch from transformers import AutoModel, AutoTokenizer, AutoConfig if __name__ == '__main__': config = AutoConfig.from_pretrained("allenai/longformer-base-4096", output_attentions=True) model = AutoModel.from_pretrained("allenai/longformer-base-4096", config=config) tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096") token_ids = [[ tokenizer.cls_token_id, 10, 11, 12, tokenizer.sep_token_id, 21, 22, 23, tokenizer.sep_token_id ]] global_attention_mask = [[1,1,1,1,1,0,0,0,0]] logit, *_, attention_probs = model( torch.LongTensor(token_ids), global_attention_mask=torch.LongTensor(global_attention_mask) ) print(attention_probs[0].size()) ~~~ ~~~bash $ python3 test.py Traceback (most recent call last): File "test_longformer.py", line 16, in <module> global_attention_mask=torch.LongTensor(global_attention_mask) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 1004, in forward output_hidden_states=output_hidden_states, File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 695, in forward layer_outputs = layer_module(hidden_states, attention_mask, output_attentions,) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 658, in forward self_attn_outputs = self.attention(hidden_states, attention_mask, output_attentions=output_attentions,) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 642, in forward self_outputs = self.self(hidden_states, attention_mask, output_attentions,) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/uge_mnt/home/username/.local/lib/python3.6/site-packages/transformers/modeling_longformer.py", line 435, in forward attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len) RuntimeError: shape '[1, 12, 5, 512]' is invalid for input of size 3182592 ~~~ <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model can output attention probs for each attention head. ~~~bash $ python3 test.py torch.Size([1, 12, 4096, 5]) ~~~ It would seem to work if I rewrite the target line as follows. https://github.com/huggingface/transformers/blob/02a0b43014ac333a169e99d76aaba023a316e384/src/transformers/modeling_longformer.py#L435 ~~~python3 #attn_probs = attn_probs.view(batch_size, self.num_heads, max_num_global_attn_indices, seq_len) attn_probs = attn_probs[:,:,:,:max_num_global_attn_indices] attn_probs = attn_probs.permute(0, 2, 1, 3) ~~~ ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:3.0.2 - Platform:Ubuntu 18.04.4 LTS - Python version:Python 3.6.9 :: Anaconda, Inc. - PyTorch version (GPU?):1.5.1 (Yes) - Tensorflow version (GPU?): - Using GPU in script?:Yes - Using distributed or parallel set-up in script?:Yes
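Following the discussion in the comments, the fix the maintainers converged on can be sketched as a drop-in replacement for the quoted line in `modeling_longformer.py`: keep only the `max_num_global_attn_indices` columns, then zero-pad up to the local-attention width so both code paths return the same shape. The variables come from the surrounding `forward` method, so this is illustrative rather than standalone:

```python
# Inside LongformerSelfAttention.forward, replacing the failing view():
from torch.nn import functional as F  # already imported in the module

attn_probs = attn_probs[:, :, :, :max_num_global_attn_indices]
attn_probs = F.pad(
    attn_probs,
    # pad the last dim to 2 * one_sided_attn_window_size + 1 so the shape
    # matches the local-attention branch, e.g. (1, 12, 512, 513)
    (0, 2 * self.one_sided_attn_window_size + 1 - max_num_global_attn_indices),
    "constant",
    0.0,
)
attn_probs = attn_probs.permute(0, 2, 1, 3)
```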
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5646/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5645
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5645/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5645/comments
https://api.github.com/repos/huggingface/transformers/issues/5645/events
https://github.com/huggingface/transformers/pull/5645
654,437,296
MDExOlB1bGxSZXF1ZXN0NDQ3MTY2MjE2
5,645
enable easy checkout switch
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=h1) Report\n> Merging [#5645](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02a0b43014ac333a169e99d76aaba023a316e384&el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5645/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5645 +/- ##\n==========================================\n+ Coverage 78.17% 79.14% +0.97% \n==========================================\n Files 145 145 \n Lines 25366 25366 \n==========================================\n+ Hits 19829 20076 +247 \n+ Misses 5537 5290 -247 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+6.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.32% <0.00%> (+31.77%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5645/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=footer). Last update [02a0b43...a348729](https://codecov.io/gh/huggingface/transformers/pull/5645?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Not seeing any activity, I'm not sure the purpose of this feature is clear, so I'd try to clarify:\r\n\r\nIf you just use one repo check out and then just switch git branches then running `pip install -e .[dev]` once is sufficient for an error-free work-flow.\r\n\r\nProblems:\r\n- if you also want to have a normal `transformers` installed and used - you can't, you will have to constantly rerun `pip install -e .` , unless you use a different virtual environment - this is very error-prone - forgetting to switch\r\n- you can't have more than one check out - you again have to re-run `pip install -e .` - so easy to forget and waste time figuring out why code modifications have no effect.\r\n\r\nSolution:\r\n- let's point python path to `/full/path/to/checktout-dir/src/`and now you will never again need to remember to run `pip install -e .` to run the test suite against. And you can still have \"normal\" `transformers` installed for normal use.\r\n\r\nIt doesn't interfere with anybody's current work flow.\r\n\r\nI at the very least have two checkouts - one remote master, which I can run tests against any moment, after just `git pull` and then the forked master and its branches, where development is done. 
I typically have several checkouts for different branches in my dev environments, since I find it's often simpler to manage than switching branches all the time.\r\n\r\nThank you. ", "And I see that `examples` needs the same solution (added).\r\n\r\nThe other workaround is to run tests with:\r\n\r\n```\r\nPYTHONPATH=`pwd`/src:$PYTHONPATH python -m pytest ...\r\n```\r\nbut this is far from convenient for frequent use.\r\n\r\nAnd finally, removing the intermediary `src` dir and making `transformers` a top-level dir would fix this problem as well for the `python -m pytest` situation, but not for other kinds of invocation.", "Could someone with write access rerun this CI check - the failure has nothing to do with my PR.\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/9527/workflows/73306d70-4190-48cd-b24a-b73619cd2002/jobs/64665/steps\r\nThank you.\r\n\r\n---\r\nThank you to the kind soul who triggered a re-run." ]
1,594
1,596
1,596
CONTRIBUTOR
null
Allow having multiple repository checkouts without needing to remember to rerun `pip install -e .[dev]` when switching between checkouts and running tests. This code automatically does the right thing for the test suite. Note that `python -m pytest` automatically adds `.` to the path, so normally most packages get tested against the local checkout automatically. However, since this project keeps its package under the `src/` subdirectory, that feature doesn't help here.
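A minimal sketch of the mechanism this PR describes, assuming a hypothetical layout with a `conftest.py` at the repository root and the package under `src/`: prepending the checkout's `src/` to `sys.path` makes the test suite import the local code regardless of which `transformers` pip has installed:

```python
# conftest.py (repo root) - a sketch of the idea, not the exact merged diff
import sys
from os.path import abspath, dirname, join

# Resolve <repo>/src relative to this file and put it ahead of any
# site-packages installation on the import path.
git_repo_path = abspath(join(dirname(__file__), "src"))
sys.path.insert(1, git_repo_path)
```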
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5645/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5645", "html_url": "https://github.com/huggingface/transformers/pull/5645", "diff_url": "https://github.com/huggingface/transformers/pull/5645.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5645.patch", "merged_at": 1596184487000 }
https://api.github.com/repos/huggingface/transformers/issues/5644
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5644/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5644/comments
https://api.github.com/repos/huggingface/transformers/issues/5644/events
https://github.com/huggingface/transformers/pull/5644
654,423,594
MDExOlB1bGxSZXF1ZXN0NDQ3MTU1NDU3
5,644
FlaubertForTokenClassification
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=h1) Report\n> Merging [#5644](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0befb513278f6e42b722be340dbc667e0ba2718e&el=desc) will **decrease** coverage by `1.02%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5644/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5644 +/- ##\n==========================================\n- Coverage 78.26% 77.24% -1.03% \n==========================================\n Files 146 146 \n Lines 25998 26005 +7 \n==========================================\n- Hits 20348 20088 -260 \n- Misses 5650 5917 +267 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <ø> (ø)` | |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `85.18% <100.00%> (+0.81%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `81.49% <0.00%> (+0.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5644/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=footer). Last update [0befb51...4bb7577](https://codecov.io/gh/huggingface/transformers/pull/5644?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks! 
Can you also add the model to the common tests (by adding it to the [all_model_classes](https://github.com/huggingface/transformers/blob/b2747af5434e5a5d8ab1d7e2789699d20d7a4ab8/tests/test_modeling_flaubert.py#L316)) and to the [documentation file](https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/flaubert.rst)?\r\n\r\nThis looks great to me otherwise.", "> Can you also add the model to the common tests (by adding it to the [all_model_classes](https://github.com/huggingface/transformers/blob/b2747af5434e5a5d8ab1d7e2789699d20d7a4ab8/tests/test_modeling_flaubert.py#L316))\r\n\r\nI tried that originally, but the common tests don't support `*TokenClassification` - its outputs are different, so most tests break. Note that `XLMForTokenClassification` isn't being tested in the common tests. This PR was really monkeyseemonkeydo. Perhaps merge this first and then work on the `XLMForTokenClassification` common tests? Then the subclass will be easy.\r\n\r\n> and in the documentation file?\r\n\r\ndone.", "I think there is a bug in `XLMForTokenClassification` - if I do this fix:\r\n```\r\n--- a/src/transformers/modeling_xlm.py\r\n+++ b/src/transformers/modeling_xlm.py\r\n@@ -1079,7 +1079,7 @@ class XLMForTokenClassification(XLMPreTrainedModel):\r\n sequence_output = self.dropout(sequence_output)\r\n logits = self.classifier(sequence_output)\r\n\r\n- outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here\r\n+ outputs = (logits,) + outputs[1:] # add hidden states and attention if they are here\r\n if labels is not None:\r\n loss_fct = CrossEntropyLoss()\r\n # Only keep active parts of the loss\r\n```\r\nI can now add `XLMForTokenClassification` to `all_model_classes` - and 99% of it now passes.\r\n\r\nIt looks like that line of code was copied from `BertForTokenClassification`, but for XLM it appears to need to be `outputs[1:]`\r\n", "Oh, my fork was outdated - I see you have just fixed this bug. OK, adding FlaubertForTokenClassification to all_model_classes should work now." ]
1,594
1,594
1,594
CONTRIBUTOR
null
Implement FlaubertForTokenClassification as a subclass of XLMForTokenClassification, based on an item from https://github.com/huggingface/transformers/projects/17
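A rough sketch of what such a subclass can look like; the module paths match the 3.x source layout, and the actual merged implementation may differ in details:

```python
from transformers.configuration_flaubert import FlaubertConfig
from transformers.modeling_flaubert import FlaubertModel
from transformers.modeling_xlm import XLMForTokenClassification


class FlaubertForTokenClassification(XLMForTokenClassification):
    config_class = FlaubertConfig

    def __init__(self, config):
        super().__init__(config)
        # Reuse the XLM token-classification head as-is, but swap in the
        # Flaubert transformer as the base model.
        self.transformer = FlaubertModel(config)
        self.init_weights()
```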
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5644/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5644", "html_url": "https://github.com/huggingface/transformers/pull/5644", "diff_url": "https://github.com/huggingface/transformers/pull/5644.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5644.patch", "merged_at": 1594666793000 }
https://api.github.com/repos/huggingface/transformers/issues/5643
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5643/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5643/comments
https://api.github.com/repos/huggingface/transformers/issues/5643/events
https://github.com/huggingface/transformers/issues/5643
654,406,832
MDU6SXNzdWU2NTQ0MDY4MzI=
5,643
Help with Using TFXLNet on custom embeddings
{ "login": "andrewlee98", "id": 14003549, "node_id": "MDQ6VXNlcjE0MDAzNTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/14003549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andrewlee98", "html_url": "https://github.com/andrewlee98", "followers_url": "https://api.github.com/users/andrewlee98/followers", "following_url": "https://api.github.com/users/andrewlee98/following{/other_user}", "gists_url": "https://api.github.com/users/andrewlee98/gists{/gist_id}", "starred_url": "https://api.github.com/users/andrewlee98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andrewlee98/subscriptions", "organizations_url": "https://api.github.com/users/andrewlee98/orgs", "repos_url": "https://api.github.com/users/andrewlee98/repos", "events_url": "https://api.github.com/users/andrewlee98/events{/privacy}", "received_events_url": "https://api.github.com/users/andrewlee98/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# 🐛 Bug ## Information Hi, I am working on implementing the multimodal bitransformer: https://arxiv.org/pdf/1909.02950.pdf I have already gotten this working using your implementation of TFBertModel, but I want to try using TFXLNetModel in place of BERT to see if it makes an improvement. What I've done for using TFBertModel is extract the word_embeddings and pass the word embeddings with token_type_ids = 0 and image embeddings with token_type_ids = 1. ## To reproduce I have the following definition of my model ` class BERT(transformers.TFXLNetModel): def __init__(self, config, *inputs, **kwargs): super(BERT, self).__init__(config, *inputs, **kwargs) self.call = tf.function(self.call) class MyModel(tf.keras.Model): def __init__(self): super(MyModel, self).__init__() self.resnet = tf.keras.applications.ResNet152V2(include_top=False, weights='imagenet', input_shape=(224, 224, 3)) self.bert = BERT.from_pretrained('xlnet-base-cased') self.text_embedding = self.bert.get_input_embeddings().weights[0] self.pooling = layers.AveragePooling2D(pool_size=(2, 2), padding='same') self.reshape = layers.Reshape((4 * 4, 2048)) # 4 is from 7//2 + 1 self.W_ns = [layers.Dense(self.bert.config.hidden_size) for _ in range(self.reshape.target_shape[0])] self.concat = layers.Concatenate(axis=1) self.dropout = layers.Dropout(0.1) self.denseout = layers.Dense(1, activation='sigmoid') def call(self, inputs): text, image = inputs # handle image image = tf.keras.applications.resnet_v2.preprocess_input(image) image_emb = self.resnet(image) image_emb = self.pooling(image_emb) image_emb = self.reshape(image_emb) image_embeds = [self.W_ns[i](image_emb[:, i]) for i in range(self.reshape.target_shape[0])] image_emb = tf.keras.backend.stack(image_embeds, axis=1) # handle text text_emb = tf.gather(self.text_embedding, text) # concat and feed to bert concat_emb = self.concat([text_emb, image_emb]) seg_ids = np.concatenate((np.zeros(max_len, dtype=np.int64), np.ones(self.reshape.target_shape[0], dtype=np.int64))) print('input shapes to xlnet', concat_emb.shape, seg_ids.shape) bert_encodings = self.bert(inputs={'inputs_embeds': concat_emb, 'token_type_ids': seg_ids})[0] doc_encoding = tf.squeeze(bert_encodings[:, 0:1, :], axis=1) doc_encoding = self.dropout(doc_encoding) output = self.denseout(doc_encoding) return output ` In the line that prints "input shapes to xlnet", I get (None, 116, 768) for the inputs_embeds and (116,) for the token_type_ids, which I expect because I have 100 word embeddings and 16 image embeddings. 
When I call fit() on this model, it gives the error: > ValueError: in converted code: > > <ipython-input-12-b6bccd3c83e0>:50 call * > bert_encodings = self.bert(inputs={'inputs_embeds': concat_emb, > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:891 __call__ > outputs = self.call(cast_inputs, *args, **kwargs) > /homes/awl27/python36env/lib/python3.6/site-packages/transformers/modeling_tf_xlnet.py:824 call * > outputs = self.transformer(inputs, **kwargs) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py:842 __call__ > outputs = call_fn(cast_inputs, *args, **kwargs) > /homes/awl27/python36env/lib/python3.6/site-packages/transformers/modeling_tf_xlnet.py:530 call * > token_type_ids = tf.transpose(token_type_ids, perm=(1, 0)) if token_type_ids is not None else None > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py:1780 transpose_v2 > return transpose(a=a, perm=perm, name=name, conjugate=conjugate) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/ops/array_ops.py:1870 transpose > ret = transpose_fn(a, perm, name=name) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_array_ops.py:11455 transpose > "Transpose", x=x, perm=perm, name=name) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py:793 _apply_op_helper > op_def=op_def) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py:548 create_op > compute_device) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:3429 _create_op_internal > op_def=op_def) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1773 __init__ > control_input_ops) > /homes/awl27/python36env/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1613 _create_c_op > raise ValueError(str(e)) > > ValueError: Dimension must be 1 but is 2 for 'transformer/transpose_1' (op: 'Transpose') with input shapes: [116], [2]. > ## Expected behavior I expected this to work just like TFBertModel did. If I just change the definition in the BERT class to use TFBertModel instead of TFXLNetModel, it works fine. ## Environment info PyTorch version: 1.5.0 Is debug build: No CUDA used to build PyTorch: 10.2 OS: Ubuntu 16.04.6 LTS GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609 CMake version: version 3.5.1 Python version: 3.6 Is CUDA available: No CUDA runtime version: 10.0.130 GPU models and configuration: GPU 0: Tesla K40c Nvidia driver version: 418.87.00 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3 Versions of relevant libraries: [pip3] numpy==1.16.6 [pip3] torch==1.5.0 [pip3] torchtext==0.5.0 [pip3] torchvision==0.6.0 [conda] Could not collect
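[Editorial sketch, not part of the original report] The `Dimension must be 1 but is 2` error comes from TFXLNet transposing `token_type_ids` with `perm=(1, 0)`, which assumes a rank-2 `(batch, seq_len)` tensor, while the snippet above passes a rank-1 array of shape `(116,)`. A minimal sketch of one possible fix, assuming `concat_emb` from the model above, is to tile the segment ids across the batch:
```python
# Assumes `concat_emb` from the issue's model; dtype/shape choices are editorial.
import tensorflow as tf

seg_ids = tf.concat([tf.zeros(100, dtype=tf.int32),   # text positions
                     tf.ones(16, dtype=tf.int32)],    # image positions
                    axis=0)                            # shape: (116,)
batch_size = tf.shape(concat_emb)[0]
seg_ids = tf.tile(seg_ids[tf.newaxis, :], [batch_size, 1])  # shape: (batch, 116)
```
That TFBertModel tolerates a rank-1 `token_type_ids` while TFXLNetModel does not is an assumption here, but it is consistent with the shapes in the traceback.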
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5643/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5642
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5642/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5642/comments
https://api.github.com/repos/huggingface/transformers/issues/5642/events
https://github.com/huggingface/transformers/pull/5642
654,377,714
MDExOlB1bGxSZXF1ZXN0NDQ3MTE4MjEy
5,642
Improvements to PretrainedConfig documentation
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=h1) Report\n> Merging [#5642](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/760f726e516752d27142346d8552682d3f6f0532&el=desc) will **increase** coverage by `0.89%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5642/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5642 +/- ##\n==========================================\n+ Coverage 76.87% 77.77% +0.89% \n==========================================\n Files 145 145 \n Lines 25364 25366 +2 \n==========================================\n+ Hits 19499 19728 +229 \n+ Misses 5865 5638 -227 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.45% <100.00%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.27% <0.00%> (-74.92%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.71% <0.00%> (-1.51%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.79% <0.00%> (+33.89%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5642/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.77% <0.00%> (+73.38%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=footer). Last update [760f726...56b5942](https://codecov.io/gh/huggingface/transformers/pull/5642?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
COLLABORATOR
null
Preview is [here](https://59194-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/configuration.html)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5642/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5642", "html_url": "https://github.com/huggingface/transformers/pull/5642", "diff_url": "https://github.com/huggingface/transformers/pull/5642.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5642.patch", "merged_at": 1594391508000 }
https://api.github.com/repos/huggingface/transformers/issues/5641
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5641/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5641/comments
https://api.github.com/repos/huggingface/transformers/issues/5641/events
https://github.com/huggingface/transformers/issues/5641
654,365,730
MDU6SXNzdWU2NTQzNjU3MzA=
5,641
Multiple Mask Tokens
{ "login": "zbush548", "id": 61605741, "node_id": "MDQ6VXNlcjYxNjA1NzQx", "avatar_url": "https://avatars.githubusercontent.com/u/61605741?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zbush548", "html_url": "https://github.com/zbush548", "followers_url": "https://api.github.com/users/zbush548/followers", "following_url": "https://api.github.com/users/zbush548/following{/other_user}", "gists_url": "https://api.github.com/users/zbush548/gists{/gist_id}", "starred_url": "https://api.github.com/users/zbush548/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zbush548/subscriptions", "organizations_url": "https://api.github.com/users/zbush548/orgs", "repos_url": "https://api.github.com/users/zbush548/repos", "events_url": "https://api.github.com/users/zbush548/events{/privacy}", "received_events_url": "https://api.github.com/users/zbush548/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! This is a very good question. We just opened a forum on [discuss.huggingface.co](https://discuss.huggingface.co/) to discuss those kind of questions exactly. Do you think you could go over there and ask it? Thanks a lot!", "Sure, I'll do that now! @LysandreJik ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
For those wishing to [MASK] several tokens, here it is. My question, however, relates to the output. I added "top_k" assuming I'd be able to return multiple sentences, but that was not the case. I am not sure how exactly I can achieve this.
```python
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
input_tx = "[CLS] [MASK] [MASK] [MASK] of the United States mismangement of the Coronavirus is its distrust of science. [SEP]"
tokenized_text = tokenizer.tokenize(input_tx)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
top_k = 10
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([[0]*25])
model = BertForMaskedLM.from_pretrained('bert-base-cased')
outputs = model(tokens_tensor, token_type_ids=segments_tensors)
predictions = outputs[0]
predicted_index = [torch.argmax(predictions[0, i]).item() for i in range(0, 24)]
predicted_token = [tokenizer.convert_ids_to_tokens([predicted_index[x]])[0] for x in range(1, 24)]
print(predicted_token)
```
`Output: 'The', 'main', 'cause', 'of', 'the', 'United', 'States', 'mi', '##sman', '##gement', 'of', 'the', 'Co', '##rona', '##virus', 'is', 'its', 'di', '##st', '##rust', 'of', 'science', '`
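[Editorial sketch, not part of the original issue] The script above defines `top_k` but never uses it; `torch.argmax` only ever yields the single best token per position. A minimal sketch, assuming `tokenizer`, `model`, `tokens_tensor`, and `predictions` from the snippet above, that lists the top-k candidates for each [MASK] position:
```python
import torch

mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
mask_positions = (tokens_tensor[0] == mask_id).nonzero().flatten()

top_k = 10
for pos in mask_positions.tolist():
    # take the k highest-scoring vocabulary ids at this masked position
    topk_ids = torch.topk(predictions[0, pos], top_k)[1].tolist()
    print(pos, tokenizer.convert_ids_to_tokens(topk_ids))
```
Turning these per-position candidates into whole ranked sentences would additionally require scoring combinations (e.g., a beam search over the masked positions), which a single forward pass does not provide directly.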
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5641/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5640
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5640/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5640/comments
https://api.github.com/repos/huggingface/transformers/issues/5640/events
https://github.com/huggingface/transformers/pull/5640
654,353,353
MDExOlB1bGxSZXF1ZXN0NDQ3MDk4MzE5
5,640
Cleanup bart caching logic
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=h1) Report\n> Merging [#5640](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7fad617dc1fc681a7f5da5e0172c8b83f4bf0024&el=desc) will **decrease** coverage by `0.12%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5640/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5640 +/- ##\n==========================================\n- Coverage 78.11% 77.99% -0.13% \n==========================================\n Files 146 146 \n Lines 25983 25975 -8 \n==========================================\n- Hits 20297 20259 -38 \n- Misses 5686 5716 +30 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.74% <100.00%> (-0.06%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |\n| [src/transformers/generation\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.11% <0.00%> (+0.28%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5640/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=footer). Last update [7fad617...94aac5d](https://codecov.io/gh/huggingface/transformers/pull/5640?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Previously, we had a helper function that checked 4 possible cases to determine whether we should: (a) combine a cached attention mask with a new one, (b) just use the cached one, or (c) just use the new/passed one. This consolidates that logic into 3 branches and deletes the helper func, which was only called once.
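[Editorial sketch, not taken from the PR diff] The three branches described above reduce to a null-check on each mask; the names and concatenation axis here are hypothetical:
```python
import torch

def merge_attention_masks(cached_mask, new_mask):
    # Hypothetical illustration of the consolidated branching; see the PR diff for the real code.
    if cached_mask is None:
        return new_mask                      # (c) only the new/passed mask exists
    if new_mask is None:
        return cached_mask                   # (b) only the cached mask exists
    return torch.cat([cached_mask, new_mask], dim=1)  # (a) combine cached + new
```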
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5640/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5640/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5640", "html_url": "https://github.com/huggingface/transformers/pull/5640", "diff_url": "https://github.com/huggingface/transformers/pull/5640.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5640.patch", "merged_at": 1594721586000 }
https://api.github.com/repos/huggingface/transformers/issues/5639
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5639/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5639/comments
https://api.github.com/repos/huggingface/transformers/issues/5639/events
https://github.com/huggingface/transformers/issues/5639
654,289,701
MDU6SXNzdWU2NTQyODk3MDE=
5,639
test suite fails due to pytorch bug in torch.seed
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A fix has been just applied here:\r\nhttps://github.com/pytorch/pytorch/commit/5edd9aa95a8a73e940185f8448e7db05394ce6fe\r\nwill re-test with the nightly", "I confirmed that this pytorch bug has been fixed in nightly and the tests don't fail anymore." ]
1,594
1,597
1,597
CONTRIBUTOR
null
# 🐛 Bug ## Information This is on a dual-gpu machine. Almost all tests/test_modeling_reformer.py sub-tests fail with:
```
def cb():
    for i in range(device_count()):
        default_generator = torch.cuda.default_generators[i]
>       default_generator.manual_seed(seed)
E       RuntimeError: Overflow when unpacking long
```
when run after any test_multigpu_data_parallel_forward sub-test, e.g.: `python -m pytest -n 1 --dist=loadfile -v tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs` **The failure gets triggered here:**
```
transformers/modeling_reformer.py:1102: in _init_attention_seed
    self.attention_seed = int((torch.seed() % sys.maxsize))
```
I reduced the failing sequence of code to this:
```
# test.py
import torch
print(f"Torch version: {torch.__version__}")
x = torch.tensor(data=[[1,2],[3,4]], dtype=torch.long, device=None)
x = x.to('cuda:0')
seed = torch.seed()
```
```
$ python tests/test.py
Torch version: 1.5.1
Traceback (most recent call last):
  File "tests/test.py", line 10, in <module>
    seed = torch.seed()
  File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/random.py", line 45, in seed
    torch.cuda.manual_seed_all(seed)
  File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py", line 111, in manual_seed_all
    _lazy_call(cb)
  File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/__init__.py", line 99, in _lazy_call
    callable()
  File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py", line 109, in cb
    default_generator.manual_seed(seed)
RuntimeError: Overflow when unpacking long
```
It fails about 75% of the time. It happens after moving the tensor to the GPU. This seems to be related to this [pytorch bug](https://github.com/pytorch/pytorch/issues/33546), albeit with a somewhat different sequence of code.
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid - Python version: 3.7.5 - PyTorch version (GPU?): 1.5.1 (True) - Tensorflow version (GPU?): 2.0.1 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no Full trace of failing sub-tests (one of them): ``` python -m pytest -n 1 --dist=loadfile -v tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs ====================================================================== test session starts ======================================================================= platform linux -- Python 3.7.5, pytest-5.4.3, py-1.9.0, pluggy-0.13.1 -- /home/stas/anaconda3/envs/main/bin/python cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/mnt/nvme1/code/huggingface/transformers-FlaubertForTokenClassification/.hypothesis/examples') rootdir: /mnt/nvme1/code/huggingface/transformers-FlaubertForTokenClassification plugins: hypothesis-5.5.4, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0 [gw0] linux Python 3.7.5 cwd: /mnt/nvme1/code/huggingface/transformers-FlaubertForTokenClassification [gw0] Python 3.7.5 (default, Oct 25 2019, 15:51:11) -- [GCC 7.3.0] gw0 [2] scheduling tests via LoadFileScheduling tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward [gw0] [ 50%] PASSED tests/test_modeling_electra.py::ElectraModelTest::test_multigpu_data_parallel_forward tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs [gw0] [100%] FAILED tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs ============================================================================ FAILURES ============================================================================ _______________________________________________________ ReformerLocalAttnModelTest.test_attention_outputs ________________________________________________________ [gw0] linux -- Python 3.7.5 /home/stas/anaconda3/envs/main/bin/python self = <tests.test_modeling_reformer.ReformerLocalAttnModelTest testMethod=test_attention_outputs> def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() seq_len = getattr(self.model_tester, "seq_length", None) decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", seq_len) encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", seq_len) decoder_key_length = getattr(self.model_tester, "key_length", decoder_seq_length) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) chunk_length = getattr(self.model_tester, "chunk_length", None) if chunk_length is not None and hasattr(self.model_tester, "num_hashes"): encoder_seq_length = encoder_seq_length * self.model_tester.num_hashes for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["output_hidden_states"] = False model = model_class(config) model.to(torch_device) model.eval() with torch.no_grad(): > outputs = model(**self._prepare_for_class(inputs_dict, model_class)) tests/test_modeling_common.py:149: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1623: in forward output_attentions=output_attentions, /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1371: in forward output_attentions, ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1267: in forward output_attentions=output_attentions, /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/nn/modules/module.py:550: in __call__ result = self.forward(*input, **kwargs) ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1138: in forward self._init_attention_seed() ../transformers-XLMForTokenClassification/src/transformers/modeling_reformer.py:1102: in _init_attention_seed self.attention_seed = int((torch.seed() % sys.maxsize)) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/random.py:45: in seed torch.cuda.manual_seed_all(seed) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py:111: in manual_seed_all _lazy_call(cb) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/__init__.py:99: in _lazy_call callable() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ def cb(): for i in range(device_count()): default_generator = torch.cuda.default_generators[i] > default_generator.manual_seed(seed) E RuntimeError: Overflow when unpacking long /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/torch/cuda/random.py:109: RuntimeError ======================================================================== warnings summary ======================================================================== /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:55 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.' 
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:62 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/directives.py:62: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working assert isinstance(args, collections.Mapping), '{} args must be a dict with argument names as keys.'.format(name) /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/typemap.py:1 /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/graphql/type/typemap.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working from collections import OrderedDict, Sequence, defaultdict -- Docs: https://docs.pytest.org/en/latest/warnings.html ==================================================================== short test summary info ===================================================================== FAILED tests/test_modeling_reformer.py::ReformerLocalAttnModelTest::test_attention_outputs - RuntimeError: Overflow when unpacking long ============================================================ 1 failed, 1 passed, 4 warnings in 5.57s ======================================= ``` ```
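[Editorial workaround sketch, not from the original report] Until the upstream fix (merged into pytorch nightly, per the comments above) is available, one way to sidestep the failure is to avoid calling `torch.seed()` on CUDA and instead set a seed already bounded to a signed 64-bit range, the same way `modeling_reformer.py` bounds its attention seed:
```python
import sys
import torch

# Draw a seed that fits in a signed 64-bit integer, then set it explicitly
# instead of calling torch.seed() after tensors have moved to CUDA.
seed = torch.initial_seed() % sys.maxsize
torch.manual_seed(seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)
```
Whether this covers every internal code path that calls `torch.seed()` is an assumption; it only avoids the direct call shown in the repro.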
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5639/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5638
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5638/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5638/comments
https://api.github.com/repos/huggingface/transformers/issues/5638/events
https://github.com/huggingface/transformers/pull/5638
654,283,153
MDExOlB1bGxSZXF1ZXN0NDQ3MDQwNjA2
5,638
Create README.md
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=h1) Report\n> Merging [#5638](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b25f7802de2749a5f8c3430437eceabf9e6384b8&el=desc) will **increase** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5638/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5638 +/- ##\n==========================================\n+ Coverage 77.52% 77.75% +0.23% \n==========================================\n Files 145 145 \n Lines 25364 25364 \n==========================================\n+ Hits 19663 19723 +60 \n+ Misses 5701 5641 -60 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.98% <0.00%> (-49.38%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5638/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=footer). Last update [b25f780...689146c](https://codecov.io/gh/huggingface/transformers/pull/5638?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Create model card for T5-small fine-tuned on SQUAD v1.1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5638/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5638/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5638", "html_url": "https://github.com/huggingface/transformers/pull/5638", "diff_url": "https://github.com/huggingface/transformers/pull/5638.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5638.patch", "merged_at": 1594395503000 }
https://api.github.com/repos/huggingface/transformers/issues/5637
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5637/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5637/comments
https://api.github.com/repos/huggingface/transformers/issues/5637/events
https://github.com/huggingface/transformers/pull/5637
654,281,904
MDExOlB1bGxSZXF1ZXN0NDQ3MDM5NTU5
5,637
Add forum link in the docs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=h1) Report\n> Merging [#5637](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b25f7802de2749a5f8c3430437eceabf9e6384b8&el=desc) will **decrease** coverage by `0.64%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5637/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5637 +/- ##\n==========================================\n- Coverage 77.52% 76.88% -0.65% \n==========================================\n Files 145 145 \n Lines 25364 25364 \n==========================================\n- Hits 19663 19500 -163 \n- Misses 5701 5864 +163 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.53% <0.00%> (+69.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5637/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=footer). Last update [b25f780...d6ab752](https://codecov.io/gh/huggingface/transformers/pull/5637?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
COLLABORATOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5637/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5637", "html_url": "https://github.com/huggingface/transformers/pull/5637", "diff_url": "https://github.com/huggingface/transformers/pull/5637.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5637.patch", "merged_at": 1594322002000 }
https://api.github.com/repos/huggingface/transformers/issues/5636
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5636/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5636/comments
https://api.github.com/repos/huggingface/transformers/issues/5636/events
https://github.com/huggingface/transformers/pull/5636
654,237,272
MDExOlB1bGxSZXF1ZXN0NDQ3MDAzMTE3
5,636
Should check that torch TPU is available
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=h1) Report\n> Merging [#5636](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9d8af07e66764bbf4213e1ce443fcdfa927ca46&el=desc) will **not change** coverage.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5636/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5636 +/- ##\n=======================================\n Coverage 77.66% 77.66% \n=======================================\n Files 145 145 \n Lines 25364 25364 \n=======================================\n Hits 19700 19700 \n Misses 5664 5664 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.60% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `86.91% <100.00%> (ø)` | |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.70% <0.00%> (-2.26%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5636/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=footer). Last update [3cc23ee...a09ad90](https://codecov.io/gh/huggingface/transformers/pull/5636?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
MEMBER
null
fix #5634
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5636/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5636", "html_url": "https://github.com/huggingface/transformers/pull/5636", "diff_url": "https://github.com/huggingface/transformers/pull/5636.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5636.patch", "merged_at": 1594317273000 }
https://api.github.com/repos/huggingface/transformers/issues/5635
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5635/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5635/comments
https://api.github.com/repos/huggingface/transformers/issues/5635/events
https://github.com/huggingface/transformers/pull/5635
654,204,951
MDExOlB1bGxSZXF1ZXN0NDQ2OTc2NzYw
5,635
[WIP][Examples] Adding more examples and more introductory tutorials
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=h1) Report\n> Merging [#5635](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8edfaaa81b9995cedea2f8805e4c18c2b6cb5bfc&el=desc) will **decrease** coverage by `0.58%`.\n> The diff coverage is `18.18%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5635/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5635 +/- ##\n==========================================\n- Coverage 78.29% 77.71% -0.59% \n==========================================\n Files 146 146 \n Lines 26607 26344 -263 \n==========================================\n- Hits 20832 20473 -359 \n- Misses 5775 5871 +96 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils\\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `83.85% <18.18%> (-9.99%)` | :arrow_down: |\n| [...c/transformers/modeling\\_tf\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `9.90% <0.00%> (-76.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `17.22% <0.00%> (-72.24%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `25.82% <0.00%> (-62.38%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-21.63%)` | :arrow_down: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-19.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `80.98% <0.00%> (-11.99%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.42% <0.00%> (-6.24%)` | :arrow_down: |\n| [src/transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |\n| ... 
and [49 more](https://codecov.io/gh/huggingface/transformers/pull/5635/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=footer). Last update [8edfaaa...67859e5](https://codecov.io/gh/huggingface/transformers/pull/5635?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Ooops, the rebase made the diff unreadable on this PR. Opening a new PR from this branch." ]
1,594
1,597
1,597
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5635/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5635", "html_url": "https://github.com/huggingface/transformers/pull/5635", "diff_url": "https://github.com/huggingface/transformers/pull/5635.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5635.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/5634
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5634/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5634/comments
https://api.github.com/repos/huggingface/transformers/issues/5634/events
https://github.com/huggingface/transformers/issues/5634
654,196,438
MDU6SXNzdWU2NTQxOTY0Mzg=
5,634
T5 has no module ```torch_xla``` when using T5 fine-tuned on SQUADv2
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, it seems this model was once trained or initialized on TPU. Thanks for letting us know, I'm patching it in #5636.", "Yes, it was trained on TPU", "It should be fixed on master now, can you try pulling from master and running your code?", "I already did it and it works!!!\r\nThank you!!" ]
1,594
1,594
1,594
CONTRIBUTOR
null
# 🐛 Bug ## Information I get this error: `ModuleNotFoundError: No module named 'torch_xla'` Full error message:
```
      2
      3 tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
----> 4 model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
      5
      6 def get_answer(question, context):

1 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    796
    797         if hasattr(config, "xla_device") and config.xla_device:
--> 798             import torch_xla.core.xla_model as xm
    799
    800             model = xm.send_cpu_data_to_device(model, xm.xla_device())

ModuleNotFoundError: No module named 'torch_xla'
```
## To reproduce
```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
model = AutoModelWithLMHead.from_pretrained("mrm8488/t5-base-finetuned-squadv2")

def get_answer(question, context):
    input_text = "question: %s context: %s </s>" % (question, context)
    features = tokenizer([input_text], return_tensors='pt')
    output = model.generate(input_ids=features['input_ids'], attention_mask=features['attention_mask'])
    return tokenizer.decode(output[0])

context = "Manuel have created RuPERTa-base with the support of HF-Transformers and Google"
question = "Who has supported Manuel?"

get_answer(question, context)
```
I used this example code a few weeks ago and had no problem...
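[Editorial workaround sketch, not from the original thread] The traceback shows that `from_pretrained` only imports `torch_xla` when `config.xla_device` is truthy, so clearing the stored flag before loading should avoid the import on machines without TPU support (the proper fix landed upstream in #5636):
```python
from transformers import AutoConfig, AutoModelWithLMHead

config = AutoConfig.from_pretrained("mrm8488/t5-base-finetuned-squadv2")
config.xla_device = False  # stored as True because the model was trained on TPU
model = AutoModelWithLMHead.from_pretrained(
    "mrm8488/t5-base-finetuned-squadv2", config=config
)
```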
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5634/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5633
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5633/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5633/comments
https://api.github.com/repos/huggingface/transformers/issues/5633/events
https://github.com/huggingface/transformers/pull/5633
654,193,749
MDExOlB1bGxSZXF1ZXN0NDQ2OTY3NDkz
5,633
More explicit error when failing to tensorize overflowing tokens
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,594
1,594
1,594
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5633/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5633", "html_url": "https://github.com/huggingface/transformers/pull/5633", "diff_url": "https://github.com/huggingface/transformers/pull/5633.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5633.patch", "merged_at": 1594316121000 }
https://api.github.com/repos/huggingface/transformers/issues/5632
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5632/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5632/comments
https://api.github.com/repos/huggingface/transformers/issues/5632/events
https://github.com/huggingface/transformers/pull/5632
654,174,664
MDExOlB1bGxSZXF1ZXN0NDQ2OTUyNjA2
5,632
Fixed use of memories in XLNet (caching for language generation + warning when loading improper memoryless model)
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=h1) Report\n> Merging [#5632](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.07%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5632/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5632 +/- ##\n==========================================\n- Coverage 77.79% 77.72% -0.08% \n==========================================\n Files 145 145 \n Lines 25355 25364 +9 \n==========================================\n- Hits 19726 19715 -11 \n- Misses 5629 5649 +20 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/configuration\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `94.33% <100.00%> (+0.33%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `91.39% <100.00%> (+0.04%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.96% <100.00%> (+0.10%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5632/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=footer). Last update [fa5423b...25fae1b](https://codecov.io/gh/huggingface/transformers/pull/5632?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Also the slow test of XLNet will have to be adapted when doing this change no?", "So in order:\r\n\r\na) b) I would really like to to have proper GPT-2 style caching in XLNet. This would require changing the outputs of the XLNet `forward` to add `presents` outputs that contain the K/V pairs, like in GPT-2. That probably counts as BC-breaking, right @LysandreJik ?\r\n\r\nc) `offset = 2` is definitely a bit random. 
I was mistaken at first, as I thought it would be proper autoregressive generation, but `offset=1` is what should be used in this case. After manually checking outputs I had the impression that `offset = 2` was slightly better (mostly it goes less into repetitive generation loops) at a negligible computation time cost; but I agree that `offset = 1` is more principled and I don't have a strong opinion on that choice.\r\n\r\nd) The slow tests run with the default model, which still has `mem_length = 0` and no caching, so it doesn't make a difference yet. ", "> So in order:\r\n> \r\n> a) b) I would really like to to have proper GPT-2 style caching in XLNet. This would require changing the outputs of the XLNet `forward` to add `presents` outputs that contain the K/V pairs, like in GPT-2. That probably counts as BC-breaking, right @LysandreJik ?\r\n> \r\n> c) `offset = 2` is definitely a bit random. I was mistaken at first, as I thought it would be proper autoregressive generation, but `offset=1` is what should be used in this case. After manually checking outputs I had the impression that `offset = 2` was slightly better (mostly it goes less into repetitive generation loops) at a negligible computation time cost; but I agree that `offset = 1` is more principled and I don't have a strong opinion on that choice.\r\n> \r\n> d) The slow tests run with the default model, which still has `mem_length = 0` and no caching, so it doesn't make a difference yet.\r\n\r\na)b) I think would count as a feature enhancement. We would make the `past` variable optional so no backward breaking here IMO. But I agree it would definitely be better to this in a new PR.", "I've added a comment. I'm preparing another PR for proper caching and merging this one." ]
1,594
1,594
1,594
CONTRIBUTOR
null
The default XLNet model is loaded with 0 memory length, which is an issue both at training time (improper performance) and inference time (as there's no caching speed-up since it doesn't return former attentions). As discussed with @LysandreJik , this PR introduces a warning that in the future, the default XLNet model will have 1024 memory length, in accordance with [the original paper](https://arxiv.org/abs/1906.08237). It also fixes the re-use of cached memory, which was broken similarly to TransfoXL (#4752).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5632/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5632", "html_url": "https://github.com/huggingface/transformers/pull/5632", "diff_url": "https://github.com/huggingface/transformers/pull/5632.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5632.patch", "merged_at": 1594395517000 }
https://api.github.com/repos/huggingface/transformers/issues/5631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5631/comments
https://api.github.com/repos/huggingface/transformers/issues/5631/events
https://github.com/huggingface/transformers/pull/5631
654,122,295
MDExOlB1bGxSZXF1ZXN0NDQ2OTEwNzM5
5,631
Correct extension for model summary links
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=h1) Report\n> Merging [#5631](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c82bf6831b49e1e6029d09488081d5d98a272e9&el=desc) will **decrease** coverage by `0.18%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5631/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5631 +/- ##\n==========================================\n- Coverage 77.05% 76.87% -0.19% \n==========================================\n Files 145 145 \n Lines 25364 25364 \n==========================================\n- Hits 19545 19499 -46 \n- Misses 5819 5865 +46 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.50%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.63% <0.00%> (+1.31%)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+1.80%)` | :arrow_up: |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+2.73%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.55% <0.00%> (+11.22%)` | :arrow_up: |\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <0.00%> (+17.09%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <0.00%> (+21.29%)` | :arrow_up: |\n| [src/transformers/trainer\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `85.71% <0.00%> (+25.71%)` | :arrow_up: |\n| ... 
and [5 more](https://codecov.io/gh/huggingface/transformers/pull/5631/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=footer). Last update [5c82bf6...2bd8a57](https://codecov.io/gh/huggingface/transformers/pull/5631?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5631/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5631", "html_url": "https://github.com/huggingface/transformers/pull/5631", "diff_url": "https://github.com/huggingface/transformers/pull/5631.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5631.patch", "merged_at": 1594306988000 }
https://api.github.com/repos/huggingface/transformers/issues/5630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5630/comments
https://api.github.com/repos/huggingface/transformers/issues/5630/events
https://github.com/huggingface/transformers/issues/5630
654,104,340
MDU6SXNzdWU2NTQxMDQzNDA=
5,630
How can I fine-tune a custom model
{ "login": "pinedbean", "id": 9152586, "node_id": "MDQ6VXNlcjkxNTI1ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/9152586?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pinedbean", "html_url": "https://github.com/pinedbean", "followers_url": "https://api.github.com/users/pinedbean/followers", "following_url": "https://api.github.com/users/pinedbean/following{/other_user}", "gists_url": "https://api.github.com/users/pinedbean/gists{/gist_id}", "starred_url": "https://api.github.com/users/pinedbean/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pinedbean/subscriptions", "organizations_url": "https://api.github.com/users/pinedbean/orgs", "repos_url": "https://api.github.com/users/pinedbean/repos", "events_url": "https://api.github.com/users/pinedbean/events{/privacy}", "received_events_url": "https://api.github.com/users/pinedbean/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You can try this:\r\nhttps://github.com/huggingface/transformers/pull/3009/commits/489dd7608c5b3d4acaf997a2b4fbccc3d7144cf3\r\n\r\nbut there is no bilstm layer. You can add it by yourself", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I would like to use TFBert as encoder then I will add additional layer on-top of it (with custom model class) So, I would like to fine-tune all layer down to encoder layer Specifically, I would like to do BERT-BiLSTM-CRF for NER task Is there a way to do it? Thank you for your answer <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5630/timeline
completed
null
null
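A minimal sketch of the setup asked about above, assuming TensorFlow/Keras and the `TFBertModel` encoder. The tag count, sequence length, and checkpoint are illustrative, and the CRF layer is deliberately left out (a `tensorflow_addons` CRF could be stacked on the logits) to keep the snippet self-contained:

```python
import tensorflow as tf
from transformers import TFBertModel

num_labels, max_len = 9, 128  # hypothetical NER tag count and sequence length

input_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
attention_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")

# TFBert as a trainable encoder: its weights are updated along with the head,
# so training reaches all the way down into BERT.
encoder = TFBertModel.from_pretrained("bert-base-cased")
sequence_output = encoder([input_ids, attention_mask])[0]  # (batch, seq, hidden)

x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(128, return_sequences=True))(sequence_output)
logits = tf.keras.layers.Dense(num_labels, activation="softmax")(x)

model = tf.keras.Model(inputs=[input_ids, attention_mask], outputs=logits)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5),
              loss="sparse_categorical_crossentropy")
```

Setting `encoder.trainable = False` would train only the BiLSTM head; leaving it trainable, as here, fine-tunes every layer down to the encoder as the question asks.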
https://api.github.com/repos/huggingface/transformers/issues/5629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5629/comments
https://api.github.com/repos/huggingface/transformers/issues/5629/events
https://github.com/huggingface/transformers/pull/5629
654,075,788
MDExOlB1bGxSZXF1ZXN0NDQ2ODcyNzEy
5,629
Fixed TextGenerationPipeline on torch + GPU
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=h1) Report\n> Merging [#5629](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.21%`.\n> The diff coverage is `100.00%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5629/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5629 +/- ##\n==========================================\n- Coverage 77.79% 77.58% -0.22% \n==========================================\n Files 145 145 \n Lines 25355 25357 +2 \n==========================================\n- Hits 19726 19672 -54 \n- Misses 5629 5685 +56 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.24% <100.00%> (+0.08%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.01%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5629/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=footer). Last update [fa5423b...dfeeffa](https://codecov.io/gh/huggingface/transformers/pull/5629?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "@LysandreJik if you wanna take a look, it's just a short bugfix (essentially adding the `if self.framework == \"pt\", generated_sequence = generated_sequence.cpu()` line)", "LGTM!" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Fixes #5622.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5629/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5629", "html_url": "https://github.com/huggingface/transformers/pull/5629", "diff_url": "https://github.com/huggingface/transformers/pull/5629.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5629.patch", "merged_at": 1594326573000 }
https://api.github.com/repos/huggingface/transformers/issues/5628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5628/comments
https://api.github.com/repos/huggingface/transformers/issues/5628/events
https://github.com/huggingface/transformers/issues/5628
654,059,585
MDU6SXNzdWU2NTQwNTk1ODU=
5,628
Support for Polyencoder and other retriever based models
{ "login": "prakhar6sharma", "id": 37648724, "node_id": "MDQ6VXNlcjM3NjQ4NzI0", "avatar_url": "https://avatars.githubusercontent.com/u/37648724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/prakhar6sharma", "html_url": "https://github.com/prakhar6sharma", "followers_url": "https://api.github.com/users/prakhar6sharma/followers", "following_url": "https://api.github.com/users/prakhar6sharma/following{/other_user}", "gists_url": "https://api.github.com/users/prakhar6sharma/gists{/gist_id}", "starred_url": "https://api.github.com/users/prakhar6sharma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prakhar6sharma/subscriptions", "organizations_url": "https://api.github.com/users/prakhar6sharma/orgs", "repos_url": "https://api.github.com/users/prakhar6sharma/repos", "events_url": "https://api.github.com/users/prakhar6sharma/events{/privacy}", "received_events_url": "https://api.github.com/users/prakhar6sharma/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# ❓ Questions & Help Is there any way I can load Polyencoder and other retriever based models from ParlAI in huggingface/transformers because as of now, there seems no support of loading huggingface/transformers models in ParlAI other than GPT. Polyencoder: https://arxiv.org/abs/1905.01969 ParlAI Implementation : https://github.com/facebookresearch/ParlAI/blob/master/parlai/agents/transformer/polyencoder.py
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5628/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5627/comments
https://api.github.com/repos/huggingface/transformers/issues/5627/events
https://github.com/huggingface/transformers/issues/5627
654,053,334
MDU6SXNzdWU2NTQwNTMzMzQ=
5,627
Model doc failed
{ "login": "zheyuye", "id": 37728728, "node_id": "MDQ6VXNlcjM3NzI4NzI4", "avatar_url": "https://avatars.githubusercontent.com/u/37728728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zheyuye", "html_url": "https://github.com/zheyuye", "followers_url": "https://api.github.com/users/zheyuye/followers", "following_url": "https://api.github.com/users/zheyuye/following{/other_user}", "gists_url": "https://api.github.com/users/zheyuye/gists{/gist_id}", "starred_url": "https://api.github.com/users/zheyuye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zheyuye/subscriptions", "organizations_url": "https://api.github.com/users/zheyuye/orgs", "repos_url": "https://api.github.com/users/zheyuye/repos", "events_url": "https://api.github.com/users/zheyuye/events{/privacy}", "received_events_url": "https://api.github.com/users/zheyuye/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This has been fixed in master, https://huggingface.co/transformers/master/model_summary.html will have the proper links.", "This has now been fixed for the stable version as well!" ]
1,594
1,594
1,594
NONE
null
As the title says, the model doc links fail on the page https://huggingface.co/transformers/model_summary.html, e.g. https://huggingface.co/model_doc/distilbert
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5627/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5626/comments
https://api.github.com/repos/huggingface/transformers/issues/5626/events
https://github.com/huggingface/transformers/pull/5626
654,047,683
MDExOlB1bGxSZXF1ZXN0NDQ2ODQ5OTE1
5,626
doc: fix apparent copy-paste error in docstring
{ "login": "gthb", "id": 153580, "node_id": "MDQ6VXNlcjE1MzU4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/153580?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gthb", "html_url": "https://github.com/gthb", "followers_url": "https://api.github.com/users/gthb/followers", "following_url": "https://api.github.com/users/gthb/following{/other_user}", "gists_url": "https://api.github.com/users/gthb/gists{/gist_id}", "starred_url": "https://api.github.com/users/gthb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gthb/subscriptions", "organizations_url": "https://api.github.com/users/gthb/orgs", "repos_url": "https://api.github.com/users/gthb/repos", "events_url": "https://api.github.com/users/gthb/events{/privacy}", "received_events_url": "https://api.github.com/users/gthb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=h1) Report\n> Merging [#5626](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5626/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5626 +/- ##\n==========================================\n- Coverage 77.79% 77.56% -0.24% \n==========================================\n Files 145 145 \n Lines 25355 25355 \n==========================================\n- Hits 19726 19667 -59 \n- Misses 5629 5688 +59 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/commands/train.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFpbi5weQ==) | `0.00% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.90% <0.00%> (-33.90%)` | :arrow_down: |\n| [src/transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.20% <0.00%> (-2.76%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5626/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=footer). Last update [fa5423b...959b687](https://codecov.io/gh/huggingface/transformers/pull/5626?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5626/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5626", "html_url": "https://github.com/huggingface/transformers/pull/5626", "diff_url": "https://github.com/huggingface/transformers/pull/5626.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5626.patch", "merged_at": 1594712861000 }
https://api.github.com/repos/huggingface/transformers/issues/5625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5625/comments
https://api.github.com/repos/huggingface/transformers/issues/5625/events
https://github.com/huggingface/transformers/issues/5625
654,047,157
MDU6SXNzdWU2NTQwNDcxNTc=
5,625
Cannot reproduce roberta-large on SQuAD
{ "login": "zheyuye", "id": 37728728, "node_id": "MDQ6VXNlcjM3NzI4NzI4", "avatar_url": "https://avatars.githubusercontent.com/u/37728728?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zheyuye", "html_url": "https://github.com/zheyuye", "followers_url": "https://api.github.com/users/zheyuye/followers", "following_url": "https://api.github.com/users/zheyuye/following{/other_user}", "gists_url": "https://api.github.com/users/zheyuye/gists{/gist_id}", "starred_url": "https://api.github.com/users/zheyuye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zheyuye/subscriptions", "organizations_url": "https://api.github.com/users/zheyuye/orgs", "repos_url": "https://api.github.com/users/zheyuye/repos", "events_url": "https://api.github.com/users/zheyuye/events{/privacy}", "received_events_url": "https://api.github.com/users/zheyuye/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "And also, it seems that the glue results showcased in https://huggingface.co/roberta-large refers to the paper instead of fine-tuning from your own script.", "@ZheyuYe\r\n\r\nIs this hyperparameter ` --adam_betas '(0.9, 0.98)' \\` available in a modified Transformers v3.0.2 `run_squad.py`, or earlier version?\r\nIt is not appearing as an available hyperparameter in the current master version.\r\n\r\nI was able to fine-tune RoBERTa large on Squad 2.0 as late as 29June20: [\"ahotrod/roberta_large_squad2\"](https://huggingface.co/ahotrod/roberta_large_squad2) with satisfactory results.", "@ahotrod \r\n\r\nSince the `betas` is available in current Adamw, I added this flag `adam_betas` to match the optimizer hyper-parameters as https://github.com/ZheyuYe/transformers/blob/efc022060195dca384a95546c6134667696f957f/examples/question-answering/run_squad.py#L98-L101\r\n\r\n Thanks for providing these useful hyperparameter, I am going to re-fune-tune `roberta-large` on Squad 2.0. I was noticed that `warmup_steps = 1642` was selected with total `Total optimization steps = 8211`, so that is `warmup_ratio = 0.2`? The other thing that confuses me is why would you choose `do_lower_case` when RoBERTa was pretrained with cased setting https://github.com/pytorch/fairseq/issues/1429.", "@ZheyuYe \r\n\r\nI'm guessing the `do_lower_case` was overlooked when I started with script from another model fine-tuning. However, I believe I've read that `do_lower_case` has no effect with newer models running the latest `run_squad.py`. If I was fine-tuning RoBERTa_large again I'd leave it out.\r\n\r\nI have fine-tuned RoBERTa_large a bunch of times, varying the hyperparameters: #epochs, learning rate, warmup ratio, etc. Plus I switched mid-stream from using 2x NVIDIA 1080Ti GPUs to a single 24GB NVIDIA RTX Titan. Compared to the original RoBERTa paper's **Table 10 Hyperparameters**, for this particular run I bumped epochs from 2 to 3, and increased warmup ratio to 0.2 with good success. It may be a result of RoBERTa not being that dependent on warmup ratio used, I don't know for sure. With my configuration, this fine-tuning script produced the best results. Check-out the tensorboard loss & learning rate graphs, script, etc. at [https://huggingface.co/ahotrod/roberta_large_squad2#list-files](https://huggingface.co/ahotrod/roberta_large_squad2#list-files)\r\n\r\nAs quoted many times, \"Your mileage may vary\" ;-]\r\nHave fun with it! Hope you beat my results.", "I am closing this issue since I have got the competitive results although there is still a gap from the paper's" ]
1,594
1,594
1,594
NONE
null
## Information Using the same hyper-parameters as the [paper](https://arxiv.org/abs/1907.11692), I fine-tuned roberta-large on SQuAD1.1, with the disappointing results shown below. I am wondering whether the reason might be that the gradient normalization differs from the [official implementation](), although it hasn't been released yet. ``` {'exact': 0.21759697256385999, 'f1': 7.113439302309792, ' total': 10570, 'HasAns_exact': 0.21759697256385999, 'HasAns_f1': 7.113439302309792, 'HasAns_total': 10570, 'best_exact': 0.21759697256385999, 'best_exact_thresh': 0.0, 'best_f1': 7.113439302309792, 'best_f1_thresh': 0.0} ``` ## To reproduce ``` python3 -m torch.distributed.launch --nproc_per_node=4 ./examples/question-answering/run_squad.py \ --model_type roberta \ --model_name_or_path roberta-large \ --do_train \ --do_eval \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --learning_rate 1.5e-5 \ --weight_decay 0.01 \ --max_grad_norm 0.0 \ --num_train_epochs 2 \ --warmup_steps 222 \ --adam_betas '(0.9, 0.98)' \ --adam_epsilon 1e-6 \ --max_seq_length 512 \ --doc_stride 128 \ --output_dir ./examples/models/finetuned_squad1.1/ \ --per_gpu_eval_batch_size=8 \ --per_gpu_train_batch_size=2 \ --gradient_accumulation_steps=6 \ --threads 8 \ --overwrite_cache \ ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5625/timeline
completed
null
null
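The hyper-parameters exchanged in this thread map directly onto the AdamW implementation shipped with `transformers`. A minimal, runnable sketch follows; a tiny stand-in module replaces the real QA model, and the step counts are the ones quoted in the comments above:

```python
import torch
from transformers import AdamW, get_linear_schedule_with_warmup

model = torch.nn.Linear(4, 2)        # stand-in; a real run passes the QA model's parameters
t_total, warmup_steps = 8211, 1642   # numbers quoted in the thread above

# AdamW with the RoBERTa-paper betas/epsilon and weight decay, plus linear warmup.
optimizer = AdamW(model.parameters(), lr=1.5e-5, betas=(0.9, 0.98),
                  eps=1e-6, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total)
```

Note that `warmup_steps / t_total = 1642 / 8211 ≈ 0.2`, the warmup ratio the two posters converge on.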
https://api.github.com/repos/huggingface/transformers/issues/5624
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5624/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5624/comments
https://api.github.com/repos/huggingface/transformers/issues/5624/events
https://github.com/huggingface/transformers/issues/5624
654,010,117
MDU6SXNzdWU2NTQwMTAxMTc=
5,624
Inference widgets for self-hosted models?
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "do you mind sending an email to clement [at] huggingface [dot] co explaining your usecase?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have created a simple package that demos how to self-host Hugging Face NER models with an UI akin to the Inference api~\r\n\r\nSelf-Hosted inference for NER:\r\n![image](https://user-images.githubusercontent.com/15624271/93560907-264e7580-f9be-11ea-8c57-8f878e95cb3a.png)\r\n\r\navailable @ https://github.com/cceyda/lit-NER\r\n❤️ using HuggingFace+Torchserve+Displacy+Streamlit \r\n", "looks good @cceyda! Do you have a publicly hosted demo somewhere?", "@julien-c my main objective was to provide an easy way to have an interface API&AI for self-hosted models vs public~ but why not do both 😄 so I put up a demo [here](https://share.streamlit.io/cceyda/lit-ner/public/lit_ner.py) (I recently got access to streamlit sharing beta 🥳 )\r\nYou can enter the model name of any NER model hosted at [huggingface ❤️ ](https://huggingface.co/models?filter=token-classification&search=ner) like so:\r\n![image](https://user-images.githubusercontent.com/15624271/95163741-7f524200-07e3-11eb-8c41-7714b8ed3ac8.png)\r\n\r\n\r\n**_OR_** even your custom self hosted model by using the example torchserve recipe I made [lit_ner/serve.py](https://github.com/cceyda/lit-NER/blob/master/examples/serve.ipynb) (currently there is no security setup) \r\n\r\nBTW, There are some problems with the current NER pipeline (which I provide a fix PR for [here](https://github.com/huggingface/transformers/pull/5970))\r\nExample error:\r\n![image](https://user-images.githubusercontent.com/15624271/95163217-5aa99a80-07e2-11eb-8a85-4144f7deb636.png)\r\nAt my local with changes from the PR:\r\n![image](https://user-images.githubusercontent.com/15624271/95163363-ab20f800-07e2-11eb-9184-8c300ea7c46b.png)\r\n\r\nPS: this can be easily expanded to other pipelines and also highly customizable 😉 will polish it more as soon as I have some time", "That's neat! Do you mind if I tweet it? (what's your Twitter handle :-)", "@julien-c of course I would like it very much. I have also written a [blog post](https://cceyda.github.io/blog/huggingface/torchserve/streamlit/ner/2020/10/09/huggingface_streamlit_serve.html) about it, my first blog post! 🥳 My [twitter](https://twitter.com/ceyda_cinarel) is so unused it is embarrassing 😆 until now I used it just for following the news, but in the future I will be using it for sharing my blog post notifications. hoping that someone will be reading it 🤞 😄 \r\n", "[Tweet is up](https://twitter.com/julien_c), I'll close this issue now, thanks again for sharing" ]
1,594
1,602
1,602
CONTRIBUTOR
null
# 🚀 Feature request I'm loving the new [huggingface](https://huggingface.co/bert-base-uncased?) dataset browsing & hosted model interfaces 🤯 So firstly a huge thank you to everyone <3 This is a question/feature request ~ - Can we use inference widgets for self-hosted models? I see that there is a [serving.py](https://github.com/huggingface/transformers/blob/master/src/transformers/commands/serving.py) (transformers-cli) but nothing about widgets as far as I can see. If this is possible I would love an example of how; if not, will it be in the future? ## Motivation Inference widgets would be nice to have during model testing & demos ## Your contribution I was planning on doing something similar using `huggingface -> spacy(displacy) -> streamlit` (for NER)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5624/timeline
completed
null
null
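In the spirit of the lit-NER demo linked in the comments, here is a minimal sketch of a self-hosted widget: a Streamlit front-end over the `ner` pipeline. The default model name is just one example of a public token-classification checkpoint, and the caching decorator reflects the Streamlit API of that era:

```python
import streamlit as st
from transformers import pipeline

model_name = st.text_input("Model", "dslim/bert-base-NER")
text = st.text_area("Text", "My name is Wolfgang and I live in Berlin.")

@st.cache(allow_output_mutation=True)
def load(name):
    # Cached so Streamlit reruns reuse one pipeline instead of reloading the model.
    return pipeline("ner", model=name, tokenizer=name, grouped_entities=True)

if st.button("Run"):
    st.write(load(model_name)(text))
```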
https://api.github.com/repos/huggingface/transformers/issues/5623
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5623/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5623/comments
https://api.github.com/repos/huggingface/transformers/issues/5623/events
https://github.com/huggingface/transformers/issues/5623
654,003,305
MDU6SXNzdWU2NTQwMDMzMDU=
5,623
Predictor in Streamlit Docker eating all memory and OOM
{ "login": "Aklmenrah", "id": 62019846, "node_id": "MDQ6VXNlcjYyMDE5ODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/62019846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aklmenrah", "html_url": "https://github.com/Aklmenrah", "followers_url": "https://api.github.com/users/Aklmenrah/followers", "following_url": "https://api.github.com/users/Aklmenrah/following{/other_user}", "gists_url": "https://api.github.com/users/Aklmenrah/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aklmenrah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aklmenrah/subscriptions", "organizations_url": "https://api.github.com/users/Aklmenrah/orgs", "repos_url": "https://api.github.com/users/Aklmenrah/repos", "events_url": "https://api.github.com/users/Aklmenrah/events{/privacy}", "received_events_url": "https://api.github.com/users/Aklmenrah/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,594
1,600
1,600
NONE
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): DistilBert Language I am using the model on (English, Chinese ...): English The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on are: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Training on my dataset and creation of a model on Colab 2. Saving the said model locally 3. Loading the predictor in my app, with the Streamlit library 4. Deployment of said app in a Docker container This is my app, which uses a predictor with Streamlit. ``` import pprint import re import ktrain import numpy as np import pandas as pd import streamlit as st import trafilatura from googlesearch import search from ktrain import text, predictor st.title(':crystal_ball: schemaPredictor :crystal_ball:') add_selectbox = st.selectbox( 'How would you like to predict?', ('Text', 'Url')) predictor = ktrain.load_predictor('./tmp/schema_mapping') def get_prob(p): i = 0 for x in p: if x > i: i = x return i if add_selectbox == "Text": body = st.text_area('Insert your text here, as clean as possible.') if st.button("Predict"): st.success(":crystal_ball: " + predictor.predict(body) + " :crystal_ball:") st.success("With a probability of " + "{:.1%}".format(get_prob(predictor.predict_proba(body)))) elif add_selectbox == "Url": body = st.text_input('Insert your url here') if st.button("Predict"): page = body downloaded = trafilatura.fetch_url(page) result = trafilatura.extract(downloaded, include_tables=False, include_formatting=False, include_comments=False) st.success(":crystal_ball: " + predictor.predict(result) + " :crystal_ball:") st.success("With a probability of " + "{:.1%}".format(get_prob(predictor.predict_proba(result)))) ``` ## Expected behavior If I run this app locally, without a Docker container but in a conda env, it behaves differently: it still takes memory at each iteration, but when it gets to around 10/11 GB it frees memory to use it again. That is on my 12 GB RAM laptop. So I expected this to happen in my container too, but what happens in a container is that at each 'predict' it takes some memory, and it goes on until it runs OOM. I tried with a Docker container with 4 CPUs, 12 GB of RAM and 1 GB of swap. ## Environment info Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2020-07-09 11:56:21.400037: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA 2020-07-09 11:56:21.404641: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3800005000 Hz 2020-07-09 11:56:21.404943: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55dce5a7cca0 initialized for platform Host (this does not guarantee that XLA will be used). Devices: 2020-07-09 11:56:21.405087: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-07-09 11:56:21.406403: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2020-07-09 11:56:21.406441: E tensorflow/stream_executor/cuda/cuda_driver.cc:351] failed call to cuInit: UNKNOWN ERROR (303) 2020-07-09 11:56:21.406509: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (f8630e8d0e49): /proc/driver/nvidia/version does not exist Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 2.11.0 - Platform: Linux-4.19.76-linuxkit-x86_64-with-debian-10.4 - Python version: 3.7.7 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.1.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5623/timeline
completed
null
null
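One detail worth noting about the script in the report above: Streamlit reruns the whole file on every interaction, so the module-level `ktrain.load_predictor` call reloads the model each time. A common mitigation (an assumption, not a verified fix for this exact report) is to cache the predictor so reruns share a single instance:

```python
import ktrain
import streamlit as st

@st.cache(allow_output_mutation=True)
def get_predictor(path="./tmp/schema_mapping"):
    # Loaded once per process; subsequent reruns hit the cache instead of
    # building a fresh TensorFlow graph and holding extra memory in the container.
    return ktrain.load_predictor(path)

predictor = get_predictor()
```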
https://api.github.com/repos/huggingface/transformers/issues/5622
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5622/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5622/comments
https://api.github.com/repos/huggingface/transformers/issues/5622/events
https://github.com/huggingface/transformers/issues/5622
653,923,436
MDU6SXNzdWU2NTM5MjM0MzY=
5,622
TextGenerationPipeline breaks when used with device=0
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false } ]
[ "If that's not the case, we should make sure that the pipelines run on GPU in the GPU CI. (fast and slow), to catch things like this." ]
1,594
1,594
1,594
CONTRIBUTOR
null
# 🐛 Bug ## Information Model I am using (Bert, XLNet ...): model-agnostic (breaks with GPT2 and XLNet) Language I am using the model on (English, Chinese ...): English The problem arises when using: [x] my own modified scripts: (give details below) The tasks I am working on is: [x] my own task or dataset: plain old language generation ## To reproduce Steps to reproduce the behavior: ``` #!/usr/bin/env python3 import random from transformers import pipeline, XLNetLMHeadModel import torch import time random.seed(0) torch.manual_seed(0) generator = pipeline("text-generation", model="xlnet-base-cased", tokenizer="xlnet-base-cased", device=0) output_to_check = generator("Today is a beautiful day and I, ", offset=offset, do_sample=True, top_k=50, max_len=100) ``` ## Expected behavior What should happen : text generation What actually happens : ``` Traceback (most recent call last): File "/home/teven/dev_transformers/perso/transformers/generation_script.py", line 15, in <module> output_to_check = generator("Today is a beautiful day and I, ", offset=offset, do_sample=True, top_k=50, max_len=100) File "/home/teven/dev_transformers/perso/transformers/src/transformers/pipelines.py", line 692, in __call__ generated_sequence = generated_sequence.numpy().tolist() TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. ``` Just missing a conversion before the `.numpy()` call ## Environment info - `transformers` version: 3.0.2 - Platform: Linux-5.3.0-62-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5622/timeline
completed
null
null
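For reference, the fix merged in #5629 amounts to moving the generated tensor back to host memory (`generated_sequence.cpu()`) before the `.numpy()` call when the PyTorch backend runs on GPU, as quoted in that PR's discussion. With that change, GPU pipeline usage along the lines of the report works; a minimal sketch, assuming a CUDA device is available for `device=0`:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2", tokenizer="gpt2", device=0)
# The pipeline now copies tensors back to the CPU internally before converting to numpy.
print(generator("Today is a beautiful day and I,", do_sample=True, top_k=50, max_length=100))
```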
https://api.github.com/repos/huggingface/transformers/issues/5621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5621/comments
https://api.github.com/repos/huggingface/transformers/issues/5621/events
https://github.com/huggingface/transformers/pull/5621
653,840,915
MDExOlB1bGxSZXF1ZXN0NDQ2NjgwMzEx
5,621
Add freshly trained `codegram/calbert-base-uncased`
{ "login": "txus", "id": 83234, "node_id": "MDQ6VXNlcjgzMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/83234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/txus", "html_url": "https://github.com/txus", "followers_url": "https://api.github.com/users/txus/followers", "following_url": "https://api.github.com/users/txus/following{/other_user}", "gists_url": "https://api.github.com/users/txus/gists{/gist_id}", "starred_url": "https://api.github.com/users/txus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/txus/subscriptions", "organizations_url": "https://api.github.com/users/txus/orgs", "repos_url": "https://api.github.com/users/txus/repos", "events_url": "https://api.github.com/users/txus/events{/privacy}", "received_events_url": "https://api.github.com/users/txus/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=h1) Report\n> Merging [#5621](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fa5423b1695cd24856bcff47214172e0f540d924&el=desc) will **decrease** coverage by `1.09%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/5621/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #5621 +/- ##\n==========================================\n- Coverage 77.79% 76.70% -1.10% \n==========================================\n Files 145 145 \n Lines 25355 25355 \n==========================================\n- Hits 19726 19448 -278 \n- Misses 5629 5907 +278 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <0.00%> (-73.39%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <0.00%> (-69.52%)` | :arrow_down: |\n| [src/transformers/generation\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.25%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `93.36% <0.00%> (+49.37%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5621/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `95.18% <0.00%> (+74.91%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=footer). Last update [fa5423b...72f9aec](https://codecov.io/gh/huggingface/transformers/pull/5621?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,594
1,594
1,594
CONTRIBUTOR
null
Trained from the rewrite mentioned in #5599, just finished training last night. The model card now reflects both models, with tested code examples and links to Exbert.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5621/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/5621", "html_url": "https://github.com/huggingface/transformers/pull/5621", "diff_url": "https://github.com/huggingface/transformers/pull/5621.diff", "patch_url": "https://github.com/huggingface/transformers/pull/5621.patch", "merged_at": 1594395544000 }