Dataset columns (types and value statistics from the dataset viewer):

- url: string (length 62–66)
- repository_url: string (1 distinct value)
- labels_url: string (length 76–80)
- comments_url: string (length 71–75)
- events_url: string (length 69–73)
- html_url: string (length 50–56)
- id: int64 (377M–2.15B)
- node_id: string (length 18–32)
- number: int64 (1–29.2k)
- title: string (length 1–487)
- user: dict
- labels: list
- state: string (2 classes)
- locked: bool (2 classes)
- assignee: dict
- assignees: list
- comments: sequence
- created_at: int64 (1.54k–1.71k)
- updated_at: int64 (1.54k–1.71k)
- closed_at: int64 (1.54k–1.71k, nullable)
- author_association: string (4 classes)
- active_lock_reason: string (2 classes)
- body: string (length 0–234k, nullable)
- reactions: dict
- timeline_url: string (length 71–75)
- state_reason: string (3 classes)
- draft: bool (2 classes)
- pull_request: dict

url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
https://api.github.com/repos/huggingface/transformers/issues/3812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3812/comments | https://api.github.com/repos/huggingface/transformers/issues/3812/events | https://github.com/huggingface/transformers/pull/3812 | 600,591,475 | MDExOlB1bGxSZXF1ZXN0NDAzOTkyMjE4 | 3,812 | Question Answering support for Albert and Roberta in TF | {
"login": "Pierrci",
"id": 5020707,
"node_id": "MDQ6VXNlcjUwMjA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pierrci",
"html_url": "https://github.com/Pierrci",
"followers_url": "https://api.github.com/users/Pierrci/followers",
"following_url": "https://api.github.com/users/Pierrci/following{/other_user}",
"gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions",
"organizations_url": "https://api.github.com/users/Pierrci/orgs",
"repos_url": "https://api.github.com/users/Pierrci/repos",
"events_url": "https://api.github.com/users/Pierrci/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pierrci/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=h1) Report\n> Merging [#3812](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/301bf8d1b43d99efe1fdb5ba15871e975b3cb6cf&el=desc) will **increase** coverage by `0.04%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3812 +/- ##\n==========================================\n+ Coverage 78.27% 78.31% +0.04% \n==========================================\n Files 106 106 \n Lines 17964 17996 +32 \n==========================================\n+ Hits 14061 14094 +33 \n+ Misses 3903 3902 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `98.98% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `86.25% <100.00%> (+0.67%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `100.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3812/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.00% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=footer). Last update [301bf8d...44c92f3](https://codecov.io/gh/huggingface/transformers/pull/3812?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,586 | 1,587 | 1,587 | MEMBER | null | This PR simply adds `TFRobertaForQuestionAnswering` and `TFAlbertForQuestionAnswering` classes (I needed them to do some model conversions!) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3812/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3812",
"html_url": "https://github.com/huggingface/transformers/pull/3812",
"diff_url": "https://github.com/huggingface/transformers/pull/3812.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3812.patch",
"merged_at": 1587134730000
} |
https://api.github.com/repos/huggingface/transformers/issues/3811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3811/comments | https://api.github.com/repos/huggingface/transformers/issues/3811/events | https://github.com/huggingface/transformers/issues/3811 | 600,433,686 | MDU6SXNzdWU2MDA0MzM2ODY= | 3,811 | Pre-trained BART performance on XSum lower than expected | {
"login": "morningmoni",
"id": 8191712,
"node_id": "MDQ6VXNlcjgxOTE3MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8191712?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/morningmoni",
"html_url": "https://github.com/morningmoni",
"followers_url": "https://api.github.com/users/morningmoni/followers",
"following_url": "https://api.github.com/users/morningmoni/following{/other_user}",
"gists_url": "https://api.github.com/users/morningmoni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/morningmoni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/morningmoni/subscriptions",
"organizations_url": "https://api.github.com/users/morningmoni/orgs",
"repos_url": "https://api.github.com/users/morningmoni/repos",
"events_url": "https://api.github.com/users/morningmoni/events{/privacy}",
"received_events_url": "https://api.github.com/users/morningmoni/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm having the [exact same issue](https://github.com/pytorch/fairseq/issues/1971), with the official BART code on fairseq.\r\n\r\nThe author is currently looking into it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I downloaded data from [here](https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz) and was able to get 45.37 / 22.30 / 37.19 using facebook/bart-large-xsum model\r\n",
"> I downloaded data from [here](https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz) and was able to get 45.37 / 22.30 / 37.19 using facebook/bart-large-xsum model\r\n\r\nHi @swethmandava , this dataset seems to have different train/valid/test split from the original dataset. Can you reproduce the scores with the original dataset?"
] | 1,586 | 1,612 | 1,592 | NONE | null | Greetings,
I am trying to reproduce BART's results on XSum using 'bart-large-xsum' and a modified `examples/summarization/bart/evaluate_cnn.py` (max_length=60, min_length=10, beam=6, lenpen=1), but got lower ROUGE scores than reported.
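For reference, a minimal decoding sketch with those parameters (written against a current transformers release; the tokenizer call and the `facebook/bart-large-xsum` identifier are assumptions, not the `evaluate_cnn.py` code itself):
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-xsum")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-xsum").eval()

article = open("test.source").readline().strip()  # one source document per line
batch = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
with torch.no_grad():
    summary_ids = model.generate(
        batch["input_ids"],
        num_beams=6,         # beam=6
        max_length=60,       # max_length=60
        min_length=10,       # min_length=10
        length_penalty=1.0,  # lenpen=1
    )
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```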
I first obtained comparable results on CNNDM using 'bart-large-cnndm' and the dataset on s3:
CNNDM | R-1 | R-2 | R-L
-- | -- | -- | --
BART (Lewis et al., 2019) | 44.16 | 21.28 | 40.9
BART (ours) | 44.32 | 21.12 | 41.13
I then obtained the raw XSum dataset from the original authors and saved it to test.source and test.target (cased), as for CNNDM. Then I ran evaluate_cnn.py with the new parameters above. Is there anything that I am missing? Thank you!
XSum | R-1 | R-2 | R-L
-- | -- | -- | --
BART (Lewis et al., 2019) | 45.14 | 22.27 | 37.25
BART (ours) | 44.7 | 21.04 | 35.64
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3811/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/3811/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3810/comments | https://api.github.com/repos/huggingface/transformers/issues/3810/events | https://github.com/huggingface/transformers/issues/3810 | 600,413,860 | MDU6SXNzdWU2MDA0MTM4NjA= | 3,810 | run_glue.py example doesn't work for distilbert models | {
"login": "ereday",
"id": 13196191,
"node_id": "MDQ6VXNlcjEzMTk2MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/13196191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ereday",
"html_url": "https://github.com/ereday",
"followers_url": "https://api.github.com/users/ereday/followers",
"following_url": "https://api.github.com/users/ereday/following{/other_user}",
"gists_url": "https://api.github.com/users/ereday/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ereday/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ereday/subscriptions",
"organizations_url": "https://api.github.com/users/ereday/orgs",
"repos_url": "https://api.github.com/users/ereday/repos",
"events_url": "https://api.github.com/users/ereday/events{/privacy}",
"received_events_url": "https://api.github.com/users/ereday/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think I might have broken this on `master` when merging #3688 🤔\r\n\r\nHi @ereday could you please try from the `trainer` branch described in PR #3800?\r\n\r\nOtherwise, hotfixing this in your code should be easy (just remove the `all_token_type_ids` line)"
] | 1,586 | 1,587 | 1,587 | NONE | null | # 🐛 Bug
## Information
Hi all,
I am successfully able to run the run_glue.py example with BERT, XLNet and other architectures. However, when I try distilbert I get the following error:
```
Traceback (most recent call last):
File "run_glue.py", line 562, in <module>
main()
File "run_glue.py", line 510, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "run_glue.py", line 373, in load_and_cache_examples
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
```
Model I am using (Bert, XLNet ...):
distilbert (distilbert-base-cased)
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: GLUE/SST-2
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
export GLUE_DIR=./glue/glue_data
export TASK_NAME=SST-2
CUDA_VISIBLE_DEVICES=2,3 python run_glue.py \
--model_type DistilBERT \
--model_name_or_path distilbert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir ./output/$TASK_NAME/
```
## Environment info
```
- `transformers` version: 2.8.0
- Platform: Linux-5.5.15-200.fc31.x86_64-x86_64-with-fedora-31-Thirty_One
- Python version: 3.6.5
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3810/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3809/comments | https://api.github.com/repos/huggingface/transformers/issues/3809/events | https://github.com/huggingface/transformers/issues/3809 | 600,399,662 | MDU6SXNzdWU2MDAzOTk2NjI= | 3,809 | Roberta Tokenizer crashes when tokenizing empty string in 2.8.0 | {
"login": "HaokunLiu",
"id": 35565210,
"node_id": "MDQ6VXNlcjM1NTY1MjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/35565210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaokunLiu",
"html_url": "https://github.com/HaokunLiu",
"followers_url": "https://api.github.com/users/HaokunLiu/followers",
"following_url": "https://api.github.com/users/HaokunLiu/following{/other_user}",
"gists_url": "https://api.github.com/users/HaokunLiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaokunLiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaokunLiu/subscriptions",
"organizations_url": "https://api.github.com/users/HaokunLiu/orgs",
"repos_url": "https://api.github.com/users/HaokunLiu/repos",
"events_url": "https://api.github.com/users/HaokunLiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaokunLiu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Fixed by #4209"
] | 1,586 | 1,592 | 1,592 | NONE | null | ```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("roberta-large")
tokenizer.tokenize("")
```
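The crash comes from indexing an empty string in `prepare_for_tokenization`; a minimal guard (my own sketch, not the fix that was eventually merged in #4209) would be:
```python
# In RobertaTokenizer.prepare_for_tokenization: only inspect text[0]
# when the string is non-empty (illustrative sketch, not the merged patch).
if add_prefix_space and text and not text[0].isspace():
    text = " " + text
```
The traceback: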
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1054, in encode_plus
first_ids = get_input_ids(text)
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1028, in get_input_ids
tokens = self.tokenize(text, add_special_tokens=add_special_tokens, **kwargs)
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 759, in tokenize
text = self.prepare_for_tokenization(text, **kwargs)
File "/scratch/wh629/nlu/env/lib/python3.7/site-packages/transformers/tokenization_roberta.py", line 239, in prepare_for_tokenization
if add_prefix_space and not text[0].isspace():
IndexError: string index out of range | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3809/reactions",
"total_count": 7,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3809/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3808/comments | https://api.github.com/repos/huggingface/transformers/issues/3808/events | https://github.com/huggingface/transformers/pull/3808 | 600,384,026 | MDExOlB1bGxSZXF1ZXN0NDAzODI5ODE2 | 3,808 | typo: fine-grained token-leven | {
"login": "JonathanSum",
"id": 21982975,
"node_id": "MDQ6VXNlcjIxOTgyOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/21982975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonathanSum",
"html_url": "https://github.com/JonathanSum",
"followers_url": "https://api.github.com/users/JonathanSum/followers",
"following_url": "https://api.github.com/users/JonathanSum/following{/other_user}",
"gists_url": "https://api.github.com/users/JonathanSum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonathanSum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonathanSum/subscriptions",
"organizations_url": "https://api.github.com/users/JonathanSum/orgs",
"repos_url": "https://api.github.com/users/JonathanSum/repos",
"events_url": "https://api.github.com/users/JonathanSum/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonathanSum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=h1) Report\n> Merging [#3808](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/301bf8d1b43d99efe1fdb5ba15871e975b3cb6cf&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3808 +/- ##\n=======================================\n Coverage 78.27% 78.27% \n=======================================\n Files 106 106 \n Lines 17964 17964 \n=======================================\n Hits 14061 14061 \n Misses 3903 3903 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=footer). Last update [301bf8d...7a12e87](https://codecov.io/gh/huggingface/transformers/pull/3808?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | Changing from "fine-grained token-leven" to "fine-grained token-level" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3808/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3808",
"html_url": "https://github.com/huggingface/transformers/pull/3808",
"diff_url": "https://github.com/huggingface/transformers/pull/3808.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3808.patch",
"merged_at": 1587064284000
} |
https://api.github.com/repos/huggingface/transformers/issues/3807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3807/comments | https://api.github.com/repos/huggingface/transformers/issues/3807/events | https://github.com/huggingface/transformers/pull/3807 | 600,321,361 | MDExOlB1bGxSZXF1ZXN0NDAzNzc4NzI3 | 3,807 | isort ignores examples directory | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,588 | 1,588 | CONTRIBUTOR | null | Temporary solution while we wait for an isort release.
I have my local alias hacked in this way, but I figure new contributors might get confused by the circleci/local isort discrepancy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3807/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3807",
"html_url": "https://github.com/huggingface/transformers/pull/3807",
"diff_url": "https://github.com/huggingface/transformers/pull/3807.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3807.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3806/comments | https://api.github.com/repos/huggingface/transformers/issues/3806/events | https://github.com/huggingface/transformers/pull/3806 | 600,232,717 | MDExOlB1bGxSZXF1ZXN0NDAzNzA3NDQ1 | 3,806 | [cleanup] factor out get_head_mask, invert_attn_mask, get_extended_attention_mask | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks nice",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=h1) Report\n> Merging [#3806](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/01c37dcdb529bff38aadd51001cb5812e5fe9b21&el=desc) will **increase** coverage by `0.15%`.\n> The diff coverage is `90.14%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3806 +/- ##\n==========================================\n+ Coverage 78.26% 78.42% +0.15% \n==========================================\n Files 106 106 \n Lines 17964 17864 -100 \n==========================================\n- Hits 14060 14009 -51 \n+ Misses 3904 3855 -49 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `22.11% <40.00%> (+4.51%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.94% <92.00%> (-0.03%)` | :arrow_down: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.31% <100.00%> (+0.06%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.40% <100.00%> (+0.24%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <100.00%> (+0.78%)` | :arrow_up: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `98.15% <100.00%> (+0.56%)` | :arrow_up: |\n| [src/transformers/modeling\\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `73.20% <100.00%> (+0.55%)` | :arrow_up: |\n| [src/transformers/modeling\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.49% <100.00%> (+0.67%)` | :arrow_up: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `86.25% <100.00%> (+0.31%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <100.00%> (+0.24%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/3806/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=footer). Last update [01c37dc...2e7f6f4](https://codecov.io/gh/huggingface/transformers/pull/3806?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | This is three changes applied all over:
1) `get_head_mask` from @LysandreJik is used instead of a redundant snippet
2) `get_extended_attention_mask` is used instead of a redundant snippet (which also builds the causal mask; see the sketch after this list)
3) `invert_attention_mask` is used instead of a redundant snippet that doesn't build a causal mask.
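A sketch of the pattern (helper names as in `modeling_utils.py`; the surrounding `forward` skeleton is illustrative, not any one model's code):
```python
# Before: each model repeated ~20 lines that expanded attention_mask to
# [batch, 1, 1, seq_len] and filled masked positions with -10000.0.
# After: forward() delegates to the shared PreTrainedModel helpers.
def forward(self, input_ids, attention_mask=None, head_mask=None):
    input_shape = input_ids.size()
    # broadcastable additive mask; also builds the causal mask for decoders
    extended_attention_mask = self.get_extended_attention_mask(
        attention_mask, input_shape, input_ids.device
    )
    # normalizes head_mask to one entry per layer (or [None] * num_layers)
    head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
    ...
```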
These changes make the forward passes more readable and allow us to update common logic in one place moving forward! I was reading code last night to try to get a sense of what all the models/tokenizers do and was frustrated with the amount of time spent scrolling through this stuff. Especially for new people, having 100 repeated lines of input manipulation at the beginning makes it much harder to get to the meat of the `forward` pass.
Very open to suggestions.
Open to doing this for TF in a separate PR.
Also if `prune_heads` or other opportunities catch your eye, let me know.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3806/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3806",
"html_url": "https://github.com/huggingface/transformers/pull/3806",
"diff_url": "https://github.com/huggingface/transformers/pull/3806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3806.patch",
"merged_at": 1587045326000
} |
https://api.github.com/repos/huggingface/transformers/issues/3805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3805/comments | https://api.github.com/repos/huggingface/transformers/issues/3805/events | https://github.com/huggingface/transformers/issues/3805 | 600,084,422 | MDU6SXNzdWU2MDAwODQ0MjI= | 3,805 | Using fill-mask pipeline to get the “score” for a result it didn't suggest | {
"login": "p-christ",
"id": 26346243,
"node_id": "MDQ6VXNlcjI2MzQ2MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/26346243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-christ",
"html_url": "https://github.com/p-christ",
"followers_url": "https://api.github.com/users/p-christ/followers",
"following_url": "https://api.github.com/users/p-christ/following{/other_user}",
"gists_url": "https://api.github.com/users/p-christ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-christ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-christ/subscriptions",
"organizations_url": "https://api.github.com/users/p-christ/orgs",
"repos_url": "https://api.github.com/users/p-christ/repos",
"events_url": "https://api.github.com/users/p-christ/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-christ/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | Hi lovely huggingface people,
I'm trying to use your fill-mask pipeline in order to get the score for a result it didn't suggest.
For example, if my sentence is `"I ate bacon and <mask> for breakfast"`, I can use `pipeline('fill-mask')` to get back predictions and their scores, e.g. it might give me back `["eggs", 0.1]`. But what I would like to do is **provide my own guess and then get back the score it assigns to my own guess.** For example, I might want to know what score it gives to the word "pancakes" in that situation.
Is this possible? If not, can I register it as a feature request?
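(A workaround sketch, not an existing pipeline feature: read the masked-LM logits directly and index them with the candidate token. The model name and the single-token assumption below are illustrative.)
```python
import torch
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelWithLMHead.from_pretrained("roberta-base")

text = f"I ate bacon and {tokenizer.mask_token} for breakfast"
input_ids = tokenizer.encode(text, return_tensors="pt")
mask_pos = (input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(input_ids)[0]
probs = logits[0, mask_pos].softmax(dim=0)

# score for my own guess (assumes " pancakes" is a single BPE token;
# note the leading space, which matters for RoBERTa's tokenizer)
guess_id = tokenizer.encode(" pancakes", add_special_tokens=False)[0]
print(probs[guess_id].item())
```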
Stack overflow [question](https://stackoverflow.com/questions/61168513/using-huggingface-fill-mask-pipeline-to-get-the-score-for-a-result-it-didnt-s) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3805/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3804/comments | https://api.github.com/repos/huggingface/transformers/issues/3804/events | https://github.com/huggingface/transformers/issues/3804 | 600,023,544 | MDU6SXNzdWU2MDAwMjM1NDQ= | 3,804 | Calculated offsets are wrong in squad.py | {
"login": "ericperfect",
"id": 52102789,
"node_id": "MDQ6VXNlcjUyMTAyNzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/52102789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ericperfect",
"html_url": "https://github.com/ericperfect",
"followers_url": "https://api.github.com/users/ericperfect/followers",
"following_url": "https://api.github.com/users/ericperfect/following{/other_user}",
"gists_url": "https://api.github.com/users/ericperfect/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ericperfect/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ericperfect/subscriptions",
"organizations_url": "https://api.github.com/users/ericperfect/orgs",
"repos_url": "https://api.github.com/users/ericperfect/repos",
"events_url": "https://api.github.com/users/ericperfect/events{/privacy}",
"received_events_url": "https://api.github.com/users/ericperfect/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | - When `tokenizer.padding_side == "left"` and `tokenizer.pad_token_id` is in `span["input_ids"]`, `doc_offset` should be `last_padding_id_position + 1` instead of 0.
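A hypothetical patch along those lines (variable names as in `squad.py`; the search for the last padding position is my own sketch, not a merged fix):
```python
if tokenizer.padding_side == "left" and tokenizer.pad_token_id in span["input_ids"]:
    # with left padding, the paragraph tokens start right after the padding run
    last_padding_id_position = (
        len(span["input_ids"]) - 1
        - span["input_ids"][::-1].index(tokenizer.pad_token_id)
    )
    doc_offset = last_padding_id_position + 1
else:
    doc_offset = len(truncated_query) + sequence_added_tokens
```
For reference, the current snippet from `squad.py`: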
```python
start_position = 0
end_position = 0
if is_training and not span_is_impossible:
    # For training, if our document chunk does not contain an annotation
    # we throw it out, since there is nothing to predict.
    doc_start = span["start"]
    doc_end = span["start"] + span["length"] - 1
    out_of_span = False
    if not (tok_start_position >= doc_start and tok_end_position <= doc_end):
        out_of_span = True
    if out_of_span:
        start_position = cls_index
        end_position = cls_index
        span_is_impossible = True
    else:
        if tokenizer.padding_side == "left":
            doc_offset = 0
        else:
            doc_offset = len(truncated_query) + sequence_added_tokens

        start_position = tok_start_position - doc_start + doc_offset
        end_position = tok_end_position - doc_start + doc_offset
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3804/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3804/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3803/comments | https://api.github.com/repos/huggingface/transformers/issues/3803/events | https://github.com/huggingface/transformers/pull/3803 | 600,015,094 | MDExOlB1bGxSZXF1ZXN0NDAzNTM2OTQ2 | 3,803 | Fix bug in max_seq_length for preprocessing in ner example | {
"login": "r-tinn",
"id": 28840171,
"node_id": "MDQ6VXNlcjI4ODQwMTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/28840171?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/r-tinn",
"html_url": "https://github.com/r-tinn",
"followers_url": "https://api.github.com/users/r-tinn/followers",
"following_url": "https://api.github.com/users/r-tinn/following{/other_user}",
"gists_url": "https://api.github.com/users/r-tinn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/r-tinn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/r-tinn/subscriptions",
"organizations_url": "https://api.github.com/users/r-tinn/orgs",
"repos_url": "https://api.github.com/users/r-tinn/repos",
"events_url": "https://api.github.com/users/r-tinn/events{/privacy}",
"received_events_url": "https://api.github.com/users/r-tinn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=h1) Report\n> Merging [#3803](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/01c37dcdb529bff38aadd51001cb5812e5fe9b21&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3803 +/- ##\n=======================================\n Coverage 78.26% 78.27% \n=======================================\n Files 106 106 \n Lines 17964 17964 \n=======================================\n+ Hits 14060 14061 +1 \n+ Misses 3904 3903 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3803/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.84% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=footer). Last update [01c37dc...8f77ccd](https://codecov.io/gh/huggingface/transformers/pull/3803?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"You might want to open this PR on https://github.com/stefan-it/fine-tuned-berts-seq instead, as the script is hosted there right now (cc @stefan-it)",
"> You might want to open this PR on https://github.com/stefan-it/fine-tuned-berts-seq instead, as the script is hosted there right now (cc @stefan-it)\r\n\r\nI thought it would be easier just to have the script in the example, but can close this pr and open it there if you prefer?",
"What do you think @stefan-it? Are you ok with us including the script here?",
"Hi, sorry for the late reply! I've fixed some errors in the script last week. Would be great if @r-tinn could check the latest Version! If it's working then you can of course integrate it into Transformers :)",
"Looks good to me, the problem seems to be fixed",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,594 | 1,594 | NONE | null | **Summary**
If you try to use a value for `max_seq_length` that is less than 128 in the NER example, the maximum sequence length is exceeded when predictions are made: a warning is logged ("Maximum sequence length exceeded: No prediction for..") and no predictions are produced for those tokens.
**Changes**
Two changes are made (a sketch of the resulting loop follows the list):
- In `preprocess.py`, `subword_len_counter` is set to `current_subwords_len` when a blank line is inserted to split up sequences that exceed the maximum sequence length.
- `tokenizer.num_added_tokens()` is subtracted from `max_len` to account for the additional tokens inserted by the BERT tokenizer.
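An illustrative sketch of the resulting preprocessing loop (the overall structure is assumed from the script, not the literal diff; `num_added_tokens` is the 2.x-era tokenizer method referenced above):
```python
import sys
from transformers import AutoTokenizer

dataset_path, model_name, max_len = sys.argv[1], sys.argv[2], int(sys.argv[3])
tokenizer = AutoTokenizer.from_pretrained(model_name)
max_len -= tokenizer.num_added_tokens()  # change 2: reserve room for [CLS]/[SEP]

subword_len_counter = 0
for line in open(dataset_path):
    line = line.rstrip()
    if not line:
        print(line)  # keep the sentence boundary
        subword_len_counter = 0
        continue
    token = line.split()[0]
    current_subwords_len = len(tokenizer.tokenize(token))
    if subword_len_counter + current_subwords_len > max_len:
        print("")  # start a new sequence before this token
        # change 1: carry this token's length over instead of resetting to 0
        subword_len_counter = current_subwords_len
        print(line)
        continue
    subword_len_counter += current_subwords_len
    print(line)
```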
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3803/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3803",
"html_url": "https://github.com/huggingface/transformers/pull/3803",
"diff_url": "https://github.com/huggingface/transformers/pull/3803.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3803.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3802/comments | https://api.github.com/repos/huggingface/transformers/issues/3802/events | https://github.com/huggingface/transformers/pull/3802 | 600,008,109 | MDExOlB1bGxSZXF1ZXN0NDAzNTMxNzMz | 3,802 | Fix examples/translation/t5 to use newstest2014 rather than newstest2013 | {
"login": "tholiao",
"id": 12995527,
"node_id": "MDQ6VXNlcjEyOTk1NTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/12995527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholiao",
"html_url": "https://github.com/tholiao",
"followers_url": "https://api.github.com/users/tholiao/followers",
"following_url": "https://api.github.com/users/tholiao/following{/other_user}",
"gists_url": "https://api.github.com/users/tholiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholiao/subscriptions",
"organizations_url": "https://api.github.com/users/tholiao/orgs",
"repos_url": "https://api.github.com/users/tholiao/repos",
"events_url": "https://api.github.com/users/tholiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @tholiao, \r\n\r\nThanks a lot for the PR :-) This looks good so far. Don't worry about running the script against the evaluation set - we can do this once this is merged! \r\n\r\nCan you make sure that the `run_examples_torch` pass? Don't worry too much about the `check_code_quality` test - there have been some issues with `isort` and I can manually fix that later. ",
"Should be fine now. ",
"Hi @tholiao, I went into your PR branch and checked the `make style` issues. It seems like you have other params set up for `black` than this lib. Also `isort` seems to have some problems with the imports here. \r\n\r\nI added the tiny changes, I suggested above and correctly formatted everything (black and isort) in this PR https://github.com/huggingface/transformers/pull/3817. The PR uses your commits, so you are an author of the commit :-) "
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | Resolves #3759, in addition to minor nits: fixed a bug with argparse arguments + more pythonic file handling + formatted with black and isort.
Please note that I have not yet run the evaluation script against the full newstest2014 test set, as it is rather compute intensive, so the disclaimer at the top of the README.md about the score gap between the pre-trained and fine-tuned models is only ostensibly accurate to the score gap on newstest2013, not newstest2014. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3802/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3802",
"html_url": "https://github.com/huggingface/transformers/pull/3802",
"diff_url": "https://github.com/huggingface/transformers/pull/3802.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3802.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3801/comments | https://api.github.com/repos/huggingface/transformers/issues/3801/events | https://github.com/huggingface/transformers/pull/3801 | 599,999,058 | MDExOlB1bGxSZXF1ZXN0NDAzNTI1MDY1 | 3,801 | Fix bug in GLUE example for models that do not require token_type_ids | {
"login": "douwekiela",
"id": 6024930,
"node_id": "MDQ6VXNlcjYwMjQ5MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6024930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/douwekiela",
"html_url": "https://github.com/douwekiela",
"followers_url": "https://api.github.com/users/douwekiela/followers",
"following_url": "https://api.github.com/users/douwekiela/following{/other_user}",
"gists_url": "https://api.github.com/users/douwekiela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/douwekiela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/douwekiela/subscriptions",
"organizations_url": "https://api.github.com/users/douwekiela/orgs",
"repos_url": "https://api.github.com/users/douwekiela/repos",
"events_url": "https://api.github.com/users/douwekiela/events{/privacy}",
"received_events_url": "https://api.github.com/users/douwekiela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @douwekiela, thanks for the PR. This should be fixed soon in a more stable way in the soon-to-be-merged #3800 \r\nLet us know if it works.",
"Should be fixed on master by #3800, please open a new issue if you encounter other problems."
] | 1,586 | 1,587 | 1,587 | NONE | null | If you try to run the `run_glue.py` example with e.g. roberta from a fresh install of the library, it fails with the following error:
```
Traceback (most recent call last):
File "examples/run_glue.py", line 564, in <module>
main()
File "examples/run_glue.py", line 512, in main
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
File "examples/run_glue.py", line 373, in load_and_cache_examples
all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
TypeError: an integer is required (got type NoneType)
```
To reproduce, run e.g.
`python examples/run_glue.py --model_name_or_path roberta-base --task_name SST-2 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --data_dir ./glue_data/SST-2/ --output_dir ./blah --model_type roberta --do_train --do_eval --max_seq_length 128 --learning_rate 2e-5 --num_train_epochs 3.0`
The reason is that roberta does not have segment ids, so `token_type_ids` is set to null in the data loader, causing `torch.tensor` to raise the TypeError above. There's probably a more elegant long-term solution for this, but it's easy to fix by just setting it to 0 instead of null for those models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3801/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3801",
"html_url": "https://github.com/huggingface/transformers/pull/3801",
"diff_url": "https://github.com/huggingface/transformers/pull/3801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3801.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3800/comments | https://api.github.com/repos/huggingface/transformers/issues/3800/events | https://github.com/huggingface/transformers/pull/3800 | 599,957,708 | MDExOlB1bGxSZXF1ZXN0NDAzNDkzODYw | 3,800 | Trainer | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | true | null | [] | [
"So, thinking about it more, I've re-run a MNLI training with the previous default max_seq_length of 128 (instead of the \"new\" default of tokenizer.max_len – _soon to be renamed_), and training is naturally way faster with the smaller sequence length (light blue line below in relative time):\r\n\r\n<img width=\"1473\" alt=\"Screenshot 2020-04-15 at 18 47 23\" src=\"https://user-images.githubusercontent.com/326577/79398369-8ee88080-7f4e-11ea-843f-84b59c8f3ad5.png\">\r\n\r\nSo I'm thinking of reverting the default to 128. Does it make sense? Are people familiar with GLUE mostly training models on shorter sequence length? (@VictorSanh @srush @thomwolf @LysandreJik)\r\n\r\nOr do they debug their trainings with short seq lengths, and then train with the model's max length?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=h1) Report\n> Merging [#3800](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b7cf9f43d259fbad45d899c1769110aafc9f410a&el=desc) will **decrease** coverage by `0.16%`.\n> The diff coverage is `63.48%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3800 +/- ##\n==========================================\n- Coverage 78.26% 78.10% -0.17% \n==========================================\n Files 106 111 +5 \n Lines 17928 18459 +531 \n==========================================\n+ Hits 14032 14417 +385 \n- Misses 3896 4042 +146 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.59% <43.59%> (ø)` | |\n| [src/transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `71.90% <52.63%> (-3.81%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.86% <78.26%> (+0.86%)` | :arrow_up: |\n| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `27.15% <80.00%> (+2.46%)` | :arrow_up: |\n| [src/transformers/training\\_args.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `87.69% <80.00%> (-12.31%)` | :arrow_down: |\n| [src/transformers/data/data\\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.23% <89.23%> (ø)` | |\n| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `90.19% <90.19%> (ø)` | |\n| [...rc/transformers/data/datasets/language\\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `91.83% <91.83%> (ø)` | |\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.01% <100.00%> (+0.02%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (ø)` | |\n| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/3800/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=footer). 
Last update [b7cf9f4...d1db901](https://codecov.io/gh/huggingface/transformers/pull/3800?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> So, thinking about it more, I've re-run a MNLI training with the previous default max_seq_length of 128 (instead of the \"new\" default of tokenizer.max_len – _soon to be renamed_), and training is naturally way faster with the smaller sequence length (light blue line below in relative time):\r\n> \r\n> So I'm thinking of reverting the default to 128. Does it make sense? Are people familiar with GLUE mostly training models on shorter sequence length? (@VictorSanh @srush @thomwolf @LysandreJik)\r\n> \r\n> Or do they debug their trainings with short seq lengths, and then train with the model's max length?\r\n\r\nI can answer about MNLI dataset. The sequence pairs in the dataset are quite short in MNLI both in training, dev and test with similar length distributions. The vast majority of sequences are under 128 tokens. So 128 is fine for MNLI. \r\nFor QNLI, 256 is more suitable.",
"The revamps look awesome! Really looking forward to the merge and can't wait to try out the Trainer modules (nothing against argparse :joy:)",
"Ok, updated the PR summary above with regression tests on `run_ner.py` and `run_language_modeling.py` that show that we reproduce the documented results.\r\n\r\nSo this should be ready to merge! 🎉 "
] | 1,586 | 1,587 | 1,587 | MEMBER | resolved | This is a bottom-up refactor of the example scripts (currently `run_glue.py`, `run_language_modeling.py`, `run_ner.py` and `run_multiple_choice.py`) into a Trainer class and associated utilities, as described in [trainer-proposal](https://github.com/julien-c/trainer-proposal).
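To make the target API concrete, here is a minimal sketch of how a GLUE fine-tuning run is meant to look with the new abstractions (`Trainer`, `TrainingArguments` and `GlueDataset` appear in this PR's diff; `GlueDataTrainingArguments` and the exact signatures below are assumptions and may still shift before merge):
```python
# Minimal sketch only; argument names are assumptions based on the proposal.
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    GlueDataset,
    GlueDataTrainingArguments,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-cased", num_labels=3)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

# Wrap the raw MNLI files into a torch Dataset of pre-tokenized features
data_args = GlueDataTrainingArguments(task_name="mnli", data_dir="./data/glue_data/MNLI")
train_dataset = GlueDataset(data_args, tokenizer=tokenizer)

# The CLI flags used in the runs below map onto TrainingArguments fields
training_args = TrainingArguments(
    output_dir="./models/mnli_sketch",
    num_train_epochs=1,
    per_gpu_train_batch_size=32,
)

Trainer(model=model, args=training_args, train_dataset=train_dataset).train()
```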
# Regression test and benchmark on `run_glue.py`
📊 All training metrics are in [**this TensorBoard**](https://tensorboard.dev/experiment/7LuNpqw3Q8y147WxfyOqdA/#scalars&_smoothingWeight=0&runSelectionState=eyJkYXRhcGFyYWxsZWxfMl9ncHUiOnRydWUsImRkcF8yX2dwdSI6dHJ1ZX0%3D):
<a href="https://tensorboard.dev/experiment/7LuNpqw3Q8y147WxfyOqdA/#scalars&_smoothingWeight=0&runSelectionState=eyJkYXRhcGFyYWxsZWxfMl9ncHUiOnRydWUsImRkcF8yX2dwdSI6dHJ1ZX0%3D"><img width="1559" alt="Screenshot 2020-04-14 at 21 23 26" src="https://user-images.githubusercontent.com/326577/79289007-3655ac80-7e96-11ea-8737-3db1839725c2.png"></a>
The Trainer class supports PyTorch's backends for parallel/distributed training, so we performed the following experiment:
- Experiment: MNLI
- Train set: 100_000 first samples
- Dev set: Full (~9_800 samples)
- Backends: DataParallel, DistributedDataParallel, single-GPU, CPU
You can compare speed of convergence by clicking on "Relative" in the TensorBoard and comparing loss and accuracy curves:
<img width="360" alt="Screenshot 2020-04-14 at 21 40 04" src="https://user-images.githubusercontent.com/326577/79289930-baa92f00-7e98-11ea-900f-106f0ef22681.png">
## Results
### DataParallel
```
--model_name_or_path distilbert-base-cased
--task_name mnli
--data_dir ./data/glue_data/MNLI
--output_dir ./models/dataparallel_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--num_train_epochs 1
--per_gpu_train_batch_size 32
--per_gpu_eval_batch_size 128
--logging_steps 100
--logging_dir ./runs/dataparallel_2_gpu
--logging_first_step
--save_steps 1000
--evaluate_during_training
```
1 Epoch = 21 mins
```
04/14/2020 23:16:34 - INFO - __main__ - ***** Eval results mnli *****
04/14/2020 23:16:34 - INFO - __main__ - acc = 0.7406011207335711
04/14/2020 23:16:34 - INFO - __main__ - loss = 0.6281169515389663
04/14/2020 23:17:02 - INFO - __main__ - ***** Eval results mnli-mm *****
04/14/2020 23:17:02 - INFO - __main__ - acc = 0.7507119609438568
04/14/2020 23:17:02 - INFO - __main__ - loss = 0.6062953961201203
```
### DistributedDataParallel
```
python -m torch.distributed.launch --nproc_per_node 2 ./examples/run_glue.py
--model_name_or_path distilbert-base-cased
--task_name mnli
--data_dir ./data/glue_data/MNLI
--output_dir ./models/ddp_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--num_train_epochs 1
--per_gpu_train_batch_size 32
--per_gpu_eval_batch_size 128
--logging_steps 100
--logging_dir ./runs/ddp_2_gpu
--logging_first_step
--save_steps 1000
--evaluate_during_training
```
Speed: about the same as DataParallel on this workload and machine.
Pre-existing issue (to fix in a future PR): when using DDP, the eval is not GPU-parallelized; one possible direction is sketched below.
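A hedged sketch of that direction (an assumption about a future fix, not something this PR ships): shard the eval set across ranks with a `DistributedSampler`, then gather predictions before computing metrics.
```python
# Sketch only: the eval loader each rank would use under DDP.
from torch.utils.data import DataLoader, DistributedSampler

def build_sharded_eval_loader(eval_dataset, batch_size):
    # Each rank sees a disjoint, non-shuffled slice of the eval set.
    sampler = DistributedSampler(eval_dataset, shuffle=False)
    return DataLoader(eval_dataset, sampler=sampler, batch_size=batch_size)

# Per-rank logits/labels would still need a torch.distributed.all_gather
# before computing global metrics such as accuracy.
```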
### single-GPU
`CUDA_VISIBLE_DEVICES=0 python ...`
<details>
<pre>
04/15/2020 00:52:24 - INFO - __main__ - ***** Eval results mnli *****
04/15/2020 00:52:24 - INFO - __main__ - acc = 0.7383596535914416
04/15/2020 00:52:24 - INFO - __main__ - loss = 0.631212914144838
04/15/2020 00:53:16 - INFO - __main__ - ***** Eval results mnli-mm *****
04/15/2020 00:53:16 - INFO - __main__ - acc = 0.7534580960130187
04/15/2020 00:53:16 - INFO - __main__ - loss = 0.6002480050960144
</pre>
</details>
Speed: about twice as slow as the two-GPU runs.
### CPU
`--no_cuda`
Speed: too slow to benchmark
# Regression test on `run_ner.py`
The arguments below:
```
--model_name_or_path bert-base-multilingual-cased
--data_dir ./data/germeval
--labels ./data/germeval/labels.txt
--max_seq_length 128
--output_dir ./models/ner_dp_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--do_predict
--num_train_epochs 3
--per_gpu_train_batch_size 32
--logging_dir ./runs/ner_dp_2_gpu
--logging_steps 100
--evaluate_during_training
--save_steps 750
--seed 1
```
yield the following results, consistent with the ones in the README:
```
04/17/2020 16:12:30 - INFO - __main__ - f1 = 0.8634538152610443
04/17/2020 16:12:30 - INFO - __main__ - loss = 0.07145964812514359
04/17/2020 16:12:30 - INFO - __main__ - precision = 0.8434379457917262
04/17/2020 16:12:30 - INFO - __main__ - recall = 0.8844427823485415
```
Shape of F1 and eval loss:
<img width="951" alt="Screenshot 2020-04-17 at 16 38 50" src="https://user-images.githubusercontent.com/326577/79623824-4c5caa80-80ec-11ea-866c-411600b62bb1.png">
# Regression test on `run_language_modeling.py`
Reproducing the training described in the [how to train blogpost](https://huggingface.co/blog/how-to-train):
```
--train_data_file ./data/oscar.eo.txt
--eval_data_file ./data/oscar.eo.eval.txt
--evaluate_during_training
--output_dir ./models/EsperBERTo-small-v1
--overwrite_output_dir
--mlm
--config_name ./models/EsperBERTo-small
--tokenizer_name ./models/EsperBERTo-small
--do_train
--do_eval
--line_by_line
--logging_first_step
--logging_steps 10
--logging_dir ./runs/EsperBERTo
--num_train_epochs 1
--save_total_limit 2
--save_steps 2000
--per_gpu_train_batch_size 16
--seed 42
```
# Regression test on `run_multiple_choice.py`
```
--model_name_or_path distilroberta-base
--task swag
--data_dir ./data/swag
--output_dir ./models/swag_dp_2_gpu
--overwrite_output_dir
--do_train
--do_eval
--per_gpu_train_batch_size 32
--per_gpu_eval_batch_size 512
--logging_dir ./runs/swag_dp_2_gpu
--logging_steps 100
--logging_first_step
--evaluate_during_training
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3800/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 8,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3800/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3800",
"html_url": "https://github.com/huggingface/transformers/pull/3800",
"diff_url": "https://github.com/huggingface/transformers/pull/3800.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3800.patch",
"merged_at": 1587514316000
} |
https://api.github.com/repos/huggingface/transformers/issues/3799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3799/comments | https://api.github.com/repos/huggingface/transformers/issues/3799/events | https://github.com/huggingface/transformers/issues/3799 | 599,934,554 | MDU6SXNzdWU1OTk5MzQ1NTQ= | 3,799 | Clarification about GPT2LMHeadModel lm_head weights | {
"login": "thesamuel",
"id": 6275391,
"node_id": "MDQ6VXNlcjYyNzUzOTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6275391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thesamuel",
"html_url": "https://github.com/thesamuel",
"followers_url": "https://api.github.com/users/thesamuel/followers",
"following_url": "https://api.github.com/users/thesamuel/following{/other_user}",
"gists_url": "https://api.github.com/users/thesamuel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thesamuel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thesamuel/subscriptions",
"organizations_url": "https://api.github.com/users/thesamuel/orgs",
"repos_url": "https://api.github.com/users/thesamuel/repos",
"events_url": "https://api.github.com/users/thesamuel/events{/privacy}",
"received_events_url": "https://api.github.com/users/thesamuel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"yes exactly - this should not be a problem :-) "
] | 1,586 | 1,587 | 1,587 | NONE | null | # ❓ Questions & Help
Each time the GPT2LMHeadModel is loaded from pretrained weights, the following is logged:
```
Weights of GPT2LMHeadModel not initialized from pretrained model: ['lm_head.weight']
```
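As context for the question below, a minimal sketch (assuming the standard `transformers`/`torch` APIs) that checks whether the LM head shares storage with the input embeddings:
```python
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
# With tied weights, both parameters point at the same underlying storage,
# so the 'lm_head.weight' missing from the checkpoint is filled by the tie.
print(model.lm_head.weight.data_ptr() == model.transformer.wte.weight.data_ptr())  # expected: True
```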
Just to clarify, is this OK because we tie the output (`lm_head`) weights to the input weights? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3799/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3798/comments | https://api.github.com/repos/huggingface/transformers/issues/3798/events | https://github.com/huggingface/transformers/issues/3798 | 599,894,141 | MDU6SXNzdWU1OTk4OTQxNDE= | 3,798 | Error when using run_generation.py to generate texts with long prompts, specifically for models -XLM and Openai-GPT | {
"login": "AdaUchendu",
"id": 32556160,
"node_id": "MDQ6VXNlcjMyNTU2MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/32556160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdaUchendu",
"html_url": "https://github.com/AdaUchendu",
"followers_url": "https://api.github.com/users/AdaUchendu/followers",
"following_url": "https://api.github.com/users/AdaUchendu/following{/other_user}",
"gists_url": "https://api.github.com/users/AdaUchendu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdaUchendu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdaUchendu/subscriptions",
"organizations_url": "https://api.github.com/users/AdaUchendu/orgs",
"repos_url": "https://api.github.com/users/AdaUchendu/repos",
"events_url": "https://api.github.com/users/AdaUchendu/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdaUchendu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: XLM, OpenAI GPT
Language I am using the model on: English
The problem arises when using:
* [ ] the models to generate text with long prompts
The task I am working on is:
* [ ] text generation
## To reproduce
Steps to reproduce the behavior:
1. Run:
```
cd transformers/
python examples/run_generation.py --model_type xlm --model_name_or_path xlm-mlm-en-2048 \
  --prompt "China wants to take a victory lap over its handling of the coronavirus outbreak" \
  --repetition 2.2 --k 5 \
  --length 500
```
Error: `RuntimeError: The size of tensor a (513) must match the size of tensor b (512) at non-singleton dimension 3`.
This leads to the next error: `RuntimeError: CUDA error: device-side assert triggered`.
## Expected behavior
The expected behavior is a generated piece of text of about 500 words.
## Environment info
I think there is a bug somewhere in the input embeddings, input IDs, or vocabulary, because indexing seems to go out of range for certain prompts. This may suggest the vocab list is limited, though I'm not sure. A minimal check of the length budget is sketched below.
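One plausible diagnosis (my assumption, not confirmed in this thread: the prompt tokens plus the requested 500 generated tokens exceed XLM's 512 position embeddings, which would match the 513-vs-512 mismatch). A minimal check:
```python
# Hedged diagnostic sketch: compare the length budget against the model's limit.
from transformers import XLMConfig, XLMTokenizer

config = XLMConfig.from_pretrained("xlm-mlm-en-2048")
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

prompt = "China wants to take a victory lap over its handling of the coronavirus outbreak"
prompt_len = len(tokenizer.encode(prompt))
# Generation must keep prompt_len + generated length within this limit.
print(config.max_position_embeddings, prompt_len)
```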
- `transformers` version:
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): no
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3798/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3798/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3797/comments | https://api.github.com/repos/huggingface/transformers/issues/3797/events | https://github.com/huggingface/transformers/pull/3797 | 599,850,719 | MDExOlB1bGxSZXF1ZXN0NDAzNDA3MzQ5 | 3,797 | [Config, Serialization] more readable config serialization | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would prefer **v2**) because the parameters saved in each of the model configuration files are important for the models behavior and would be nice to see in the config (also to compare to other models' configs)",
"Need to look into it more but on principle this is very nice. (If Configs were `dataclass`-backed it would be even cleaner to implement – might be too big of a change though)\r\n\r\nI agree that v2 is probably better, but will think about it some more.\r\n\r\nFor config hosted on our S3, what should we do? Update the official ones, but not the user-uploaded ones? Or just do all of them? :)",
"I would update all of them by downloading, save_pretrained() and upload. I already a very similar script I would only need to adapt a tiny bit",
"Does this impact the process for changing the default config?",
"Sounds good!",
"Ok, **v2** is now implemented. I agree with @julien-c that the `save_pretrained() `method should be kept as clean as possible and I think we can keep backward compatibility (for very special edge cases) by allowing a boolean argument to the `to_json_file()` method. ",
"Awesome, merging this"
] | 1,586 | 1,587 | 1,587 | MEMBER | null | Given the discussion in PR #3433, we want to make the serialized model config more readable.
### Problem:
E.g. `bert-base-cased` has the following config on S3:
```
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 28996
}
```
But when saved, all default params are saved as well (which is unnecessary). While the config above is readable, once it's saved it looks like this:
```
{
"_num_labels": 2,
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "bert",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"pruned_heads": {},
"repetition_penalty": 1.0,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 28996
}
```
### Solution:
We should only save the difference between the actual config and either **v1**) the model class's default config or **v2**) the base `PretrainedConfig()` (which contains most of the unnecessary default params).
This PR implements either **v1**) or **v2**) - up for discussion!
**v1**) for `bert-base-cased` would look like this:
```
{
"architectures": [
"BertForMaskedLM"
],
"vocab_size": 28996
}
```
**v2**) for `bert-base-cased` would look like this:
```
{
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 28996
}
```
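To make the **v2** logic concrete, here is a rough sketch of the diff computation (the function name and details are illustrative, not necessarily this PR's exact implementation):
```python
# Sketch: keep only keys whose values differ from the generic PretrainedConfig defaults.
from transformers import PretrainedConfig

def to_diff_dict(config):
    full = config.to_dict()
    defaults = PretrainedConfig().to_dict()
    return {k: v for k, v in full.items() if k not in defaults or v != defaults[k]}

# Usage sketch:
# cfg = BertConfig.from_pretrained("bert-base-cased")
# json.dumps(to_diff_dict(cfg), indent=2, sort_keys=True)
```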
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3797/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3797",
"html_url": "https://github.com/huggingface/transformers/pull/3797",
"diff_url": "https://github.com/huggingface/transformers/pull/3797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3797.patch",
"merged_at": 1587168439000
} |
https://api.github.com/repos/huggingface/transformers/issues/3796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3796/comments | https://api.github.com/repos/huggingface/transformers/issues/3796/events | https://github.com/huggingface/transformers/issues/3796 | 599,849,691 | MDU6SXNzdWU1OTk4NDk2OTE= | 3,796 | Calculated offsets are wrong | {
"login": "dirkgr",
"id": 920638,
"node_id": "MDQ6VXNlcjkyMDYzOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dirkgr",
"html_url": "https://github.com/dirkgr",
"followers_url": "https://api.github.com/users/dirkgr/followers",
"following_url": "https://api.github.com/users/dirkgr/following{/other_user}",
"gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions",
"organizations_url": "https://api.github.com/users/dirkgr/orgs",
"repos_url": "https://api.github.com/users/dirkgr/repos",
"events_url": "https://api.github.com/users/dirkgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dirkgr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's probably normal since Roberta's tokenizers is a byte-level tokenizer which split words at byte-level (ie. possibly smaller than the character unit).\r\n\r\nCc @n1t0 ",
"There is a bug indeed, the offsets shouldn't be shifted after the `<mask>` token. I should be able to fix this.\r\nI'm not sure I'll be able to have the right offsets for the `<mask>` token though as this one is tricky.",
"This is now fixed on the latest `master`, with the output being\r\n```\r\n['<s>', 'ĠA', ',', '<mask>', 'ĠAllen', 'N', 'LP', 'Ġsentence', '.', '</s>']\r\n['', 'A', ',', '<mask>', 'Allen', 'N', 'LP', 'sentence', '.', ''] \r\n```\r\nThe spaces are not part of the offsets because the `trim_offsets` option is `True` by default."
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | This is on the latest `master` (from 2020-04-13):
```Python
import transformers

# example string containing a mask token
text = 'A, <mask> AllenNLP sentence.'
t = transformers.AutoTokenizer.from_pretrained("roberta-base", use_fast=True, add_special_tokens=True)
x2 = t.encode_plus(text, return_offsets_mapping=True)
# the tokens the fast tokenizer produced
print(repr(t.convert_ids_to_tokens(x2['input_ids'])))
# the substrings of the original text that the returned offsets point at
print(repr([text[start:end] for start, end in x2['offset_mapping']]))
```
This prints (with some manual alignment):
```
['<s>', 'ĠA', ',', '<mask>', 'ĠAllen', 'N', 'LP', 'Ġsentence', '.', '</s>']
['', 'A', ',', ', <mask>', ' Alle', 'n', 'NL', 'P sentenc', 'e', '']
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3796/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3795/comments | https://api.github.com/repos/huggingface/transformers/issues/3795/events | https://github.com/huggingface/transformers/pull/3795 | 599,827,938 | MDExOlB1bGxSZXF1ZXN0NDAzMzg5MDQ0 | 3,795 | [Pipelines] Clean pipelines test and remove unnecessary code | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,587 | 1,587 | MEMBER | null | This PR cleans up pipelines a bit:
1) Fixes a non-working pipeline creation test
2) Removes unnecessary code (due to PR #3116) in pipelines, as discussed with @thomwolf in PR #3413
Note: Tested on QA pipelines slow tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3795/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3795",
"html_url": "https://github.com/huggingface/transformers/pull/3795",
"diff_url": "https://github.com/huggingface/transformers/pull/3795.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3795.patch",
"merged_at": 1587046894000
} |
https://api.github.com/repos/huggingface/transformers/issues/3794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3794/comments | https://api.github.com/repos/huggingface/transformers/issues/3794/events | https://github.com/huggingface/transformers/issues/3794 | 599,796,794 | MDU6SXNzdWU1OTk3OTY3OTQ= | 3,794 | Getting large alloc error while evaluating bert-base on NER task | {
"login": "Sumegh-git",
"id": 37850881,
"node_id": "MDQ6VXNlcjM3ODUwODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/37850881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sumegh-git",
"html_url": "https://github.com/Sumegh-git",
"followers_url": "https://api.github.com/users/Sumegh-git/followers",
"following_url": "https://api.github.com/users/Sumegh-git/following{/other_user}",
"gists_url": "https://api.github.com/users/Sumegh-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sumegh-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sumegh-git/subscriptions",
"organizations_url": "https://api.github.com/users/Sumegh-git/orgs",
"repos_url": "https://api.github.com/users/Sumegh-git/repos",
"events_url": "https://api.github.com/users/Sumegh-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sumegh-git/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I'm having the same issue. Training is fine but get this error when evaluate."
] | 1,586 | 1,605 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: bert-base-multilingual-cased
Language I am using the model on: English
The problem arises when using:
Evaluation
The task I am working on is:
My own custom dataset, in the same format as the GermEval task.
Running on **Colab**.
I believe this is due to a memory error. But why should there be a **memory error during evaluation when things were fine during training**?
max_seq_length during training was 128 and the batch size was 8.
04/14/2020 19:06:13 - INFO - transformers.modeling_utils - loading weights file germeval-model/checkpoint-21750/pytorch_model.bin
04/14/2020 19:07:03 - INFO - __main__ - Loading features from cached file ./cached_dev_bert-base-multilingual-cased_128
04/14/2020 19:07:05 - INFO - __main__ - ***** Running evaluation *****
04/14/2020 19:07:05 - INFO - __main__ - Num examples = 22026
04/14/2020 19:07:05 - INFO - __main__ - Batch size = 2
Evaluating: 0% 3/11013 [00:01<1:07:38, 2.71it/s]tcmalloc: large alloc 1110605824 bytes == 0x3e706000 @ 0x7f207d0051e7 0x7f207a3995e1 0x7f207a3fde88 0x7f207a3fdfa3 0x7f207a49c098 0x7f207a49c8f4 0x7f207a49ca42 0x5678b3 0x5a067e 0x7f207a3e970d 0x50a8af 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50d390 0x508245 0x58958c 0x5a067e 0x7f207a3e970d 0x50a8af 0x50c5b9 0x508245 0x50a080 0x50aa7d 0x50d390 0x508245 0x50a080 0x50aa7d 0x50d390 0x508245
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3794/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3794/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3793/comments | https://api.github.com/repos/huggingface/transformers/issues/3793/events | https://github.com/huggingface/transformers/pull/3793 | 599,782,461 | MDExOlB1bGxSZXF1ZXN0NDAzMzUyMzk1 | 3,793 | [Bert] remove hard-coded pad token id | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM"
] | 1,586 | 1,587 | 1,587 | MEMBER | null | Tiny change to remove the hard-coded `pad_token_id` in BERT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3793/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3793",
"html_url": "https://github.com/huggingface/transformers/pull/3793",
"diff_url": "https://github.com/huggingface/transformers/pull/3793.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3793.patch",
"merged_at": 1587045538000
} |
https://api.github.com/repos/huggingface/transformers/issues/3792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3792/comments | https://api.github.com/repos/huggingface/transformers/issues/3792/events | https://github.com/huggingface/transformers/issues/3792 | 599,729,141 | MDU6SXNzdWU1OTk3MjkxNDE= | 3,792 | Using run_glue.py on external datasets for fine-tuning a RoBERTa classification model --> Is this possible? | {
"login": "seyonechithrananda",
"id": 46096704,
"node_id": "MDQ6VXNlcjQ2MDk2NzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46096704?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seyonechithrananda",
"html_url": "https://github.com/seyonechithrananda",
"followers_url": "https://api.github.com/users/seyonechithrananda/followers",
"following_url": "https://api.github.com/users/seyonechithrananda/following{/other_user}",
"gists_url": "https://api.github.com/users/seyonechithrananda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seyonechithrananda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seyonechithrananda/subscriptions",
"organizations_url": "https://api.github.com/users/seyonechithrananda/orgs",
"repos_url": "https://api.github.com/users/seyonechithrananda/repos",
"events_url": "https://api.github.com/users/seyonechithrananda/events{/privacy}",
"received_events_url": "https://api.github.com/users/seyonechithrananda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @seyonechithrananda, do you have a `torch.data.Dataset` for your classification dataset?\r\n\r\nIf you do, this will be pretty easy following #3800 (i.e. the amount of code required should be pretty minimal)",
"Thanks for the response @julien-c! I originally grab a list of SMILES sequences and their corresponding labels into two separate lists, before tokenizing the sequences and converting them into a tensor (following the `RobertaForSequenceClassification` docs). Will look into `torch.data.Dataset`. \r\n\r\nLink to code I was using previously (I originally tried to fine-tune for classification without `run_glue.py`): https://t.co/lqVqh3L1oA?amp=1\r\n\r\n> Hi @seyonechithrananda, do you have a `torch.data.Dataset` for your classification dataset?\r\n> \r\n> If you do, this will be pretty easy following #3800 (i.e. the amount of code required should be pretty minimal)\r\n\r\n",
"Hi @julien-c! Followed your tips and created a Dataset class following a similar tutorial. However I ran into an issue with CUDA in the training pipeline:\r\n\r\n```\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-15-ea2a288fbd03> in <module>()\r\n 8 if torch.cuda.is_available():\r\n 9 sent = sent.cuda()\r\n---> 10 label = labels.cuda()\r\n 11 output = model.forward(sent)[0]\r\n 12 _, predicted = torch.max(output, 1)\r\n\r\nAttributeError: 'list' object has no attribute 'cuda'\r\n\r\n```\r\n\r\nDo you know why this issue is occurring? I use a pre-trained RoBERTA model (trained on MLM for a diff. dataset). [Here](https://colab.research.google.com/drive/1Q9pvFQoEe_4NIO853-tDy0ERZ3pyNzwT) is the notebook. Also, can we utilize the config from a MLM RoBERTa model for sequence classification or should it be the `roberta-base` config?\r\n\r\nThanks for the help! \r\n\r\n",
"Managed to create a variant of `run_glue.py` and get it working. Thanks for the help1"
] | 1,586 | 1,588 | 1,588 | CONTRIBUTOR | null | # ❓ Questions & Help
## Details
I recently uploaded the model weights for RoBERTa trained on a chemical benchmark dataset called ZINC15K for masked-language modelling of individual atoms in each molecule. The model performed pretty decently, so I thought it would be interesting to apply it to a downstream task of toxicity prediction on the Tox21 dataset (balanced dataset I created here: https://github.com/seyonechithrananda/bert-loves-chemistry/blob/master/tox21_balanced_revised.csv)
^The link above is in the repo with all the HuggingFace notebooks relating to this task I have created for more details.
As you can see in the CSV above, the 'SR-p53' column holds the labels for the dataset, whereas the 'SMILES' column holds the text representation of each molecule. Is there a way that `run_glue.py` can be repurposed alongside `RobertaForSequenceClassification` to fine-tune a pre-trained model for this classification task? And if so, could anyone give me a couple of pointers on where to start? (A rough sketch of the dataset side follows.) I'm relatively new to HuggingFace (more familiar with CV + graph NNs for chemistry) but have been enjoying using it so far!
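For reference, a minimal sketch of what the dataset side could look like (the 'SMILES' and 'SR-p53' column names follow the CSV described above; the class name, tokenizer settings and max length are assumptions):
```python
import pandas as pd
import torch
from torch.utils.data import Dataset
from transformers import AutoTokenizer

class Tox21Dataset(Dataset):
    """Wraps the balanced Tox21 CSV: 'SMILES' as text, 'SR-p53' as the label."""

    def __init__(self, csv_path, tokenizer, max_length=128):  # max_length is an assumption
        df = pd.read_csv(csv_path)
        self.texts = df["SMILES"].tolist()
        self.labels = df["SR-p53"].astype(int).tolist()
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.texts)

    def __getitem__(self, idx):
        # Tokenize one SMILES string and pad/truncate to a fixed length
        enc = self.tokenizer.encode_plus(
            self.texts[idx],
            max_length=self.max_length,
            pad_to_max_length=True,
            return_tensors="pt",
        )
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# Usage sketch:
# tokenizer = AutoTokenizer.from_pretrained("seyonec/ChemBERTa-zinc-base-v1")
# dataset = Tox21Dataset("tox21_balanced_revised.csv", tokenizer)
```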
Link to model weights (in Huggingface hub): https://huggingface.co/seyonec/ChemBERTa-zinc-base-v1
Thanks for the help!
*Given that this was a more library-centric question and not a bug, I felt it would be better to post here than on SO.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3792/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3791/comments | https://api.github.com/repos/huggingface/transformers/issues/3791/events | https://github.com/huggingface/transformers/pull/3791 | 599,689,941 | MDExOlB1bGxSZXF1ZXN0NDAzMjc5ODY2 | 3,791 | XLM tokenizer should encode with bos token | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,587 | 1,587 | MEMBER | null | XLM tokenizer should behave according to the documentation
closes https://github.com/huggingface/transformers/issues/3788 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3791",
"html_url": "https://github.com/huggingface/transformers/pull/3791",
"diff_url": "https://github.com/huggingface/transformers/pull/3791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3791.patch",
"merged_at": 1587137335000
} |
https://api.github.com/repos/huggingface/transformers/issues/3790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3790/comments | https://api.github.com/repos/huggingface/transformers/issues/3790/events | https://github.com/huggingface/transformers/pull/3790 | 599,649,255 | MDExOlB1bGxSZXF1ZXN0NDAzMjQ2Mzgw | 3,790 | Fix token_type_id in BERT question-answering example | {
"login": "siboehm",
"id": 14908678,
"node_id": "MDQ6VXNlcjE0OTA4Njc4",
"avatar_url": "https://avatars.githubusercontent.com/u/14908678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siboehm",
"html_url": "https://github.com/siboehm",
"followers_url": "https://api.github.com/users/siboehm/followers",
"following_url": "https://api.github.com/users/siboehm/following{/other_user}",
"gists_url": "https://api.github.com/users/siboehm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siboehm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siboehm/subscriptions",
"organizations_url": "https://api.github.com/users/siboehm/orgs",
"repos_url": "https://api.github.com/users/siboehm/repos",
"events_url": "https://api.github.com/users/siboehm/events{/privacy}",
"received_events_url": "https://api.github.com/users/siboehm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | `token_type_ids` weren't being set correctly in the code examples for BERT question answering. They are turned into the segment embeddings, so they need to indicate whether each token belongs to sequence 0 or sequence 1.
For this small example the model returns the correct answer even though the parameter was incorrectly set, but for bigger paragraphs that is not the case.
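A minimal sketch of the corrected pattern (the checkpoint and texts here are illustrative, not necessarily the ones in the docs):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
question = "Who wrote Hamlet?"
context = "Hamlet is a tragedy written by William Shakespeare."

# encode_plus builds [CLS] question [SEP] context [SEP] and returns token_type_ids
# that are 0 for the question segment and 1 for the context segment.
inputs = tokenizer.encode_plus(question, context, return_tensors="pt")
print(inputs["token_type_ids"])
```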
I changed the code to use `encode_plus`, which returns the correct `token_type_ids`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3790/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3790",
"html_url": "https://github.com/huggingface/transformers/pull/3790",
"diff_url": "https://github.com/huggingface/transformers/pull/3790.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3790.patch",
"merged_at": 1587136452000
} |
https://api.github.com/repos/huggingface/transformers/issues/3789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3789/comments | https://api.github.com/repos/huggingface/transformers/issues/3789/events | https://github.com/huggingface/transformers/issues/3789 | 599,584,722 | MDU6SXNzdWU1OTk1ODQ3MjI= | 3,789 | Is there a classical transformer model in the project? | {
"login": "980202006",
"id": 24452502,
"node_id": "MDQ6VXNlcjI0NDUyNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/24452502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/980202006",
"html_url": "https://github.com/980202006",
"followers_url": "https://api.github.com/users/980202006/followers",
"following_url": "https://api.github.com/users/980202006/following{/other_user}",
"gists_url": "https://api.github.com/users/980202006/gists{/gist_id}",
"starred_url": "https://api.github.com/users/980202006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/980202006/subscriptions",
"organizations_url": "https://api.github.com/users/980202006/orgs",
"repos_url": "https://api.github.com/users/980202006/repos",
"events_url": "https://api.github.com/users/980202006/events{/privacy}",
"received_events_url": "https://api.github.com/users/980202006/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You might find this useful http://nlp.seas.harvard.edu/2018/04/03/attention.html\r\nI think PyTorch already have it implemented in their library\r\nhttps://pytorch.org/docs/stable/nn.html?highlight=transformer#torch.nn.Transformer",
"thank you,It is a answer for me."
] | 1,586 | 1,586 | 1,586 | NONE | null | Hi,
I am working in a domain that needs the original Transformer from "Attention Is All You Need". Is there an implementation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3789/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3788/comments | https://api.github.com/repos/huggingface/transformers/issues/3788/events | https://github.com/huggingface/transformers/issues/3788 | 599,548,172 | MDU6SXNzdWU1OTk1NDgxNzI= | 3,788 | Inconsistencies and possible bugs in different tokenizers | {
"login": "alanakbik",
"id": 18665324,
"node_id": "MDQ6VXNlcjE4NjY1MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/18665324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alanakbik",
"html_url": "https://github.com/alanakbik",
"followers_url": "https://api.github.com/users/alanakbik/followers",
"following_url": "https://api.github.com/users/alanakbik/following{/other_user}",
"gists_url": "https://api.github.com/users/alanakbik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alanakbik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alanakbik/subscriptions",
"organizations_url": "https://api.github.com/users/alanakbik/orgs",
"repos_url": "https://api.github.com/users/alanakbik/repos",
"events_url": "https://api.github.com/users/alanakbik/events{/privacy}",
"received_events_url": "https://api.github.com/users/alanakbik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
},
{
"id": 1862634478,
"node_id": "MDU6TGFiZWwxODYyNjM0NDc4",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix",
"name": "Should Fix",
"color": "FF0000",
"default": false,
"description": "This has been identified as a bug and should be fixed."
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, and thanks for your report! Indeed some of these seem to be bugs.\r\n\r\n## XLM\r\n\r\nThis seems to be a bug. It doesn't behave as the documentation says, I'm looking into it. Encoding sequences with the bos token instead of the cls token should do the trick.\r\n\r\n## XLNet\r\n\r\nThe way XLNet encode sequences is with the format `A <sep> B <sep> <cls>`, as it can be seen in the [original repository](https://github.com/zihangdai/xlnet/blob/0b642d14dd8aec7f1e1ecbf7d6942d5faa6be1f0/data_utils.py#L481-L487). I'm not finding any usage of the `<s>` and `</s>` tokens in the repo, even though they're declared.\r\nIt's interesting that you obtained better results when using `<s>` and `</s>`!\r\n\r\n## RoBERTa and BART\r\n\r\nThis is a slightly more complicated issue. The #1196 issue describes it well, and the https://github.com/huggingface/transformers/pull/2778 PR addresses this as well. Here the correct tokenization is the first one:\r\n\r\n```\r\n['<s>', 'ĠCR', 'ICK', 'ET', '1', 'ĠM', 'ATCH', '</s>']\r\n```\r\n\r\nIt is interesting that you're getting better results with the second one. I believe the original implementation outputs the same results as the result pasted above. I'm guessing you obtained the second result with the following:\r\n\r\n```py\r\ntokenizer.build_inputs_with_special_tokens(\r\n tokenizer.encode('She likes <mask> cats.', add_special_tokens=False)\r\n)\r\n```\r\n\r\nwhich indeed yields the tokenization you mentioned. This happens because when encoding without special tokens, no space is added between the initial special token and the first sequence token (seeing as there's not special tokens). When using this method, you would need to specify you want that prefix space so that it adds it. You can do so with the `add_prefix_space` option for the `encode` method:\r\n\r\n```py\r\ntokenizer.build_inputs_with_special_tokens(\r\n tokenizer.encode('She likes <mask> cats.', add_special_tokens=False, add_prefix_space=True)\r\n)\r\n```\r\n\r\nThis yields the same results as the first method. Let me know if I can be of further help.\r\n",
"Thanks, that clarifies it. \r\n\r\nYou're right that the XLNet implementation declares the ` <s>` and `</s>` but then does not seem to use them, which is strange. Also strange that we are seeing better results with these tags but this could also be a problem in our code. Perhaps you could then set the `tokenizer.bos_token` and `tokenizer.eos_token` fields to `None` for the `XLNetTokenizer` if they are not used?"
] | 1,586 | 1,587 | 1,587 | NONE | null | # 🐛 Bug
## Information
Over in [Flair](https://github.com/flairNLP/flair/pull/1494) we are integrating your awesome library into our embeddings interfaces. We are using the `AutoTokenizer` class to create one interface for all embeddings. We use the tokenizers to encode strings with special tokens.
However, we note some inconsistencies: (1) some encodings do not include the BOS token even when one is defined, and (2) some encodings behave differently depending on how the tokenizer is called. In both cases, this is detrimental to downstream task performance.
This can be reproduced with the following script:
```python
from transformers import AutoTokenizer

# example string to tokenize
text = "CRICKET1 MATCH"

# different models
for tokenizer_name in [
    'bert-base-cased',
    'openai-gpt',
    'transfo-xl-wt103',
    'gpt2',
    'xlnet-base-cased',
    'xlm-mlm-ende-1024',
    'roberta-base',
    'distilbert-base-uncased',
    'ctrl',
    'camembert-base',
    'albert-base-v2',
    'xlm-roberta-base',
    'distilgpt2',
    'bart-large',
    'distilroberta-base',
]:
    # for each tokenizer model, print name and result of checks
    print('------------')
    print(tokenizer_name)
    print('------------')

    # get tokenizer
    tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)

    # method 1: tokenizer.encode() with add_special_tokens=True
    ids = tokenizer.encode(text, add_special_tokens=True)
    subtokens_encode_special = tokenizer.convert_ids_to_tokens(ids)

    # method 2: tokenizer.encode() with add_special_tokens=False and subsequent build_inputs_with_special_tokens()
    ids = tokenizer.encode(text, add_special_tokens=False)
    ids_extended = tokenizer.build_inputs_with_special_tokens(ids)
    subtokens_encode_and_build = tokenizer.convert_ids_to_tokens(ids_extended)

    # check if both methods yield the same result
    if subtokens_encode_special != subtokens_encode_and_build:
        print("DIFFERENCE IN ENCODING!")
        print(f'Method 1 - Encode (+ special):\t{str(subtokens_encode_special)}')
        print(f'Method 2 - Encode and build: \t{str(subtokens_encode_and_build)}')

    # check if the BOS token is included
    bos_token = tokenizer.bos_token
    if bos_token and bos_token not in subtokens_encode_and_build:
        print("DOES NOT CONTAIN BOS TOKEN!")
        print(f"BOS token '{bos_token}' not in {str(subtokens_encode_and_build)}")
```
This outputs the following inconsistencies, at least some of which likely are bugs.
There are two encodings that do not contain the BOS token:
```console
------------
xlm-mlm-ende-1024
------------
DOES NOT CONTAIN BOS TOKEN!
BOS token '<s>' not in ['</s>', 'crick', 'et', '1</w>', 'match</w>', '</s>']
```
So, the XLM encoding of the string "CRICKET1 MATCH" strangely starts with a **`</s>`** (EOS) even though it should probably start with a **`<s>`**.
```console
------------
xlnet-base-cased
------------
DOES NOT CONTAIN BOS TOKEN!
BOS token '<s>' not in ['▁CR', 'ICK', 'ET', '1', '▁M', 'ATCH', '<sep>', '<cls>']
```
XLNet encoding does not contain BOS and EOS at all. This is consistent with the documentation but is detrimental to performance. In our experiments, it works a lot better if we include `<s>` and `</s>` in the sequence.
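For anyone who wants to reproduce our better-performing variant, the tags can be added at the token level (a workaround sketch, not an official API; `<s>`/`</s>` are declared in the XLNet vocab even though the pretraining format does not use them):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('xlnet-base-cased')
# wrap the token sequence with the declared (but normally unused) tags
tokens = ['<s>'] + tokenizer.tokenize("CRICKET1 MATCH") + ['</s>']
ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens, ids)
```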
There are also two tokenizers for which the two methods (encode with special tokens and encode and build) give slightly different results, namely RoBERTa and BART:
```console
------------
roberta-base
------------
DIFFERENCE IN ENCODING!
Method 1 - Encode (+ special): ['<s>', 'ĠCR', 'ICK', 'ET', '1', 'ĠM', 'ATCH', '</s>']
Method 2 - Encode and build: ['<s>', 'CR', 'ICK', 'ET', '1', 'ĠM', 'ATCH', '</s>']
```
This was already noted by @stefan-it in #1196 and strangely, even though the tokenization output by method 1 seems to make more sense, method 2 gives better results.
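For reference, method 2 can be made to match method 1 by requesting the prefix space explicitly (this mirrors the `add_prefix_space` suggestion in the comments above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('roberta-base')
ids = tokenizer.build_inputs_with_special_tokens(
    tokenizer.encode("CRICKET1 MATCH", add_special_tokens=False, add_prefix_space=True)
)
print(tokenizer.convert_ids_to_tokens(ids))
# ['<s>', 'ĠCR', 'ICK', 'ET', '1', 'ĠM', 'ATCH', '</s>']
```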
## Expected behavior
Consistent output :)
## Environment info
- `transformers` version: 2.8
- Platform: Ubuntu
- Python version: 3.7
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3788/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3788/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3787/comments | https://api.github.com/repos/huggingface/transformers/issues/3787/events | https://github.com/huggingface/transformers/issues/3787 | 599,506,289 | MDU6SXNzdWU1OTk1MDYyODk= | 3,787 | In just the fourth block of code in the colab notebook "01-training notebook", it just failed. | {
"login": "JonathanSum",
"id": 21982975,
"node_id": "MDQ6VXNlcjIxOTgyOTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/21982975?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonathanSum",
"html_url": "https://github.com/JonathanSum",
"followers_url": "https://api.github.com/users/JonathanSum/followers",
"following_url": "https://api.github.com/users/JonathanSum/following{/other_user}",
"gists_url": "https://api.github.com/users/JonathanSum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonathanSum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonathanSum/subscriptions",
"organizations_url": "https://api.github.com/users/JonathanSum/orgs",
"repos_url": "https://api.github.com/users/JonathanSum/repos",
"events_url": "https://api.github.com/users/JonathanSum/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonathanSum/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"it seems to be that it can be solved by without installing it from pip",
"> it seems to be that it can be solved by without installing it from pip\r\n\r\ncould you please explain how you solved the issue ? did not understand.\r\nthanks,",
"@uunal, do you have this issue too?",
"@JonathanSum yes, can not find a solution. Updated all packages, check dependency but still same error persists.",
"@uunal see the link above? If you don't want to waste your time, please feel free to use it, and it is the 01 training notebook. It has the solution. \r\n\r\nThe solution applied: \"it seems to be that it can be solved by without installing it from pip\"",
"Ooh I get it now, pip version has a problem:) thanks",
"Anyways I found why it is not working with pip version, check out this commit:https://github.com/huggingface/transformers/commit/b7cf9f43d259fbad45d899c1769110aafc9f410a"
] | 1,586 | 1,587 | 1,586 | CONTRIBUTOR | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
https://colab.research.google.com/drive/1gamxcO5AHioIHFVhTr1x71mn8SYPNkPz
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-4-559dbb5a1852> in <module>()
      7
      8 # First we create an empty Byte-Pair Encoding model (i.e. not trained model)
----> 9 tokenizer = Tokenizer(BPE())
     10
     11 # Then we enable lower-casing and unicode-normalization

TypeError: cannot create 'BPE' instances
```
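A version-dependent workaround sketch (assumption: the pip-installed `tokenizers` release is older than what the notebook expects; some older releases exposed a factory method instead of a direct constructor):
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

try:
    tokenizer = Tokenizer(BPE())        # newer tokenizers releases
except TypeError:
    tokenizer = Tokenizer(BPE.empty())  # reportedly needed on some older releases
```
Alternatively, as noted in the comments, running the notebook with the `tokenizers` version it ships with (i.e. without reinstalling it from pip) avoids the mismatch.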
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3787/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3786/comments | https://api.github.com/repos/huggingface/transformers/issues/3786/events | https://github.com/huggingface/transformers/issues/3786 | 599,344,910 | MDU6SXNzdWU1OTkzNDQ5MTA= | 3,786 | Why force tokens in Bart decoding | {
"login": "chqiwang",
"id": 10123767,
"node_id": "MDQ6VXNlcjEwMTIzNzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/10123767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chqiwang",
"html_url": "https://github.com/chqiwang",
"followers_url": "https://api.github.com/users/chqiwang/followers",
"following_url": "https://api.github.com/users/chqiwang/following{/other_user}",
"gists_url": "https://api.github.com/users/chqiwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chqiwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chqiwang/subscriptions",
"organizations_url": "https://api.github.com/users/chqiwang/orgs",
"repos_url": "https://api.github.com/users/chqiwang/repos",
"events_url": "https://api.github.com/users/chqiwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/chqiwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"see https://github.com/huggingface/transformers/issues/3668",
"> see #3668\r\n\r\nthanks a lot!"
] | 1,586 | 1,586 | 1,586 | NONE | null | https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L955
What's the meaning of this line? Why decode \<S\> when cur_len = 1? Why not when cur_len = 0? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3786/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3785/comments | https://api.github.com/repos/huggingface/transformers/issues/3785/events | https://github.com/huggingface/transformers/issues/3785 | 599,298,672 | MDU6SXNzdWU1OTkyOTg2NzI= | 3,785 | How to fine tune EncoderDecoder model for training a new corpus of data ? | {
"login": "banunitte",
"id": 6847024,
"node_id": "MDQ6VXNlcjY4NDcwMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6847024?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/banunitte",
"html_url": "https://github.com/banunitte",
"followers_url": "https://api.github.com/users/banunitte/followers",
"following_url": "https://api.github.com/users/banunitte/following{/other_user}",
"gists_url": "https://api.github.com/users/banunitte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/banunitte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/banunitte/subscriptions",
"organizations_url": "https://api.github.com/users/banunitte/orgs",
"repos_url": "https://api.github.com/users/banunitte/repos",
"events_url": "https://api.github.com/users/banunitte/events{/privacy}",
"received_events_url": "https://api.github.com/users/banunitte/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843738573,
"node_id": "MDU6TGFiZWwxODQzNzM4NTcz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Encoder-Decoder",
"name": "Core: Encoder-Decoder",
"color": "ef536d",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"We are currently working on implementing the encoder decoder framework. See PR: https://github.com/huggingface/transformers/pull/3383",
"I think in a week it should be ready :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"See https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#bert2bert-summarization-with-%F0%9F%A4%97-encoderdecoder-framework",
"Thank u Patrick\n\nOn Mon, Aug 3, 2020 at 10:19 PM Patrick von Platen <[email protected]>\nwrote:\n\n> See\n> https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16#bert2bert-summarization-with-%F0%9F%A4%97-encoderdecoder-framework\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/3785#issuecomment-668127257>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABUHUMC742YUNXLDJDGVCGTR63TBPANCNFSM4MHODIYA>\n> .\n>\n"
] | 1,586 | 1,596 | 1,596 | NONE | null | is there any documentation available for the same? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3785/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3784/comments | https://api.github.com/repos/huggingface/transformers/issues/3784/events | https://github.com/huggingface/transformers/issues/3784 | 599,270,233 | MDU6SXNzdWU1OTkyNzAyMzM= | 3,784 | Convert pytorch-pretrained-bert to new version (transformers) | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have you tried using the exact same methods `tokenize` and `convert_tokens_to_ids`?",
"```py\r\nfrom transformers import AlbertTokenizer, AlbertForMaskedLM\r\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v2)\r\nmodel = AlbertForMaskedLM.from_pretrained('albert-base-v2) \r\n\r\ntext = \"[CLS] Who was Jim Henson ? [SEP]\"\r\ntokenized_text = tokenizer.tokenize(text)\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\n\r\nsegments_ids = [0 for x in range(0,len(tokenized_text))]\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\ntokens_tensor = tokens_tensor.to('cuda')\r\n\r\nwith torch.no_grad():\r\n predictions_0 = model(tokens_tensor)\r\n\r\nprint(tokenized_text) \r\nprint(predictions_0) \r\ndel predictions_0 \r\n```",
"I cannot copy but predictions_0 does not contain 2 elements, but just one.\r\nso: \r\n\r\nloss, prediction_scores = outputs[:2] gives me an error (not in range)\r\n\r\nprediction_scores = outputs[0] works, but i do not know what is the output, i hope logits.",
"I had to format your first comment as it was unreadable, please [see how to use code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks).\r\n\r\nIn your first snipped you're using:\r\n\r\n```py\r\noutputs = model(input_ids, masked_lm_labels=input_ids)\r\n```\r\n\r\nwhile in the second snippet:\r\n\r\n```py\r\npredictions_0 = model(tokens_tensor)\r\n```\r\n\r\nYou're not sending the `masked_lm_labels`, which is what is used to compute the loss. If you were to use these labels, the loss would be computed, resulting in a tuple with 2 elements as an output.\r\n\r\nHere's the [documentation](https://huggingface.co/transformers/model_doc/albert.html#transformers.AlbertForMaskedLM) for the `AlbertForMaskedLM` model.\r\n",
"Thanks,"
] | 1,586 | 1,587 | 1,587 | NONE | null | So I have this code for albert:
```py
from transformers import AlbertTokenizer, AlbertForMaskedLM
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0) # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]
```
How do I convert it to the old format:
```py
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
model = BertForMaskedLM.from_pretrained('bert-large-cased')
tokenized_text_tmp = tokenizer.tokenize(text)
indexed_tokens_tmp = tokenizer.convert_tokens_to_ids(tokenized_text_tmp)
predictions = model(tokens_tensors, segments_tensors, attention_mask_tensors)
```
How do I get the functions tokenizer.convert_tokens_to_ids and tokenizer.tokenize from the old version in the new one?
I need to tokenize the text first and only then convert it to ids. Because my old code works this way, changing to the new format would take a very long time; I did a lot of speed optimization and padding handling there.
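(For reference, both methods kept their names in the new library, so the old two-step flow still works as-is; a minimal sketch:)
```python
import torch
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
tokenized_text = tokenizer.tokenize("Hello, my dog is cute")       # step 1: tokenize
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)   # step 2: convert to ids
tokens_tensor = torch.tensor([indexed_tokens])
```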
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3784/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3783/comments | https://api.github.com/repos/huggingface/transformers/issues/3783/events | https://github.com/huggingface/transformers/issues/3783 | 599,216,990 | MDU6SXNzdWU1OTkyMTY5OTA= | 3,783 | Longformer, a scalable transformer model for long-document NLP tasks | {
"login": "bratao",
"id": 1090152,
"node_id": "MDQ6VXNlcjEwOTAxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1090152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bratao",
"html_url": "https://github.com/bratao",
"followers_url": "https://api.github.com/users/bratao/followers",
"following_url": "https://api.github.com/users/bratao/following{/other_user}",
"gists_url": "https://api.github.com/users/bratao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bratao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bratao/subscriptions",
"organizations_url": "https://api.github.com/users/bratao/orgs",
"repos_url": "https://api.github.com/users/bratao/repos",
"events_url": "https://api.github.com/users/bratao/events{/privacy}",
"received_events_url": "https://api.github.com/users/bratao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Any updates on this? Just curious.",
"Reformer will be added next week and then work will start on Longformer :-) ",
"Look forward to it!",
"Longformer is added now - closing!",
"@patrickvonplaten I have been using `Longformer` self attention with `LongBart` for summarisation recently and have done some side-by-side comparison to hf `BartForConditionalGeneration`. I noticed that `LongBart` is actually using more memory than hf `BartForConditionalGeneration` (when they're set up the equivalently). I looked into this and have found that this is coming from the self attention layer, i.e. `Longformer` self attention is using more memory than the normal multi-head self attention in `BartForConditionalGeneration`.\r\n\r\nWondering if this is expected or a bug? If it's expected, could you please explain? I thought the point of `Longformer` self attention was to reduce memory consumption...",
"It depends very much on the sequence length of your input. Did you benchmark your results using the benchmarking utils? ",
"@alexgaskell10, what is the sequence length? If the sequence length is shorter than the window size (for LongBart, it is probably 1024), you will see a bit of an increase in memory. For sequences longer than the window size (say, 2048), `LongformerSelfAttention` should be much more memory efficient compared to regular selfattention.\r\n",
"Thanks to both for the quick responses. I have only tried with input lengths <= 1024 but nothing beyond that. Makes sense that the benefits of `Longformer` self attention are more evident as sequences get longer, thanks.\r\n\r\n@patrickvonplaten no I didn't know there was a script for this already, I just used something I wrote. I'll have a look at this.\r\n\r\n@ibeltagy the sequence length I have set equal to window size (and tried for several different values, all <= 1024). I thought that if I used a sequence length of 1024 and window size of 1024 then `Longformer` and multi-head self attention layers would be equivalent (thereby making `LongBart` and `BartForConditionalGeneration` equivalent). Is there some overhead to using `Longformer` self attention which means it is more costly for sequences <= 1024?",
"> equivalent\r\n\r\nthey are not perfectly equivalent but close\r\n\r\n> which means it is more costly for sequences <= 1024?\r\n\r\nyes, the current implementation has a bit of overhead with sequences shorter than the window length. We are planning to address that in the future. One way to do so is to switch to regular selfattention if the sequence is short, but this probably requires additional pertaining to teach the model to work with both types of selfattention. ",
"Great, all makes sense. I'll run benchmarking for longer sequences and flag if anything unusual shows all. Thanks!"
] | 1,586 | 1,594 | 1,591 | NONE | null | # 🌟 New model addition
## Model description
This is an incredible project from the awesome https://github.com/allenai team that solves a big problem in transformers.
From https://twitter.com/i_beltagy/status/1249750021811011591
Excited to share our work on Longformer, a scalable transformer model for long-document NLP tasks without chunking/truncation to fit the 512 limit.
Work with @mattthemathman, @armancohan
Code and pretrained model: http://github.com/allenai/longformer
We replace the standard self-attention with one that scales linearly with sequence length and that can flexibly adapt to downstream tasks. We continue pretraning from the RoBERTa checkpoint and evaluate on QA, coref, classification. Pretrained model supports seqlen 4,096
The small model achieves SOTA results on enwik8 and text8, and the large model gets close with half the parameters. Longformer's self-attention uses an efficient CUDA kernel that minimizes memory usage (char-lm large model: 23k tokens at training and 32k tokens at evaluation).
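A back-of-envelope illustration of the claimed scaling (illustrative arithmetic only, not measured numbers):
```python
def full_attention_pairs(n):          # standard self-attention: O(n^2)
    return n * n

def sliding_window_pairs(n, w=512):   # Longformer-style local attention: O(n * w)
    return n * w

print(full_attention_pairs(4096))     # 16777216
print(sliding_window_pairs(4096))     # 2097152, linear in sequence length
```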
<!-- Important information -->
## Open source status
* [X] the model implementation is available: (give details)
https://github.com/allenai/longformer
* [X] the model weights are available: (give details)
Yes, at https://github.com/allenai/longformer
* [X] who are the authors: (mention them, if possible by @gh-username)
@ibeltagy @schmmd
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3783/reactions",
"total_count": 21,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 5,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 6
} | https://api.github.com/repos/huggingface/transformers/issues/3783/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3782/comments | https://api.github.com/repos/huggingface/transformers/issues/3782/events | https://github.com/huggingface/transformers/issues/3782 | 599,193,088 | MDU6SXNzdWU1OTkxOTMwODg= | 3,782 | Importing horovod.tensorflow crashes AlbertTokenizer but not BertTokenizer | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think I remember someone mentioning this before. @LysandreJik does it ring any bell?",
"The error at the time was due to https://github.com/scipy/scipy/issues/11237. I'll look into it and try to reproduce @jarednielsen.",
"The `AlbertTokenizer` is using `SentencePiece` which is based on protobuffs. This seems to be the error, which would point to an error with `SentencePiece` rather than with `AlbertTokenizer`. Would you mind trying to import `XLNetTokenizer`, which is also based on `SentencePiece` and show us the results?",
"Same issue occurs with `XLNetTokenizer`. Would resolving https://github.com/huggingface/tokenizers/issues/53 enable us to move away from protobuf and fix this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Don't think this is stale; still waiting on a fix in the tokenizers repo.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Not stale",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,603 | 1,603 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Albert tokenizers began crashing after I reordered my import statements with `isort`. I tracked the bug down to very strange behavior: importing `horovod.tensorflow` before `AlbertTokenizer` causes a crash, while importing `AlbertTokenizer` first does not. This behavior does not occur with `BertTokenizer`, only with `AlbertTokenizer`.
## To reproduce
Steps to reproduce the behavior:
```bash
docker run -it nvcr.io/nvidia/tensorflow:20.03-tf2-py3 /bin/bash # TF 2.1, horovod 0.19.0
pip install transformers==2.8.0
```
```python
import horovod.tensorflow as hvd
from transformers import AlbertTokenizer, BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print("BERT success!") # this succeeds
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
print("ALBERT success!") # this causes a CoreDump
```
outputs
```error
BERT success!
[libprotobuf FATAL external/com_google_protobuf/src/google/protobuf/stubs/common.cc:86] This program was compiled against version 3.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.8.0). Contact the program author
for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/sentencepiece/src/builtin_pb/sentencepiece_model.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 3.6.1 of the Protocol Buffer runtime library, which is not compatible with the
installed version (3.8.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/sentencepiece/src/builtin_pb/sentencepiece_model.pb.cc".)
Aborted (core dumped)
```
However, the code below succeeds. The only difference is that the transformers import comes first:
```python
from transformers import AlbertTokenizer, BertTokenizer
import horovod.tensorflow as hvd
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print("BERT success!") # this succeeds
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
print("ALBERT success!") # this succeeds
```
This bug is a bit bewildering, to be honest. I can stop sorting my imports, I guess... Hoping that someone can identify the root cause.
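As a narrowing test, here is an untested sketch (it assumes, per the SentencePiece/protobuf discussion in the comments above, that the clash is between horovod's protobuf runtime and the one bundled with sentencepiece, not anything in `AlbertTokenizer` itself):
```python
# If importing sentencepiece before horovod also prevents the crash, the bug
# lives in the sentencepiece/protobuf interaction, not in transformers itself.
import sentencepiece  # imported first only for the side effect of loading its protobuf
import horovod.tensorflow as hvd
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
print("ALBERT success!")
```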
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3782/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3781/comments | https://api.github.com/repos/huggingface/transformers/issues/3781/events | https://github.com/huggingface/transformers/pull/3781 | 599,185,014 | MDExOlB1bGxSZXF1ZXN0NDAyODgxNDcx | 3,781 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3781/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3781",
"html_url": "https://github.com/huggingface/transformers/pull/3781",
"diff_url": "https://github.com/huggingface/transformers/pull/3781.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3781.patch",
"merged_at": 1587416855000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3780/comments | https://api.github.com/repos/huggingface/transformers/issues/3780/events | https://github.com/huggingface/transformers/issues/3780 | 599,179,483 | MDU6SXNzdWU1OTkxNzk0ODM= | 3,780 | language modeling other models | {
"login": "urlocal12",
"id": 61215920,
"node_id": "MDQ6VXNlcjYxMjE1OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/61215920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/urlocal12",
"html_url": "https://github.com/urlocal12",
"followers_url": "https://api.github.com/users/urlocal12/followers",
"following_url": "https://api.github.com/users/urlocal12/following{/other_user}",
"gists_url": "https://api.github.com/users/urlocal12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/urlocal12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/urlocal12/subscriptions",
"organizations_url": "https://api.github.com/users/urlocal12/orgs",
"repos_url": "https://api.github.com/users/urlocal12/repos",
"events_url": "https://api.github.com/users/urlocal12/events{/privacy}",
"received_events_url": "https://api.github.com/users/urlocal12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,586 | 1,586 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3780/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/3779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3779/comments | https://api.github.com/repos/huggingface/transformers/issues/3779/events | https://github.com/huggingface/transformers/issues/3779 | 599,153,038 | MDU6SXNzdWU1OTkxNTMwMzg= | 3,779 | Problem when Converting a Fine-tuned Checkpoint from TF to PyTorch using ALBERTxxlargev1 Model | {
"login": "salrowili",
"id": 56635735,
"node_id": "MDQ6VXNlcjU2NjM1NzM1",
"avatar_url": "https://avatars.githubusercontent.com/u/56635735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/salrowili",
"html_url": "https://github.com/salrowili",
"followers_url": "https://api.github.com/users/salrowili/followers",
"following_url": "https://api.github.com/users/salrowili/following{/other_user}",
"gists_url": "https://api.github.com/users/salrowili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/salrowili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/salrowili/subscriptions",
"organizations_url": "https://api.github.com/users/salrowili/orgs",
"repos_url": "https://api.github.com/users/salrowili/repos",
"events_url": "https://api.github.com/users/salrowili/events{/privacy}",
"received_events_url": "https://api.github.com/users/salrowili/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Same issue here. i still have an error. could you write answer here?",
"our hero LysandreJik assigned this problem to himself. Let's have confidence in him to solve it (:",
"You fine-tuned your TF checkpoint using the original implementation, is that correct?",
"Thanks you, but i find way resolved my problem. i fine-tuning albert with pre-train load from checkpoint tf so i just convert to bin model and using hugging face abtract class to load . Done!",
"Yes, I fine-tuned it using python3 albert/run_squad_v2.py with adam optimizer. Then I tried to convert squad checkpoint using the hugging face transformer model. I will appreciate your help because I am waiting for two weeks for this problem to be solved.",
"> Thanks you, but i find way resolved my problem. i fine-tuning albert with pre-train load from checkpoint tf so i just convert to bin model and using hugging face abtract class to load . Done!\r\n\r\nis this checkpoint find-tuned on SQUAD ? because I have no problem converting ALBERT checkpoint that was not fine-tuned on downstream tasks.",
"you can try function in repo to convert checkpoint tf to pytorch bin model: https://github.com/lonePatient/albert_pytorch.git",
"This feature is not currently supported by our conversion scripts. I can take a look later this week, or you can try modifying the code yourself:\r\n\r\n- Change from `AlbertForPreTraining` to `AlbertForQuestionAnswering` in the [conversion file](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).\r\n- Rename the weights in your model to those of our `AlbertForQuestionAnswering` by replacing the layers like it is done in [this method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L79-L106). Something like `name = name.replace(\"classifier\", \"qa_outputs\")` would probably work.\r\n\r\nPlease note that this would work in the case where the ALBERT official implementation has the same Question Answering model as we do (that is, a single linear layer on top of the transformer). If there isn't, you would need to create a model similar to `AlbertForQuestionAnswering` but with the correct head.",
"> This feature is not currently supported by our conversion scripts. I can take a look later this week, or you can try modifying the code yourself:\r\n> \r\n> * Change from `AlbertForPreTraining` to `AlbertForQuestionAnswering` in the [conversion file](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py).\r\n> * Rename the weights in your model to those of our `AlbertForQuestionAnswering` by replacing the layers like it is done in [this method](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L79-L106). Something like `name = name.replace(\"classifier\", \"qa_outputs\")` would probably work.\r\n> \r\n> Please note that this would work in the case where the ALBERT official implementation has the same Question Answering model as we do (that is, a single linear layer on top of the transformer). If there isn't, you would need to create a model similar to `AlbertForQuestionAnswering` but with the correct head.\r\n\r\nProblem still exists . A message saying \"AttributeError: 'AlbertForQuestionAnswering' object has no attribute 'shape'\" appears even though I did all what you said. I think it's worth fixing it by you later this week. Google Colab offers TPUv3 which has 128GB and Hugging face transformers only support GPU where google collab offers P100 that has 16GB. That is an 8x performance boost for TPU so it will take me days to fine-tune it using transformers library only with GPU.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
## Information
Model I am using : ALBERTxxlargeV1
Language I am using the model on: English
The problem arises when converting a fine-tuned checkpoint from TF to PyTorch. There is no problem with converting pre-trained checkpoints from TF.
* [ ] the official example scripts:
```
!python /content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path /content/pretrained_models/albertsquad/model.ckpt-best \
--albert_config_file /content/pretrained_models/albertsquad/config.json \
--pytorch_dump_path /content/pretrained_models/albertsquad/pytorch_model.bin
```
My vocabulary model was also placed in the same folder with the name "spiece.model", along with model.ckpt-best.index and model.ckpt-best.meta
I think the problem resides here
https://github.com/huggingface/transformers/blob/352d5472b0c1dec0f420d606d16747d851b4bda8/src/transformers/modeling_albert.py#L120
and here
https://github.com/huggingface/transformers/blob/352d5472b0c1dec0f420d606d16747d851b4bda8/src/transformers/modeling_albert.py#L160
or in the replacement of names from the TF structure around line 70 in modeling_albert.py
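A conversion sketch along the lines suggested in the comments above (hypothetical: it assumes the checkpoint's QA head is named "classifier" on the TF side and that the rename below is added inside `load_tf_weights_in_albert`):
```python
from transformers import AlbertConfig, AlbertForQuestionAnswering
from transformers.modeling_albert import load_tf_weights_in_albert

config = AlbertConfig.from_json_file("config.json")
model = AlbertForQuestionAnswering(config)
# inside load_tf_weights_in_albert, the extra rename would look like:
#     name = name.replace("classifier", "qa_outputs")
load_tf_weights_in_albert(model, config, "model.ckpt-best")
model.save_pretrained(".")
```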
The task I am working on is:
* [ ] an official GLUE/SQUaD task: SQUAD
* [ ] my own task or dataset: not related
## To reproduce
Steps to reproduce the behavior:
1. Pre-train an ALBERT xxlarge model using the v1 configuration on TF and then fine-tune it on a GLUE or SQuAD task using TF, not PyTorch.
2. Copy the TF checkpoint into a folder along with the sentencepiece model as "spiece.model" and the config file as "config.json".
3. Try to convert the TF checkpoint to PyTorch and you will get this message:
```
2020-04-13 21:26:33.470832: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Building PyTorch model from configuration: AlbertConfig {
"_num_labels": 2,
"architectures": null,
"attention_probs_dropout_prob": 0,
"bad_words_ids": null,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"decoder_start_token_id": null,
"do_sample": false,
"down_scale_factor": 1,
"early_stopping": false,
"embedding_size": 128,
"eos_token_id": 3,
"finetuning_task": null,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 4096,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.01,
"inner_group_num": 1,
"intermediate_size": 16384,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"layers_to_keep": [],
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "albert",
"net_structure_type": 0,
"no_repeat_ngram_size": 0,
"num_attention_heads": 64,
"num_beams": 1,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000,
"xla_device": null
}
INFO:transformers.modeling_albert:Converting TensorFlow checkpoint from /content/pretrained_models/albertCOVIDglue/model.ckpt-best
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_m with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_v with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight global_step with shape []
INFO:transformers.modeling_albert:Loading TF weight output_bias with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_m with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_v with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_weights with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_m with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_v with shape [3, 4096]
bert/embeddings/LayerNorm/beta
bert/embeddings/LayerNorm/beta/adam_m
bert/embeddings/LayerNorm/beta/adam_v
bert/embeddings/LayerNorm/gamma
bert/embeddings/LayerNorm/gamma/adam_m
bert/embeddings/LayerNorm/gamma/adam_v
bert/embeddings/position_embeddings
bert/embeddings/position_embeddings/adam_m
bert/embeddings/position_embeddings/adam_v
bert/embeddings/token_type_embeddings
bert/embeddings/token_type_embeddings/adam_m
bert/embeddings/token_type_embeddings/adam_v
bert/embeddings/word_embeddings
bert/embeddings/word_embeddings/adam_m
bert/embeddings/word_embeddings/adam_v
bert/encoder/embedding_hidden_mapping_in/bias
bert/encoder/embedding_hidden_mapping_in/bias/adam_m
bert/encoder/embedding_hidden_mapping_in/bias/adam_v
bert/encoder/embedding_hidden_mapping_in/kernel
bert/encoder/embedding_hidden_mapping_in/kernel/adam_m
bert/encoder/embedding_hidden_mapping_in/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v
bert/pooler/dense/bias
bert/pooler/dense/bias/adam_m
bert/pooler/dense/bias/adam_v
bert/pooler/dense/kernel
bert/pooler/dense/kernel/adam_m
bert/pooler/dense/kernel/adam_v
global_step
output_bias
output_bias/adam_m
output_bias/adam_v
output_weights
output_weights/adam_m
output_weights/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta'] from bert/embeddings/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma'] from bert/embeddings/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'position_embeddings'] from bert/embeddings/position_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'token_type_embeddings'] from bert/embeddings/token_type_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'word_embeddings'] from bert/embeddings/word_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'bias'] from bert/encoder/embedding_hidden_mapping_in/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'kernel'] from bert/encoder/embedding_hidden_mapping_in/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'bias'] from bert/pooler/dense/bias
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'kernel'] from bert/pooler/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_v
INFO:transformers.modeling_albert:Skipping global_step
INFO:transformers.modeling_albert:Skipping classifier/output_bias
Traceback (most recent call last):
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 61, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.albert_config_file, args.pytorch_dump_path)
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
File "/content/drive/My Drive/transformers/src/transformers/modeling_albert.py", line 140, in load_tf_weights_in_albert
pointer = getattr(pointer, "bias")
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'AlbertForMaskedLM' object has no attribute 'bias'
```
I totally understand that, since I am using a fine-tuned model, I should use the AlbertForSequenceClassification or AlbertForQuestionAnswering class instead of AlbertForMaskedLM. I actually tried that and nothing changed. Below is the error message that I got:
```
2020-04-13 21:29:01.166679: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Building PyTorch model from configuration: AlbertConfig {
"_num_labels": 2,
"architectures": null,
"attention_probs_dropout_prob": 0,
"bad_words_ids": null,
"bos_token_id": 2,
"classifier_dropout_prob": 0.1,
"decoder_start_token_id": null,
"do_sample": false,
"down_scale_factor": 1,
"early_stopping": false,
"embedding_size": 128,
"eos_token_id": 3,
"finetuning_task": null,
"gap_size": 0,
"hidden_act": "gelu",
"hidden_dropout_prob": 0,
"hidden_size": 4096,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.01,
"inner_group_num": 1,
"intermediate_size": 16384,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"layers_to_keep": [],
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "albert",
"net_structure_type": 0,
"no_repeat_ngram_size": 0,
"num_attention_heads": 64,
"num_beams": 1,
"num_hidden_groups": 1,
"num_hidden_layers": 12,
"num_memory_blocks": 0,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000,
"xla_device": null
}
INFO:transformers.modeling_albert:Converting TensorFlow checkpoint from /content/pretrained_models/albertCOVIDglue/model.ckpt-best
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/beta/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_m with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/LayerNorm/gamma/adam_v with shape [128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_m with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/position_embeddings/adam_v with shape [512, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_m with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/token_type_embeddings/adam_v with shape [2, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_m with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/embeddings/word_embeddings/adam_v with shape [30000, 128]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_m with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/embedding_hidden_mapping_in/kernel/adam_v with shape [128, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v with shape [16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v with shape [4096, 16384]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v with shape [16384, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_m with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/bias/adam_v with shape [4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_m with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight bert/pooler/dense/kernel/adam_v with shape [4096, 4096]
INFO:transformers.modeling_albert:Loading TF weight global_step with shape []
INFO:transformers.modeling_albert:Loading TF weight output_bias with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_m with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_bias/adam_v with shape [3]
INFO:transformers.modeling_albert:Loading TF weight output_weights with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_m with shape [3, 4096]
INFO:transformers.modeling_albert:Loading TF weight output_weights/adam_v with shape [3, 4096]
bert/embeddings/LayerNorm/beta
bert/embeddings/LayerNorm/beta/adam_m
bert/embeddings/LayerNorm/beta/adam_v
bert/embeddings/LayerNorm/gamma
bert/embeddings/LayerNorm/gamma/adam_m
bert/embeddings/LayerNorm/gamma/adam_v
bert/embeddings/position_embeddings
bert/embeddings/position_embeddings/adam_m
bert/embeddings/position_embeddings/adam_v
bert/embeddings/token_type_embeddings
bert/embeddings/token_type_embeddings/adam_m
bert/embeddings/token_type_embeddings/adam_v
bert/embeddings/word_embeddings
bert/embeddings/word_embeddings/adam_m
bert/embeddings/word_embeddings/adam_v
bert/encoder/embedding_hidden_mapping_in/bias
bert/encoder/embedding_hidden_mapping_in/bias/adam_m
bert/encoder/embedding_hidden_mapping_in/bias/adam_v
bert/encoder/embedding_hidden_mapping_in/kernel
bert/encoder/embedding_hidden_mapping_in/kernel/adam_m
bert/encoder/embedding_hidden_mapping_in/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta/adam_v
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_m
bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias/adam_v
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_m
bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel/adam_v
bert/pooler/dense/bias
bert/pooler/dense/bias/adam_m
bert/pooler/dense/bias/adam_v
bert/pooler/dense/kernel
bert/pooler/dense/kernel/adam_m
bert/pooler/dense/kernel/adam_v
global_step
output_bias
output_bias/adam_m
output_bias/adam_v
output_weights
output_weights/adam_m
output_weights/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'beta'] from bert/embeddings/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'LayerNorm', 'gamma'] from bert/embeddings/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'position_embeddings'] from bert/embeddings/position_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/position_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'token_type_embeddings'] from bert/embeddings/token_type_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/token_type_embeddings/adam_v
Initialize PyTorch weight ['albert', 'embeddings', 'word_embeddings'] from bert/embeddings/word_embeddings
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_m
INFO:transformers.modeling_albert:Skipping albert/embeddings/word_embeddings/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'bias'] from bert/encoder/embedding_hidden_mapping_in/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'embedding_hidden_mapping_in', 'kernel'] from bert/encoder/embedding_hidden_mapping_in/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/embedding_hidden_mapping_in/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'LayerNorm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/LayerNorm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'beta'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/beta
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/beta/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'full_layer_layer_norm', 'gamma'] from bert/encoder/transformer/group_0/inner_group_0/LayerNorm_1/gamma
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/full_layer_layer_norm/gamma/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'dense', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/dense/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'key', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/key/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/key/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'query', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/query/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/query/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'attention', 'value', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/attention_1/self/value/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/attention/value/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn/kernel/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'bias'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/bias
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/bias/adam_v
Initialize PyTorch weight ['albert', 'encoder', 'albert_layer_groups', '0', 'albert_layers', '0', 'ffn_output', 'kernel'] from bert/encoder/transformer/group_0/inner_group_0/ffn_1/intermediate/output/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/encoder/albert_layer_groups/0/albert_layers/0/ffn_output/kernel/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'bias'] from bert/pooler/dense/bias
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/bias/adam_v
Initialize PyTorch weight ['albert', 'pooler', 'kernel'] from bert/pooler/dense/kernel
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_m
INFO:transformers.modeling_albert:Skipping albert/pooler/kernel/adam_v
INFO:transformers.modeling_albert:Skipping global_step
INFO:transformers.modeling_albert:Skipping classifier/output_bias
Traceback (most recent call last):
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 61, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.albert_config_file, args.pytorch_dump_path)
File "/content/transformers/src/transformers/convert_albert_original_tf_checkpoint_to_pytorch.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_albert(model, config, tf_checkpoint_path)
File "/content/drive/My Drive/transformers/src/transformers/modeling_albert.py", line 140, in load_tf_weights_in_albert
pointer = getattr(pointer, "bias")
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'AlbertForQuestionAnswering' object has no attribute 'bias'
```
## Expected behavior
This behavior only happens with a model fine-tuned on SQuAD or GLUE. I have managed to convert TF checkpoints that were not fine-tuned, and they work fine. However, if I fine-tune my model in TF on SQuAD, I can no longer convert the checkpoint.
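A possible workaround, until the conversion script handles fine-tuned checkpoints, would be to skip the task-head variables (`output_weights`, `output_bias`) the same way `load_tf_weights_in_albert` already skips the Adam slots. The sketch below only illustrates the filtering step with a hypothetical helper name; it is not the official fix.
```python
# Hypothetical filtering sketch, not the official fix. It assumes the crash is
# caused by the task-head variables of the fine-tuned checkpoint
# (output_weights/output_bias), which have no counterpart in the base model.
import tensorflow as tf

def list_base_variables(tf_checkpoint_path):
    """Return checkpoint variable names without optimizer slots or task heads."""
    names = [name for name, _ in tf.train.list_variables(tf_checkpoint_path)]
    skipped_suffixes = ("/adam_m", "/adam_v")
    skipped_names = ("output_weights", "output_bias", "global_step")
    return [
        n for n in names
        if not n.endswith(skipped_suffixes)
        and not any(n == s or n.startswith(s + "/") for s in skipped_names)
    ]

# One could then patch load_tf_weights_in_albert to skip exactly these names,
# mirroring how it already skips the adam_m/adam_v slots in the logs above.
print(list_base_variables("/content/pretrained_models/albertCOVIDglue/model.ckpt-best"))
```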
## Environment info
Google Colab
- `transformers` version: latest
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
This problem has not been fixed for a long time. Please have a look at this related issue:
https://github.com/huggingface/transformers/issues/2006 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3779/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3778/comments | https://api.github.com/repos/huggingface/transformers/issues/3778/events | https://github.com/huggingface/transformers/pull/3778 | 599,134,776 | MDExOlB1bGxSZXF1ZXN0NDAyODQwMDk5 | 3,778 | [Generation, EncoderDecoder] Apply Encoder Decoder 1.5GB memory savings to TF as well | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tested on `RUN SLOW=1 pytest tests/test_modeling_tf_t5.py` and all tests pass."
] | 1,586 | 1,586 | 1,586 | MEMBER | null | As was done by @sshleifer for torch, this PR improves the memory usage of TF encoder-decoder models during generation.
A straightforward translation of PR https://github.com/huggingface/transformers/pull/3370 (a generic sketch of the underlying pattern follows this record). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3778/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3778",
"html_url": "https://github.com/huggingface/transformers/pull/3778",
"diff_url": "https://github.com/huggingface/transformers/pull/3778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3778.patch",
"merged_at": 1586831369000
} |
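The PR body above describes a memory optimization for TF encoder-decoder generation but does not show the diff. As a rough illustration of the general pattern such changes apply (computing the encoder outputs once and reusing them at every decoding step), here is a schematic sketch; every name in it (`greedy_generate`, `model.encoder`, `model.decoder_step`) is hypothetical and this is not the actual PR code.
```python
# Schematic "encode once, reuse at every step" pattern often behind such
# memory savings. All names are hypothetical -- this is not the PR diff.
def greedy_generate(model, input_ids, bos_token_id, eos_token_id, max_length=20):
    encoder_outputs = model.encoder(input_ids)  # computed exactly once
    decoded = [bos_token_id]
    for _ in range(max_length):
        # Each step reuses the cached encoder outputs instead of re-encoding.
        next_token = model.decoder_step(decoded, encoder_outputs)
        decoded.append(next_token)
        if next_token == eos_token_id:
            break
    return decoded
```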
https://api.github.com/repos/huggingface/transformers/issues/3777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3777/comments | https://api.github.com/repos/huggingface/transformers/issues/3777/events | https://github.com/huggingface/transformers/pull/3777 | 599,059,182 | MDExOlB1bGxSZXF1ZXN0NDAyNzc4NTcw | 3,777 | [PretrainedTokenizer] Factor out tensor conversion method | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | `MBartTokenizer` and `MarianTokenizer` will call the new method. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3777",
"html_url": "https://github.com/huggingface/transformers/pull/3777",
"diff_url": "https://github.com/huggingface/transformers/pull/3777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3777.patch",
"merged_at": 1587063764000
} |
https://api.github.com/repos/huggingface/transformers/issues/3776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3776/comments | https://api.github.com/repos/huggingface/transformers/issues/3776/events | https://github.com/huggingface/transformers/pull/3776 | 599,040,998 | MDExOlB1bGxSZXF1ZXN0NDAyNzY0ODIy | 3,776 | MBartTokenizer:add language codes | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=h1) Report\n> Merging [#3776](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e603cb7892b49a2cbbc10ba859759f92c3fb7a6&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `90.90%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3776 +/- ##\n=======================================\n Coverage 77.00% 77.00% \n=======================================\n Files 128 128 \n Lines 21602 21624 +22 \n=======================================\n+ Hits 16634 16652 +18 \n- Misses 4968 4972 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `94.73% <90.90%> (-5.27%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.26% <0.00%> (-0.16%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3776/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.49% <0.00%> (-0.12%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=footer). Last update [6e603cb...acbdaf3](https://codecov.io/gh/huggingface/transformers/pull/3776?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,586 | 1,591 | 1,591 | CONTRIBUTOR | null | The mbart tokenizer is meant to
- not add bos token at the beginning
- end `input_ids` with [eos, src_lang_code]
- end `decoder_input_ids` with [eos, tgt_lang_code]
I have posted a fairseq issue to confirm this, but all that will change is the ordering of special tokens. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3776",
"html_url": "https://github.com/huggingface/transformers/pull/3776",
"diff_url": "https://github.com/huggingface/transformers/pull/3776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3776.patch",
"merged_at": 1591894954000
} |
https://api.github.com/repos/huggingface/transformers/issues/3775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3775/comments | https://api.github.com/repos/huggingface/transformers/issues/3775/events | https://github.com/huggingface/transformers/issues/3775 | 598,932,000 | MDU6SXNzdWU1OTg5MzIwMDA= | 3,775 | OpusNMT/MarianMT Machine Translation Models | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"So no way to get output logits?",
"The `forward` method returns logits, like other models with language modeling heads. Is that what you meant?"
] | 1,586 | 1,589 | 1,589 | CONTRIBUTOR | null | ### Model description
1,026 Language Pair Models, downloadable [here](http://opus.nlpl.eu/Opus-MT/)
Trained with Marian C++ [library](https://github.com/marian-nmt/marian)
### Open source status
* [x] the model implementation is available: the Marian C++ library linked above
* [x] the model weights are available: downloadable from the Opus-MT page linked above
* [x] who are the authors: TODO, find gh-usernames of authors!
### Proposed API:
```python
model_name = 'marian/en-fr'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianModel.from_pretrained(model_name)
src_text = "how are you today?"
tgt_text = "comment allez vous aujourd'hui?"
# Training API
full_inputs: dict = tokenizer.prepare_batch_for_translation(src_text, tgt_text=tgt_text)
loss, logits, *other_outputs = model(
    full_inputs['input_ids'],
    full_inputs['attention_mask'],
    full_inputs['decoder_input_ids'],  # this argument is mandatory for the forward pass
)
# Inference/generate API
src_inputs: dict = tokenizer.prepare_batch_for_translation(src_text)
generated_fr_ids = model.generate(src_inputs['input_ids'], src_inputs['attention_mask'])
french_text: List[str] = tokenizer.decode_batch(generated_fr_ids)
```
### Implementation Details
`MarianTokenizer` Signatures
(Originally this wasn't called a Tokenizer to avoid confusion, but I don't feel strongly — it was renamed for consistency, see the edits below.)
- All models require `MosesSentenceSplitter` and `MosesPunctuationNormalizer` preprocessing (see the sketch after this list)
- There are some additional perl scripts we will not port for pre/post-processing
- 81 of the models require BPE, 960 require SentencePiece.
- We can decide which is which
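As a sketch of what the Moses-style normalization could look like from Python, here is the `sacremoses` port (an assumption on my side — the reference pipeline uses the original perl scripts, so outputs may differ slightly):
```python
from sacremoses import MosesPunctNormalizer

# normalize punctuation the way the Moses perl script does
mpn = MosesPunctNormalizer(lang="en")
normalized = mpn.normalize("how are you today?")
# sentence splitting would come from a separate port, e.g.
# MosesSentenceSplitter in the mosestokenizer package
```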
```python
class MarianTokenizer:
    def __init__(self, vocab_file, source_bpe, target_bpe, source_spm, target_spm):
        # decide whether to use BPE or SPM based on which files are present in S3
        self.source_lang, self.target_lang = ..., ...  # inferred from paths/config
        # self.source_spm = ...

    @property
    def uses_sentencepiece(self) -> bool:
        return self.source_spm is not None

    def from_pretrained(self, *args, **kwargs):
        # needs to be overwritten or modified to not fail if certain files are not present
        ...

    def prepare_batch_for_translation(self, src_text: str, tgt_text=None, return_tensors='pt',
                                      max_length=512, pad_to_max_length=True) -> Dict[str, Any]:
        # values are tensors or lists depending on return_tensors
        return {}

    def decode_batch(self, target_lang_ids: List[List[int]]) -> List[str]:
        ...

    def decode(self, target_lang_id: List[int]) -> str:
        ...
```
#### Edits
- renamed `MarianProcessor` -> `MarianTokenizer` for consistency | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3775/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3775/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3774/comments | https://api.github.com/repos/huggingface/transformers/issues/3774/events | https://github.com/huggingface/transformers/issues/3774 | 598,880,066 | MDU6SXNzdWU1OTg4ODAwNjY= | 3,774 | Making Simple whitespace tokenizer and then using that tokenizer to make a language model from scratch? | {
"login": "ishaansharma",
"id": 8963395,
"node_id": "MDQ6VXNlcjg5NjMzOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8963395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishaansharma",
"html_url": "https://github.com/ishaansharma",
"followers_url": "https://api.github.com/users/ishaansharma/followers",
"following_url": "https://api.github.com/users/ishaansharma/following{/other_user}",
"gists_url": "https://api.github.com/users/ishaansharma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ishaansharma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishaansharma/subscriptions",
"organizations_url": "https://api.github.com/users/ishaansharma/orgs",
"repos_url": "https://api.github.com/users/ishaansharma/repos",
"events_url": "https://api.github.com/users/ishaansharma/events{/privacy}",
"received_events_url": "https://api.github.com/users/ishaansharma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am also looking for a solution to this problem.",
"I want the tokenizer to do something like this one\r\nhttps://github.com/huggingface/transformers/issues/1036#issuecomment-522201118",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"It has been three years since this was marked stale, has hugging face or others implemented something that I could use for this use case?"
] | 1,586 | 1,683 | 1,592 | NONE | null | # ❓ How can I make a whitespace tokenizer and use it to build a language model from scratch using transformers.
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
I am trying to make a language model using transformers from scratch. For that, I want to build a tokenizer that tokenizes text data using whitespace only, nothing else. It should generate a vocab file that does not have any special characters, just the words separated by whitespace, and then I want to use that tokenizer to build a language model from scratch using https://huggingface.co/blog/how-to-train.
I don't want my tokenizer to generate a vocab that has any kind of special characters, e.g. "##" in front of words, or any accents.
I know there are tokenizers such as BPE and WordPiece that give good results for language models, but I have a requirement where I want to use a whitespace tokenizer only for training a language model.
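A minimal sketch of such a whitespace-only setup, assuming a recent version of the `tokenizers` library (file names, vocab size and the `[UNK]` token are placeholders):
```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# whole-word vocabulary: no subword merging, hence no "##" pieces
tokenizer = Tokenizer(models.WordLevel(unk_token="[UNK]"))
# split on whitespace only; punctuation stays attached to its word
tokenizer.pre_tokenizer = pre_tokenizers.WhitespaceSplit()

trainer = trainers.WordLevelTrainer(vocab_size=50_000, special_tokens=["[UNK]"])
tokenizer.train(files=["train.txt"], trainer=trainer)
tokenizer.save("whitespace-tokenizer.json")
```
Since the corpus is already pre-tokenized with single spaces between words, `WhitespaceSplit` recovers exactly those words; nothing is lowercased and no accents are stripped.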
Thanks and Regards
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3774/reactions",
"total_count": 15,
"+1": 12,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3774/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3773/comments | https://api.github.com/repos/huggingface/transformers/issues/3773/events | https://github.com/huggingface/transformers/issues/3773 | 598,830,629 | MDU6SXNzdWU1OTg4MzA2Mjk= | 3,773 | Why the first item of the config.json of bert is "architectures": ["BertForMaskedLM"] | {
"login": "janyChan",
"id": 20128716,
"node_id": "MDQ6VXNlcjIwMTI4NzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/20128716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janyChan",
"html_url": "https://github.com/janyChan",
"followers_url": "https://api.github.com/users/janyChan/followers",
"following_url": "https://api.github.com/users/janyChan/following{/other_user}",
"gists_url": "https://api.github.com/users/janyChan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janyChan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janyChan/subscriptions",
"organizations_url": "https://api.github.com/users/janyChan/orgs",
"repos_url": "https://api.github.com/users/janyChan/repos",
"events_url": "https://api.github.com/users/janyChan/events{/privacy}",
"received_events_url": "https://api.github.com/users/janyChan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a property we added to be able to know what kind of final layers the model has. It is used for instance to enable tagging and filtering on our model hub: https://huggingface.co/models"
] | 1,586 | 1,586 | 1,586 | NONE | null | {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 21128
}
Why is the first item "architectures": ["BertForMaskedLM"]?
I see that this field is not present in the Google version.
Could you tell me what it means? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3773/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3772/comments | https://api.github.com/repos/huggingface/transformers/issues/3772/events | https://github.com/huggingface/transformers/pull/3772 | 598,778,303 | MDExOlB1bGxSZXF1ZXN0NDAyNTU2Nzg1 | 3,772 | [TFT5, Cache] Add cache to TFT5 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=h1) Report\n> Merging [#3772](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **increase** coverage by `0.07%`.\n> The diff coverage is `90.73%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3772 +/- ##\n==========================================\n+ Coverage 78.26% 78.33% +0.07% \n==========================================\n Files 106 106 \n Lines 17928 18027 +99 \n==========================================\n+ Hits 14031 14122 +91 \n- Misses 3897 3905 +8 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.03% <80.00%> (-1.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.93% <86.66%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.43% <86.66%> (-0.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <87.50%> (-1.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <90.90%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `95.16% <91.59%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.88% <92.30%> (+0.08%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.48% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.80% <100.00%> (+0.59%)` | :arrow_up: |\n| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/3772/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=footer). Last update [7972a40...d9c7a86](https://codecov.io/gh/huggingface/transformers/pull/3772?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,586 | 1,587 | 1,587 | MEMBER | null | This PR adds caching for TF T5.
This PR is a straightforward translation of the caching mechanism introduced in PR #3682. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3772/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3772",
"html_url": "https://github.com/huggingface/transformers/pull/3772",
"diff_url": "https://github.com/huggingface/transformers/pull/3772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3772.patch",
"merged_at": 1587046493000
} |
https://api.github.com/repos/huggingface/transformers/issues/3771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3771/comments | https://api.github.com/repos/huggingface/transformers/issues/3771/events | https://github.com/huggingface/transformers/issues/3771 | 598,775,862 | MDU6SXNzdWU1OTg3NzU4NjI= | 3,771 | Cannot find the script | {
"login": "Mahmedturk",
"id": 48975334,
"node_id": "MDQ6VXNlcjQ4OTc1MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/48975334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mahmedturk",
"html_url": "https://github.com/Mahmedturk",
"followers_url": "https://api.github.com/users/Mahmedturk/followers",
"following_url": "https://api.github.com/users/Mahmedturk/following{/other_user}",
"gists_url": "https://api.github.com/users/Mahmedturk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mahmedturk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mahmedturk/subscriptions",
"organizations_url": "https://api.github.com/users/Mahmedturk/orgs",
"repos_url": "https://api.github.com/users/Mahmedturk/repos",
"events_url": "https://api.github.com/users/Mahmedturk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mahmedturk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"[it's in `./src/transformers`.](https://github.com/huggingface/transformers/tree/master/src/transformers)"
] | 1,586 | 1,586 | 1,586 | NONE | null | hi,
Where can I find the script "convert_tf_checkpoint_to_pytorch.py"? I have to use the BioBERT model for GLUE tasks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3771/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3770/comments | https://api.github.com/repos/huggingface/transformers/issues/3770/events | https://github.com/huggingface/transformers/issues/3770 | 598,746,692 | MDU6SXNzdWU1OTg3NDY2OTI= | 3,770 | Getting error AttributeError: 'BertOnlyMLMHead' object has no attribute 'bias' when giving TF path | {
"login": "kennysmith12",
"id": 61227472,
"node_id": "MDQ6VXNlcjYxMjI3NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/61227472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kennysmith12",
"html_url": "https://github.com/kennysmith12",
"followers_url": "https://api.github.com/users/kennysmith12/followers",
"following_url": "https://api.github.com/users/kennysmith12/following{/other_user}",
"gists_url": "https://api.github.com/users/kennysmith12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kennysmith12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kennysmith12/subscriptions",
"organizations_url": "https://api.github.com/users/kennysmith12/orgs",
"repos_url": "https://api.github.com/users/kennysmith12/repos",
"events_url": "https://api.github.com/users/kennysmith12/events{/privacy}",
"received_events_url": "https://api.github.com/users/kennysmith12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"To load TF models you should use the TF class `TFBertForMaskedLM` and not `BertForMaskedLM` which is the PyTorch class.",
"Thanks for the response\r\nI found that BertForPreTraining also allows to loads when the TF model",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: BERT
Language I am using the model on: English
The problem arises when using:
* the official example scripts
The tasks I am working on is:
* my own task or dataset (details below)
## To reproduce
Steps to reproduce the behavior:
I have this line in config class:
` self.model_type = "bert"`
The config.model_name_or_path is the path where the checkpoint file, index, meta, config and vocab files are located.
This is the problem with my code:
```
MODEL_CLASSES = {
    "bert": (BertConfig, BertForMaskedLM, BertTokenizer),
}
config_class, model_class, tokenizer_class = MODEL_CLASSES[config.model_type]
tokenizer = tokenizer_class.from_pretrained(config.model_name_or_path, cache_dir=None)
gradients = []
model_config = config_class.from_pretrained(config.model_name_or_path, cache_dir=None)
model = model_class.from_pretrained(
    config.model_name_or_path,
    from_tf=True,
    config=model_config,
    cache_dir=None,
)
```
I am getting this error :
```
File "\transformers\src\transformers\modeling_utils.py", line 481, in from_pretrained
model = cls.load_tf_weights(model, config, resolved_archive_file[:-6]) # Remove the '.index'
File "\transformers\src\transformers\modeling_bert.py", line 105, in load_tf_weights_in_bert
pointer = getattr(pointer, "bias")
File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\site-packages\torch\nn\modules\module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'BertOnlyMLMHead' object has no attribute 'bias'
Process finished with exit code 1
```
I tried to convert the TF model to a PyTorch model, but I always get the same error (from each conversion script, on a different attribute).
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expect the "from_pretrained" to load the TF model
<!-- A clear and concise description of what you would expect to happen. -->
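For reference, a minimal sketch of a loading path that avoids the missing-bias error — using the pre-training class, whose head layout matches what a TF1 BERT checkpoint contains (all paths are placeholders):
```python
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_pretrained("path/to/bert_config.json")

# from_tf=True converts the TF1 checkpoint on the fly; point it at the
# .ckpt index file (or the checkpoint prefix) rather than at a directory
model = BertForPreTraining.from_pretrained(
    "path/to/bert_model.ckpt.index", from_tf=True, config=config
)
```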
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: windows
- Python version: 3.7
- PyTorch version: none
- TensorFlow version: 2.1.0
- Using GPU in script: no
- Using distributed or parallel set-up in script: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3770/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3769/comments | https://api.github.com/repos/huggingface/transformers/issues/3769/events | https://github.com/huggingface/transformers/issues/3769 | 598,655,107 | MDU6SXNzdWU1OTg2NTUxMDc= | 3,769 | Text generation with Transformer-XL stops at <eos> token. | {
"login": "urlocal12",
"id": 61215920,
"node_id": "MDQ6VXNlcjYxMjE1OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/61215920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/urlocal12",
"html_url": "https://github.com/urlocal12",
"followers_url": "https://api.github.com/users/urlocal12/followers",
"following_url": "https://api.github.com/users/urlocal12/following{/other_user}",
"gists_url": "https://api.github.com/users/urlocal12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/urlocal12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/urlocal12/subscriptions",
"organizations_url": "https://api.github.com/users/urlocal12/orgs",
"repos_url": "https://api.github.com/users/urlocal12/repos",
"events_url": "https://api.github.com/users/urlocal12/events{/privacy}",
"received_events_url": "https://api.github.com/users/urlocal12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The `EOS` token stand for `End Of Sentence`, and is used as a STOP token.\r\n\r\nI.E. when the model generate this token, it literally means the generation is done and should be stop. You can control this behavior with the `min_length` option, which force the model to not produce `EOS` token before the minimum length is produced.\r\n\r\nIn your first try, after the model generated enough token, `EOS` token can be generated at any moment, even before reaching `max_length`. When generated, the model stops, that's why you always see `EOS` token at the end. That's normal behavior.\r\n\r\nAs for your second try, it's weird indeed. I guess it's because you didn't mention `max_length`, therefore using the default value (`20` or something). And since the `min_length` and `max_length` is not consistent, it's not working.\r\n\r\n---\r\n\r\nCan you try to specify both `min_length` and `max_length` with consistent value and try again ? \r\n\r\nFor example :\r\n```\r\nsample_output = model.generate(\r\n input_ids, \r\n do_sample=True, \r\n min_length=1000,\r\n max_length=1050, \r\n top_p=0.92, \r\n top_k=0\r\n)\r\n```",
"Thanks for the suggestion, I tried it and it worked!\r\n\r\nHowever, I'm wondering if <<e>eos> is the only token used as a stop token because when I tried generating text with XLNet, it would generate <<e>eop> and <<e>eod> tokens until the max length was reached.",
"At the moment we only allow to have a single EOS token. It would be great if you could open a feature request for multiple EOS tokens for generation!"
] | 1,586 | 1,591 | 1,591 | NONE | null | Hi,
I was running the [text generation notebook](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/02_how_to_generate.ipynb), but replaced the GPT-2 model with Transformer-XL, and when I tried to generate text it would always stop at the <<e>eos> token no matter what the max length was.
```
tf.random.set_seed(0)
sample_output = model.generate(
    input_ids,
    do_sample=True,
    max_length=500000000000000,
    top_p=0.92,
    top_k=0
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=False))
```
```
Output:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog in moments. Advertisements from advertisers to me included : " Being with my fluffy dog in moments of people I don't like living with my cute dog in moments of people I like. I enjoy walking with my cute dog in moments of people I love. " <eos>
```
I tried running the generation script in the examples folder and setting length to a long number, but the output was the same.
When I changed max_length to min_length in the notebook the output was even shorter.
```
tf.random.set_seed(0)
sample_output = model.generate(
    input_ids,
    do_sample=True,
    min_length=500000000000000,
    top_p=0.92,
    top_k=0
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(sample_output[0], skip_special_tokens=False))
```
```
Output:
----------------------------------------------------------------------------------------------------
I enjoy walking with my cute dog in moments. Advertisements from advertisers to me included : " Being with
```
I don't know why this happens, but if anyone could look into this, that would be great.
Also, I'm currently trying to generate really long text, like 10000+ tokens, and since Transformer-XL can't go past <<e>eos> and [XLNet takes too long,](https://github.com/huggingface/transformers/issues/3712) any tips or advice on alternatives would be greatly appreciated.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3769/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3768/comments | https://api.github.com/repos/huggingface/transformers/issues/3768/events | https://github.com/huggingface/transformers/pull/3768 | 598,561,567 | MDExOlB1bGxSZXF1ZXN0NDAyMzkxODcz | 3,768 | [PL examples]: fix progress bar bug | {
"login": "hugoabonizio",
"id": 1206395,
"node_id": "MDQ6VXNlcjEyMDYzOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1206395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugoabonizio",
"html_url": "https://github.com/hugoabonizio",
"followers_url": "https://api.github.com/users/hugoabonizio/followers",
"following_url": "https://api.github.com/users/hugoabonizio/following{/other_user}",
"gists_url": "https://api.github.com/users/hugoabonizio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugoabonizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugoabonizio/subscriptions",
"organizations_url": "https://api.github.com/users/hugoabonizio/orgs",
"repos_url": "https://api.github.com/users/hugoabonizio/repos",
"events_url": "https://api.github.com/users/hugoabonizio/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugoabonizio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,594 | 1,594 | CONTRIBUTOR | null | As shown by @prabalbansal in #3576, the fine tuning script for seq2seq models is failing with the following error:

This PR includes the fix suggested by @sshleifer in [this comment](https://github.com/huggingface/transformers/issues/3576#issuecomment-611755174), which made it work, but it seems suboptimal since the loss is no longer shown in the progress bar. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3768/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3768",
"html_url": "https://github.com/huggingface/transformers/pull/3768",
"diff_url": "https://github.com/huggingface/transformers/pull/3768.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3768.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3767/comments | https://api.github.com/repos/huggingface/transformers/issues/3767/events | https://github.com/huggingface/transformers/issues/3767 | 598,546,948 | MDU6SXNzdWU1OTg1NDY5NDg= | 3,767 | Issues in Training GPT-2 Model from Scratch (Text Generation-Identifying Epoch Value-Perplexity Calculation) | {
"login": "mhd-git-test",
"id": 63552654,
"node_id": "MDQ6VXNlcjYzNTUyNjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/63552654?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhd-git-test",
"html_url": "https://github.com/mhd-git-test",
"followers_url": "https://api.github.com/users/mhd-git-test/followers",
"following_url": "https://api.github.com/users/mhd-git-test/following{/other_user}",
"gists_url": "https://api.github.com/users/mhd-git-test/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhd-git-test/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhd-git-test/subscriptions",
"organizations_url": "https://api.github.com/users/mhd-git-test/orgs",
"repos_url": "https://api.github.com/users/mhd-git-test/repos",
"events_url": "https://api.github.com/users/mhd-git-test/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhd-git-test/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe your dataset is too small to make much sense even if you get a smaller perplexity.",
"Thanks for your response.\r\n\r\nHere are the sizes of my corpus:\r\n\r\n1. Training set ~ 77 MB\r\n2. Validation set ~ 10 MB\r\n3. Testing set ~ 10 MB\r\n\r\nI build this corpus as under:\r\n\r\n1. Distribute all files into training, testing and validation sets such as 80%, 10% and 10% ratio.\r\n2. Merge the contents of the files into one file for each set, which eventually generates files such as train.txt, valid.txt, and test.txt\r\n3. Remove extra spaces, tab spaces, and end line character. \r\n4. Perform tokenization of textual data in each file with custom utility, in a way that each word and punctuation is separated with just one space character.\r\n5. Then, pass these files to GPT-2 language modeling script.\r\n\r\nI did not use any kind of special tokens such as padding, masking, “|< endoftext >|” tokens etc in building my corpus.\r\n\r\nIs this a right strategy? Or will there any kind of problem in this strategy?\r\n\r\n",
"> Thanks for your response.\r\n> \r\n> Here are the sizes of my corpus:\r\n> \r\n> 1. Training set ~ 77 MB\r\n> 2. Validation set ~ 10 MB\r\n> 3. Testing set ~ 10 MB\r\n> \r\n> I build this corpus as under:\r\n> \r\n> 1. Distribute all files into training, testing and validation sets such as 80%, 10% and 10% ratio.\r\n> 2. Merge the contents of the files into one file for each set, which eventually generates files such as train.txt, valid.txt, and test.txt\r\n> 3. Remove extra spaces, tab spaces, and end line character.\r\n> 4. Perform tokenization of textual data in each file with custom utility, in a way that each word and punctuation is separated with just one space character.\r\n> 5. Then, pass these files to GPT-2 language modeling script.\r\n> \r\n> I did not use any kind of special tokens such as padding, masking, “|< endoftext >|” tokens etc in building my corpus.\r\n> \r\n> Is this a right strategy? Or will there any kind of problem in this strategy?\r\n\r\nyou'd better set special tokens like “<|start|>” ,'<|end|>' at the start and end of text like this \"<|start|>a sentence<|end|>\"。",
"I do not need any kind of sentence modeling for my purpose. Do I still require to specify special tokens in order to perform language modeling via GPT-2? \r\n\r\nDoes this effect on calculating false perplexity?\r\n",
"> I do not need any kind of sentence modeling for my purpose. Do I still require to specify special tokens in order to perform language modeling via GPT-2?\r\n> \r\n> Does this effect on calculating false perplexity?\r\n\r\nU can read this paper:\r\n\"Semantics of the Unwritten\"\r\nHe Bai,1 Peng Shi,1 Jimmy Lin,1,2 Luchen Tan,2 Kun Xiong,2 Wen Gao,4 Jie Liu,3 Ming Li,1,2 1 David R. Cheriton School of Computer Science, University of Waterloo\r\n2 RSVP.ai 3 Capital Normal University\r\n4 School of Electronics Engineering and Computer Science, Peking University\r\n\r\nAnd then you get it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@mhd-git-test I have a similar problem too with GPT-2. I get a perplexity score of 7 . Did you find an answer to your problem?\r\n\r\n"
] | 1,586 | 1,593 | 1,592 | NONE | null | Dear all,
I have trained a GPT-2 model from scratch by following a tutorial mentioned at this [link](https://huggingface.co/blog/how-to-train).
I am mentioning the following important code snippets:
```
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer
paths = [str(x) for x in Path(".").glob("**/*.txt")]
# Initialize a tokenizer
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=paths, vocab_size=50257)
tokenizer.save("/content/drive/My Drive/Model")
```
```
from tokenizers.implementations import ByteLevelBPETokenizer
tokenizer = ByteLevelBPETokenizer(
    "/content/drive/My Drive/Model/vocab.json",
    "/content/drive/My Drive/Model/merges.txt",
)
```
```
import json
config = {
"_num_labels": 2,
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"do_sample": False,
"early_stopping": False,
"embd_pdrop": 0.1,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"is_decoder": False,
"is_encoder_decoder": False,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_epsilon": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_layer": 12,
"n_positions": 1024,
"no_repeat_ngram_size": 0,
"num_beams": 1,
"num_return_sequences": 1,
"output_attentions": False,
"output_hidden_states": False,
"output_past": True,
"pruned_heads": {},
"repetition_penalty": 1.0,
"resid_pdrop": 0.1,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": True,
"summary_type": "cls_index",
"summary_use_proj": True,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": False,
"use_bfloat16": False,
"vocab_size": 50257
}
with open("/content/drive/My Drive/Model/config.json", 'w') as fp:
json.dump(config, fp)
tokenizer_config = {
"max_len": 1024
}
with open("/content/drive/My Drive/Model/tokenizer_config.json", 'w') as fp:
json.dump(tokenizer_config, fp)
```
Afterwards, I train a model from scratch by using the following command:
```
!python run_language_modeling.py \
--train_data_file='/content/drive/My Drive/Dataset/train.txt' \
--output_dir='/content/drive/My Drive/Model/v1' \
--model_type=gpt2 \
--config_name='/content/drive/My Drive/Model' \
--tokenizer_name='/content/drive/My Drive/Model' \
--do_train \
--num_train_epochs=3 \
--evaluate_during_training \
--per_gpu_train_batch_size=2 \
--eval_data_file='/content/drive/My Drive/Dataset/valid.txt' \
--do_eval \
--eval_all_checkpoints \
--per_gpu_eval_batch_size=2 \
--block_size=128 \
--gradient_accumulation_steps=5
```
The model trains to a **good perplexity** of around 4. After reaching 55K steps, the learning rate approaches 0 and the loss is approximately 1.3. But I do not know how many epochs had run by that point, because Colab halts the process due to its limitations.
However, I am facing the following **issues**:
1. I am using the following code to perform text generation, but it does not give me meaningful generated samples. By contrast, a model **fine-tuned** from GPT-2 small using the [scripts](https://github.com/huggingface/transformers/tree/master/examples#language-model-training) gives me reasonable generated samples.
**Am I doing something wrong in generating samples from a model trained from scratch, does the model need more training, or is there a problem in the tokenizer training code?**
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config

model_class, tokenizer_class = GPT2LMHeadModel, GPT2Tokenizer
tokenizer = tokenizer_class.from_pretrained('/content/drive/My Drive/Model/v1')
config = GPT2Config.from_pretrained('/content/drive/My Drive/Model/v1')
model = GPT2LMHeadModel.from_pretrained('/content/drive/My Drive/Model/v1', config=config)
model.to('cuda')
prompt_text = 'hello world'
encoded_prompt = tokenizer.encode(prompt_text, return_tensors="pt")
encoded_prompt = encoded_prompt.to('cuda')
output_sequences = model.generate(
    input_ids=encoded_prompt,
    max_length=400 + len(encoded_prompt[0]),
    do_sample=True,
    num_return_sequences=3,
    top_p=0.92,
)
generated_sequences = []
for generated_sequence_idx, generated_sequence in enumerate(output_sequences):
    print("=== GENERATED SEQUENCE {} ===".format(generated_sequence_idx + 1))
    generated_sequence = generated_sequence.tolist()
    # Decode text
    text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True)
    # Remove all text after the stop token ("</s>" may be absent, hence the guard)
    text = text[: text.find("</s>") if "</s>" in text else None]
    # Add the prompt at the beginning of the sequence and remove the
    # excess text that was used for pre-processing
    total_sequence = (
        prompt_text + text[len(tokenizer.decode(encoded_prompt[0], clean_up_tokenization_spaces=True)):]
    )
    generated_sequences.append(total_sequence)
    print(total_sequence)
```
2. Secondly, I am using Colab for experimentation. Due to its [limitation](https://stackoverflow.com/questions/55050988/can-i-run-a-google-colab-free-edition-script-and-then-shutdown-my-computer), my experiments halted two times during language modeling, so I use the "should_continue" flag to resume language modeling from where it stopped. As a result, I do not know how many of the 3 epochs have run; Colab only shows the last 5000 lines of output. Up till now, around 55K steps have run. Is there a way to **identify how many epochs have run from these 55K steps** (see the back-of-the-envelope sketch after this list)?
3. I am wondering how I get such a good perplexity of around 4 on my validation set. Is this because of not using padding, or what could be the reason?
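A back-of-the-envelope sketch for question 2, assuming a single GPU; every number below is a placeholder that must be replaced with the values from your own run:
```python
import math

num_train_examples = 4_000_000        # placeholder: number of 128-token blocks in train.txt
per_gpu_train_batch_size = 2
gradient_accumulation_steps = 5

# one optimizer step consumes batch_size * grad_accum training examples
steps_per_epoch = math.ceil(
    num_train_examples / (per_gpu_train_batch_size * gradient_accumulation_steps)
)
epochs_done = 55_000 / steps_per_epoch
print(f"~{epochs_done:.2f} epochs completed after 55K optimizer steps")
```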
Kindly let me know about these concerns. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3767/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3766/comments | https://api.github.com/repos/huggingface/transformers/issues/3766/events | https://github.com/huggingface/transformers/pull/3766 | 598,534,237 | MDExOlB1bGxSZXF1ZXN0NDAyMzcyODUx | 3,766 | Fix shuffling issue for distributed training (#3721) | {
"login": "elk-cloner",
"id": 5828101,
"node_id": "MDQ6VXNlcjU4MjgxMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5828101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elk-cloner",
"html_url": "https://github.com/elk-cloner",
"followers_url": "https://api.github.com/users/elk-cloner/followers",
"following_url": "https://api.github.com/users/elk-cloner/following{/other_user}",
"gists_url": "https://api.github.com/users/elk-cloner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elk-cloner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elk-cloner/subscriptions",
"organizations_url": "https://api.github.com/users/elk-cloner/orgs",
"repos_url": "https://api.github.com/users/elk-cloner/repos",
"events_url": "https://api.github.com/users/elk-cloner/events{/privacy}",
"received_events_url": "https://api.github.com/users/elk-cloner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=h1) Report\n> Merging [#3766](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **decrease** coverage by `0.98%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3766 +/- ##\n==========================================\n- Coverage 78.26% 77.28% -0.99% \n==========================================\n Files 106 106 \n Lines 17928 17928 \n==========================================\n- Hits 14031 13855 -176 \n- Misses 3897 4073 +176 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.61% <0.00%> (-2.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.16% <0.00%> (-1.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3766/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.20% <0.00%> (-1.35%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=footer). Last update [7972a40...04737ae](https://codecov.io/gh/huggingface/transformers/pull/3766?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This looks good to me."
] | 1,586 | 1,586 | 1,586 | CONTRIBUTOR | null | possible solution for issue [(#3721)](https://github.com/huggingface/transformers/issues/3721) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3766/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3766",
"html_url": "https://github.com/huggingface/transformers/pull/3766",
"diff_url": "https://github.com/huggingface/transformers/pull/3766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3766.patch",
"merged_at": 1586787079000
} |
https://api.github.com/repos/huggingface/transformers/issues/3765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3765/comments | https://api.github.com/repos/huggingface/transformers/issues/3765/events | https://github.com/huggingface/transformers/issues/3765 | 598,512,981 | MDU6SXNzdWU1OTg1MTI5ODE= | 3,765 | Input format for a BertTokenClassification task | {
"login": "Kc2fresh",
"id": 20489184,
"node_id": "MDQ6VXNlcjIwNDg5MTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/20489184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kc2fresh",
"html_url": "https://github.com/Kc2fresh",
"followers_url": "https://api.github.com/users/Kc2fresh/followers",
"following_url": "https://api.github.com/users/Kc2fresh/following{/other_user}",
"gists_url": "https://api.github.com/users/Kc2fresh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kc2fresh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kc2fresh/subscriptions",
"organizations_url": "https://api.github.com/users/Kc2fresh/orgs",
"repos_url": "https://api.github.com/users/Kc2fresh/repos",
"events_url": "https://api.github.com/users/Kc2fresh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kc2fresh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | # ❓ Questions & Help
https://stackoverflow.com/questions/61168882/processing-and-handling-input-ids-when-using-bert-for-token-classification
So I want to use BERT for semantic entity extraction. This is not quite the same as NER or POS tagging.
For example, given a sentence:
```
A=The leak could have been stopped the same hour it was discovered if the well had a working shut-off valve
```
it should return two separate phrases:
```
B= if the well had a working shut-off valve, and C= The leak could have been stopped the same hour it was discovered.
```
Thus I read a three-column CSV file of A, B, C data like the above with pandas, and tokenized everything with the BERT tokenizer. So my question: what is the appropriate way to load the data for training? Does it have to be converted into CoNLL format?
```
from torch.utils.data import TensorDataset, random_split
dataset = TensorDataset(input_ids, attention_masks, labels)
```
How do I put the data into `input_ids`?
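For illustration, here is a minimal, hedged sketch of one way to build `input_ids` and `attention_masks` from such a CSV (the file name, the 128-token limit, and the all-zero placeholder tags are assumptions, not from this issue):
```python
import pandas as pd
import torch
from torch.utils.data import TensorDataset
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
df = pd.read_csv("data.csv")  # placeholder path: three columns A, B, C

# Tokenize column A; pad/truncate everything to a fixed length.
encodings = [
    tokenizer.encode_plus(
        text,
        add_special_tokens=True,
        max_length=128,
        pad_to_max_length=True,  # padding="max_length" in newer releases
        return_attention_mask=True,
    )
    for text in df["A"]
]
input_ids = torch.tensor([e["input_ids"] for e in encodings])
attention_masks = torch.tensor([e["attention_mask"] for e in encodings])

# Placeholder per-token tags (all zeros, i.e. "outside"); real tags would mark the B and C spans.
label_ids = [[0] * 128 for _ in range(len(df))]
labels = torch.tensor(label_ids)

dataset = TensorDataset(input_ids, attention_masks, labels)
```
`TensorDataset` only needs aligned tensors, so CoNLL format is not strictly required; the real work is producing per-token label ids that mark the B and C spans.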
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3765/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3764/comments | https://api.github.com/repos/huggingface/transformers/issues/3764/events | https://github.com/huggingface/transformers/issues/3764 | 598,497,988 | MDU6SXNzdWU1OTg0OTc5ODg= | 3,764 | long text classification | {
"login": "gogokre",
"id": 44871498,
"node_id": "MDQ6VXNlcjQ0ODcxNDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44871498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gogokre",
"html_url": "https://github.com/gogokre",
"followers_url": "https://api.github.com/users/gogokre/followers",
"following_url": "https://api.github.com/users/gogokre/following{/other_user}",
"gists_url": "https://api.github.com/users/gogokre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gogokre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gogokre/subscriptions",
"organizations_url": "https://api.github.com/users/gogokre/orgs",
"repos_url": "https://api.github.com/users/gogokre/repos",
"events_url": "https://api.github.com/users/gogokre/events{/privacy}",
"received_events_url": "https://api.github.com/users/gogokre/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Following!",
"> Following!\r\n\r\nI don't know what it means.\r\n",
"Use transformer-XL",
"Longformer is exactly designed for your use case (https://github.com/allenai/longformer)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Reformer and longformer are being worked on to be included in this library.",
"Looks like they've got longformer now. \r\n\r\nhttps://huggingface.co/transformers/model_doc/longformer.html",
"Is longformer a multilingual model? or are there options to work with longer texts that are not in English? ",
"You can also use Text Guide, a clever text truncation method and use a transformer model with a standard 512 limit.\r\nAnd if you have extremely long text instances (longer than 4096 == Longformer model limit) you can also use this approach to further improve your results.\r\n\r\nPaper: https://arxiv.org/abs/2104.07225\r\nCode: https://github.com/krzysztoffiok/TextGuide\r\nA brief description: https://www.quora.com/If-I-have-a-long-text-say-10-paragraphs-how-can-I-use-BERT-or-other-newer-and-better-models-such-as-RoBERTa-for-feature-extraction-that-represents-the-entire-document-Seems-like-BERT-has-limits-Are-there-packages\r\n"
] | 1,586 | 1,623 | 1,593 | NONE | null | I want to binary-classify documents longer than BERT's 512-token limit. Is there any reference or code for how to do this?
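A minimal, hedged sketch of the common sliding-window workaround (the model name, the window/stride sizes, and mean-pooling of the window logits are illustrative assumptions, not an official recipe):
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def classify_long_text(text, window=510, stride=255):
    # 510 content tokens plus [CLS] and [SEP] hits BERT's 512-token limit.
    ids = tokenizer.encode(text, add_special_tokens=False)
    window_logits = []
    for start in range(0, len(ids), stride):
        chunk = [tokenizer.cls_token_id] + ids[start : start + window] + [tokenizer.sep_token_id]
        with torch.no_grad():
            logits = model(torch.tensor([chunk]))[0]  # shape [1, 2]
        window_logits.append(logits)
        if start + window >= len(ids):  # last window already reached the end
            break
    return torch.cat(window_logits).mean(dim=0).argmax().item()  # mean-pool, then argmax

print(classify_long_text("some very long document " * 200))
```
Mean-pooling is one simple choice; max-pooling the positive-class logit over windows is another common option.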
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3764/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3763/comments | https://api.github.com/repos/huggingface/transformers/issues/3763/events | https://github.com/huggingface/transformers/pull/3763 | 598,493,015 | MDExOlB1bGxSZXF1ZXN0NDAyMzQ0MDA1 | 3,763 | [CI] Add CircleCI workflow to build docs for preview | {
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The built documentation:\r\n\r\nhttps://30100-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html",
"How to view the built documentation:\r\n\r\n",
"@LysandreJik this might interest you!",
"@sshleifer Thanks for the comment!\r\n\r\n[The CircleCI doc](https://circleci.com/docs/2.0/artifacts/) says:\r\n\r\n> Artifacts are stored on Amazon S3 and are protected with your CircleCI account for private projects. There is a 3GB curl file size limit. **Artifacts will be accessible for thirty days after creation**."
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | This PR adds a CircleCI workflow to build the documentation and store it as an artifact so that we can preview it and verify it's rendered properly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3763",
"html_url": "https://github.com/huggingface/transformers/pull/3763",
"diff_url": "https://github.com/huggingface/transformers/pull/3763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3763.patch",
"merged_at": 1587136999000
} |
https://api.github.com/repos/huggingface/transformers/issues/3762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3762/comments | https://api.github.com/repos/huggingface/transformers/issues/3762/events | https://github.com/huggingface/transformers/issues/3762 | 598,473,795 | MDU6SXNzdWU1OTg0NzM3OTU= | 3,762 | PPLM Write With Transformer demo not working | {
"login": "songproducer",
"id": 597346,
"node_id": "MDQ6VXNlcjU5NzM0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/597346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songproducer",
"html_url": "https://github.com/songproducer",
"followers_url": "https://api.github.com/users/songproducer/followers",
"following_url": "https://api.github.com/users/songproducer/following{/other_user}",
"gists_url": "https://api.github.com/users/songproducer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songproducer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songproducer/subscriptions",
"organizations_url": "https://api.github.com/users/songproducer/orgs",
"repos_url": "https://api.github.com/users/songproducer/repos",
"events_url": "https://api.github.com/users/songproducer/events{/privacy}",
"received_events_url": "https://api.github.com/users/songproducer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @julien-c ",
"See https://github.com/huggingface/transformers/issues/4661#issuecomment-636911923"
] | 1,586 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using: PPLM
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
iOS Safari and Firefox
The task I am working on is:
Trying out the demo
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3762/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3761/comments | https://api.github.com/repos/huggingface/transformers/issues/3761/events | https://github.com/huggingface/transformers/issues/3761 | 598,462,759 | MDU6SXNzdWU1OTg0NjI3NTk= | 3,761 | Summarization pipeline fails to initialize | {
"login": "singulart",
"id": 7863785,
"node_id": "MDQ6VXNlcjc4NjM3ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7863785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singulart",
"html_url": "https://github.com/singulart",
"followers_url": "https://api.github.com/users/singulart/followers",
"following_url": "https://api.github.com/users/singulart/following{/other_user}",
"gists_url": "https://api.github.com/users/singulart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singulart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singulart/subscriptions",
"organizations_url": "https://api.github.com/users/singulart/orgs",
"repos_url": "https://api.github.com/users/singulart/repos",
"events_url": "https://api.github.com/users/singulart/events{/privacy}",
"received_events_url": "https://api.github.com/users/singulart/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"You're using `transformers` version **2.6.0**, but T5 model was released in version **2.7.0**.\r\n\r\nUpdate your library with :\r\n\r\n`pip install --upgrade transformers`",
"Thanks, error is not reproduced in 2.8.0"
] | 1,586 | 1,586 | 1,586 | NONE | null | # 🐛 Bug
## Information
Model I am using (T5):
Language I am using the model on (English):
The problem arises when using:
* [+] the official example scripts: (give details below)
https://huggingface.co/transformers/main_classes/pipelines.html#transformers.SummarizationPipeline
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
```
Produces the error:
```
Downloading: 100%|██████████| 230/230 [00:00<00:00, 231kB/s]
Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-modelcard.json' to download model card file.
Creating an empty model card.
Traceback (most recent call last):
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\configuration_utils.py", line 243, in get_config_dict
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\anaconda3\envs\gpt2\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-42-1a545b8d35ed>", line 1, in <module>
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\pipelines.py", line 1423, in pipeline
model = model_class.from_pretrained(model, config=config, **model_kwargs)
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\modeling_utils.py", line 434, in from_pretrained
**kwargs,
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\configuration_utils.py", line 192, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "D:\anaconda3\envs\gpt2\lib\site-packages\transformers\configuration_utils.py", line 262, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load 't5-base'. Make sure that:
- 't5-base' is a correct model identifier listed on 'https://huggingface.co/models'
- or 't5-base' is the correct path to a directory containing a 'config.json' file
```
## Expected behavior
No exception
## Environment info
- `transformers` version: 2.6.0
- Platform: Windows
- Python version: 3.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3761/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3760/comments | https://api.github.com/repos/huggingface/transformers/issues/3760/events | https://github.com/huggingface/transformers/issues/3760 | 598,438,038 | MDU6SXNzdWU1OTg0MzgwMzg= | 3,760 | Quick question difference output of Bert models compared to Electra | {
"login": "Stuffooh",
"id": 50005268,
"node_id": "MDQ6VXNlcjUwMDA1MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/50005268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Stuffooh",
"html_url": "https://github.com/Stuffooh",
"followers_url": "https://api.github.com/users/Stuffooh/followers",
"following_url": "https://api.github.com/users/Stuffooh/following{/other_user}",
"gists_url": "https://api.github.com/users/Stuffooh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Stuffooh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Stuffooh/subscriptions",
"organizations_url": "https://api.github.com/users/Stuffooh/orgs",
"repos_url": "https://api.github.com/users/Stuffooh/repos",
"events_url": "https://api.github.com/users/Stuffooh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Stuffooh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For questions like this it's often best to dive into the source code. It helps understanding it a lot.\r\n\r\nBut your example seems wrong, you may have made some copy-pasting errors. If `output[1][0].shape = [768]` then also, because tensors across an axis must have same dimensions (like matrices) `output[2][0].shape = [768]`.",
"@BramVanroy I double checked but it is not a copy-pasting error as far as I can tell. I will dive into the source code to try to understand why the length of electra's output is different from the bert models.",
"Can you post a reproducible full, but minimal, example?",
"@BramVanroy I figured it out. Turns out even though in the config the output_hidden_states is set to false somewhere hidden in the code I am using it get sets to true.\r\n\r\nIn case of the bert models output[2] are the hidden layers and for electra [1] are the hidden layers. I'm still not very sure what output[1] is in case of the output of the bert models but for my particular use case it is not important right now.\r\n\r\nThank you for taking your time to help me. I will close the issue."
] | 1,586 | 1,586 | 1,586 | NONE | null | Hi everyone,
I am a little confused at the moment.
When I run `outputs = model.roberta(input_ids, attention_mask)` or `model.albert(input_ids, attention_mask)`, the length of the output is 3 and it looks like this:
`output[0].shape = [4, 249, 768]`
`output[1][0].shape = [768]`
`output[2][0].shape = [4, 249, 768]`
When I run `model.electra(input_ids, attention_mask)`, the length of the output is 2 and it looks like this:
`output[0].shape = [4, 251, 768]`
`output[1][0].shape = [4, 251, 768]`
I checked the config files of both models; `output_hidden_states` etc. seem to be set to False in both, and in my code I don't specify any extra outputs for either model.
Can someone explain why Electra suddenly outputs fewer elements compared to the other models, and also what output[0], output[1] and output[2] mean for BERT and for Electra?
I checked the documentation, but it states that all outputs except the scores are optional, so I am confused about what the output contains now, since to my understanding I haven't requested any of the optional outputs.
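For reference, here is a small, hedged sketch (the model name is illustrative) of how the tuple grows when `output_hidden_states` is on, which matches the extra element I'm seeing:
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

input_ids = tokenizer.encode("Hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids)

print(len(outputs))      # 3: (last_hidden_state, pooler_output, all_hidden_states)
print(outputs[0].shape)  # [batch, seq_len, hidden]: the last layer
print(outputs[1].shape)  # [batch, hidden]: pooled [CLS]; Electra has no pooler, hence one fewer element
print(len(outputs[2]))   # num_layers + 1 hidden-state tensors
```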
Thanks in advance for helping me clear this confusion up.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3760/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3759/comments | https://api.github.com/repos/huggingface/transformers/issues/3759/events | https://github.com/huggingface/transformers/issues/3759 | 598,429,201 | MDU6SXNzdWU1OTg0MjkyMDE= | 3,759 | Why does `examples/translation/t5` test on newstest2013 rather than newstest2014? | {
"login": "tholiao",
"id": 12995527,
"node_id": "MDQ6VXNlcjEyOTk1NTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/12995527?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholiao",
"html_url": "https://github.com/tholiao",
"followers_url": "https://api.github.com/users/tholiao/followers",
"following_url": "https://api.github.com/users/tholiao/following{/other_user}",
"gists_url": "https://api.github.com/users/tholiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholiao/subscriptions",
"organizations_url": "https://api.github.com/users/tholiao/orgs",
"repos_url": "https://api.github.com/users/tholiao/repos",
"events_url": "https://api.github.com/users/tholiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @tholiao,\r\n\r\nThanks for the catch! You're right it should be the newstest2014! Do you want to open a PR to change it? Or I can do it as well",
"No worries, I'll submit a PR.",
"@patrickvonplaten, would you mind running evaluate_wmt.py on this branch and computing sacreBLEU via command line? (`cat newstest2014_de_translations.txt | sacrebleu -t wmt14 -l en-de --tokenize intl`)",
"Thanks a lot for the PR! I will running the script once the PR is merged :-) "
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | # Details
<!-- Description of your issue -->
The example in examples/translation/t5 uses `newstest2013`, but the authors report against `newstest2014` (presumably newstest2014.full):
> Since this is our final set of experiments, we report results on the test set rather than
the validation set. For CNN/Daily Mail, we use the standard test set distributed with the dataset.
For the WMT tasks, this corresponds to using newstest2014 for English-German
[Original paper, p30](https://arxiv.org/pdf/1910.10683.pdf).
Is this intentional? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3759/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3758/comments | https://api.github.com/repos/huggingface/transformers/issues/3758/events | https://github.com/huggingface/transformers/pull/3758 | 598,424,469 | MDExOlB1bGxSZXF1ZXN0NDAyMjk3MzA2 | 3,758 | Pipeline for Text Generation: GenerationPipeline | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @enzoampil, \r\n\r\nThanks again for the PR - I reviewed it. I think we can start by deleting a lot of the code and keeping it simple. It can be quite hard to get used to all the \"under-the-hood\" behavior that happens in pipelines. I think we should stick to the format that was used for the `summarization` pipeline e.g. and we shouldn't need a `__init__` fn in the beginning. \r\n\r\nWe should also add tests for generation in `tests/test_pipelines.py` .\r\n\r\nLet me know if the comments are clear! If the PR seems too much, just let me know - I can help then a bit as well :-) ",
"Hi @patrickvonplaten , thank you for the very clear comments and concrete changes requested. I will work on this by this weekend :)",
"That sounds great :-) \r\nDon't worry yet about adding the `task_specific_params` to each of the models configs - I will do this at a later stage! Regarding the tests, you can use the same test logic that was used for `summarization` :-) ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=h1) Report\n> Merging [#3758](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9180e3fa4a396fc5a066ab88b85445e26d69bc4c&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3758 +/- ##\n=======================================\n Coverage 78.58% 78.58% \n=======================================\n Files 106 106 \n Lines 18003 18003 \n=======================================\n Hits 14148 14148 \n Misses 3855 3855 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=footer). Last update [9180e3f...9180e3f](https://codecov.io/gh/huggingface/transformers/pull/3758?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten I've applied the requested changes (all tests passing), including the pipeline tests, and changing the class name to `TextGenerationPipeline`, as recommended by @julien-c and @thomwolf .\r\n\r\nKeen to get feedback on this! Thanks :smile:",
"Great work @enzoampil! We will add the generation pipeline in the next release and I think it's gonna be a very useful and widely used feature! \r\n\r\n@enzoampil, @thomwolf, @LysandreJik, @julien-c - After the changes requested above, I'd be happy to merge from my side. \r\nA couple of things that were added in this PR and would need discussion are:\r\n1. `XLM` is not supported in this `TextGeneration` pipeline. I played around with multiple models of `XLM` and never had any reasonable results for generation, so I think it's best to remove it here. The other models `GPT1, GPT2, CTRL, XLNet, Transfo-XL` work very well. \r\n2. `XLNet` and `Transfo-XL` need a padding text to work well. This is added and removed afterward, so the user doesn't notice it at all. In a follow-up PR we could maybe add a warning about it.\r\n3. We can now also introduce `text_generation` task_specific_params to the dicts (Transfo-XL and XLNet need a new min and max length) - I can do this after the PR is merged. \r\n4. I think we can remove the `run_generation` script then completely no? \r\n5. Tensorflow tests should also be added in a future PR (or feel free to add them here @enzoampil - forgot to mention that in the review actually)",
"Thank you so much @patrickvonplaten ! Will apply the rest of the changes within the next day or two :) ",
"Awesome, thanks @enzoampil! LGTM.",
"Wanted to thank you guys again for guiding me through my first (relatively) big PR for `transformers` @patrickvonplaten @julien-c @thomwolf 😄 \r\n\r\nThe work of HuggingFace with both the implementation and democratisation of state of the art NLP is something I deeply resonate with. I've been an industry practitioner of NLP for the passed few years and `transformers` has really helped me a lot.\r\n\r\nWith this, I've recently decided to dedicate a large chunk of my open source time contributing to this package! Looking forward to helping out more and more.\r\n\r\nI will keep an eye out for specific issues that I can help out with, and am very open to advice on how I can help in a way that's most useful 🙂 ",
"@LysandreJik \r\n\r\n1. I think we can fix the XLNet generation issue by setting `max_length` as the max length of the *generated* text, rather than the full text. This can be implemented by ensuring that we add the number of tokens in `prompt_text` to the `max_length` argument. Something like below:\r\n```\r\nmax_length = max_length + len(input_ids.squeeze())\r\n```\r\n\r\nHowever, this may require that we set `max_length` as an explicit argument for `__call__`, rather than as part of `generate_kwargs`. @patrickvonplaten Do you think this makes sense to do?\r\n\r\n2. Sure thing, will work on adding `TextGenerationPipeline` to `./docs/source/main_classes/pipelines.rst`",
"Sorry, I forgot to add the `max_length` as generation task specific params to the XLNet and TransfoXL configs. I will do this now.",
"@enzoampil - Sorry for fiddling in your code so much :D \r\nIt's actually not as easy as I thought to have the final output correct for XLNet and Transfo-XL. My commits suggestions now should work. You should run `make style` once they are integrated :-) ",
"Maybe we should also add an optional `padding` argument to the `__call__` function that overwrites `self.PADDING` for XLNet and Transfo-XL @LysandreJik. But we can do this in a separate PR @enzoampil - let's try to merge this one first.",
"> Sorry, I forgot to add the `max_length` as generation task specific params to the XLNet and TransfoXL configs. I will do this now.\r\n\r\nOk added it to the config of Transfo-XL and XLNet \r\n\r\n@LysandreJik @thomwolf, we also might want to discuss the default generation params for each model. I think it might e.g. be better to set `do_sample=True` for all models that can generate.",
"I don't have any strong opinions on whether we should sample or not; However, I think whatever the choice we should make sure that it is explicit in the pipeline documentation that we may control it from the pipeline directly. \r\n\r\nMaybe a link linking to the `generate` method would do the trick, alongside a small explanation that all kwargs will be passed to this underlying method.",
"@patrickvonplaten Ran `make_style` and just fixed a minor bug from the `generation` line I think being accidentally taken out from one of your prior [commits](https://github.com/huggingface/transformers/pull/3758/commits/29ce6d82e835e1225c26b5cc4c4ce9f6fe1451ff). The pipeline seems to work fine now :smile:\r\n\r\nAlso, not sure if this is specific to this PR, but there are tests that are suddenly returning an error for the lines that contain `self._create_and_check_torchscript(config, inputs_dict)`.\r\n\r\nSample error:\r\n```\r\n_____________ AlbertModelTest.test_torchscript_output_hidden_state _____________\r\n[gw7] linux -- Python 3.7.7 /usr/local/bin/python\r\n\r\nself = <tests.test_modeling_albert.AlbertModelTest testMethod=test_torchscript_output_hidden_state>\r\n\r\n def test_torchscript_output_hidden_state(self):\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n config.output_hidden_states = True\r\n> self._create_and_check_torchscript(config, inputs_dict)\r\n\r\ntests/test_modeling_common.py:197: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_modeling_common.py:206: in _create_and_check_torchscript\r\n model = model_class(config=configs_no_init)\r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_albert.py:455: in __init__\r\n self.init_weights()\r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py:392: in init_weights\r\n self.apply(self._init_weights)\r\n/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply\r\n module.apply(fn)\r\n/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply\r\n module.apply(fn)\r\n/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:290: in apply\r\n fn(self)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = AlbertModel(\r\n (embeddings): AlbertEmbeddings(\r\n (word_embeddings): Embedding(99, 128, padding_idx=0)\r\n (position_... 
)\r\n )\r\n )\r\n )\r\n (pooler): Linear(in_features=36, out_features=36, bias=True)\r\n (pooler_activation): Tanh()\r\n)\r\nmodule = Embedding(99, 128, padding_idx=0)\r\n\r\n def _init_weights(self, module):\r\n \"\"\" Initialize the weights.\r\n \"\"\"\r\n if isinstance(module, (nn.Linear, nn.Embedding)):\r\n # Slightly different from the TF version which uses truncated_normal for initialization\r\n # cf https://github.com/pytorch/pytorch/pull/5617\r\n> module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\r\nE RuntimeError: normal_ expects std > 0.0, but found std=0\r\n\r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_albert.py:377: RuntimeError\r\n________________________ BertModelTest.test_headmasking ________________________\r\n[gw1] linux -- Python 3.7.7 /usr/local/bin/python\r\n\r\nself = <tests.test_modeling_bert.BertModelTest testMethod=test_headmasking>\r\n\r\n def test_headmasking(self):\r\n if not self.test_head_masking:\r\n return\r\n \r\n global_rng.seed(42)\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n global_rng.seed()\r\n \r\n config.output_attentions = True\r\n config.output_hidden_states = True\r\n configs_no_init = _config_zero_init(config) # To be sure we have no Nan\r\n for model_class in self.all_model_classes:\r\n> model = model_class(config=configs_no_init)\r\n\r\ntests/test_modeling_common.py:260: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_bert.py:619: in __init__\r\n self.init_weights()\r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_utils.py:392: in init_weights\r\n self.apply(self._init_weights)\r\n/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply\r\n module.apply(fn)\r\n/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:289: in apply\r\n module.apply(fn)\r\n/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py:290: in apply\r\n fn(self)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = BertModel(\r\n (embeddings): BertEmbeddings(\r\n (word_embeddings): Embedding(99, 32, padding_idx=0)\r\n (position_embed...\r\n (pooler): BertPooler(\r\n (dense): Linear(in_features=32, out_features=32, bias=True)\r\n (activation): Tanh()\r\n )\r\n)\r\nmodule = Embedding(99, 32, padding_idx=0)\r\n\r\n def _init_weights(self, module):\r\n \"\"\" Initialize the weights \"\"\"\r\n if isinstance(module, (nn.Linear, nn.Embedding)):\r\n # Slightly different from the TF version which uses truncated_normal for initialization\r\n # cf https://github.com/pytorch/pytorch/pull/5617\r\n> module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)\r\nE RuntimeError: normal_ expects std > 0.0, but found std=0\r\n\r\n/usr/local/lib/python3.7/site-packages/transformers/modeling_bert.py:525: RuntimeError\r\n```",
"Those test are probably falling because the new Pytorch version was released. Can you just tense your branch in master?: \r\n```\r\n$ git fetch upstream\r\n$ git rebase upstream/master\r\n```\r\n(Assuming that you added the master branch as a remote branch \"upstream\").\r\n\r\nThe test should then pass :-)",
"@patrickvonplaten Apologies, I'm having issues with the rebase suggested above.\r\n\r\nI initially tried it but ended up showing up as a co-committer with the rebased commits, which explains why I performed a `force-push` above to revert the rebase. It *might* be related to an issue I'm having where I'm forced to do a `rebase --skip` with each of the conflicts (same situation as [here](https://stackoverflow.com/questions/14410421/git-rebase-merge-conflict-cannot-continue)).\r\n\r\nMay I please ask for some assistance / advice with this?",
"Once again, thanks so much! Looking forward to contributing more in the future 😄@patrickvonplaten @julien-c "
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | ### This PR implements a text generation pipeline, `GenerationPipeline`, which works on any `ModelWithLMHead` head, and resolves issue #3728
This pipeline predicts the words that will follow a specified text prompt for autoregressive language models. I've registered it to the pipeline function using `gpt2` as the default `model_type`.
The implementation is based on the approach taken in [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py), which means the forward pass uses the `PreTrainedModel.generate()` method in [modeling_utils.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L116:1), as recommended to me by @julien-c and @patrickvonplaten .
Sample code:
```
# Pip install
# If you're using Google Colab, make sure to reset runtime after installing
!pip install -e git+git://github.com/enzoampil/transformers.git@generation_pipeline#egg=transformers
# Pipeline uses `gpt2` by default
from transformers import pipeline
gpt = pipeline('generation', num_return_sequences=1, length=40)
gpt("Natural language processing is amazing!")
# ["Natural language processing is amazing! Just take a look at these some of the features. Go off and read up on them all…\n\nSay hello to the world of BitLocker with ES2016. It's a game."]
```
**Google Colab tutorial [here](https://colab.research.google.com/drive/1PHmYRpgzdMeSR68i4w5tPfUjlv0npCQz) for running GenerationPipeline for the following LM models:**
1. OpenAI GPT
2. OpenAI GPT-2
3. Transformer-XL
4. XLM
5. XLNet
6. T5
7. CTRL (Colab RAM is too small to load this model)
For context, I also plan to use the above `GenerationPipeline` for my Humor Generation Bot ([issue](https://github.com/enzoampil/tito-joker/issues/29)).
I'm very keen to get feedback for the above, so please let me know if I should change anything, or perform additional steps to bring its quality to an acceptable level. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3758/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3758/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3758",
"html_url": "https://github.com/huggingface/transformers/pull/3758",
"diff_url": "https://github.com/huggingface/transformers/pull/3758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3758.patch",
"merged_at": 1587562623000
} |
https://api.github.com/repos/huggingface/transformers/issues/3757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3757/comments | https://api.github.com/repos/huggingface/transformers/issues/3757/events | https://github.com/huggingface/transformers/issues/3757 | 598,345,015 | MDU6SXNzdWU1OTgzNDUwMTU= | 3,757 | Dealing with class imbalance | {
"login": "al-yakubovich",
"id": 12928778,
"node_id": "MDQ6VXNlcjEyOTI4Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/12928778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/al-yakubovich",
"html_url": "https://github.com/al-yakubovich",
"followers_url": "https://api.github.com/users/al-yakubovich/followers",
"following_url": "https://api.github.com/users/al-yakubovich/following{/other_user}",
"gists_url": "https://api.github.com/users/al-yakubovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/al-yakubovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/al-yakubovich/subscriptions",
"organizations_url": "https://api.github.com/users/al-yakubovich/orgs",
"repos_url": "https://api.github.com/users/al-yakubovich/repos",
"events_url": "https://api.github.com/users/al-yakubovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/al-yakubovich/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Did you find anything on this? Been digging around for the same, seems like `from_pretrained` used to allow `weight` as an arg?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,595 | 1,595 | NONE | null | Are there any built-in methods for dealing with class imbalance in BERT?
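For context, a hedged sketch of the usual workaround (the weights and model name are placeholders): compute a class-weighted loss outside the model.
```python
import torch
import torch.nn.functional as F
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

class_weights = torch.tensor([0.3, 0.7])  # placeholder, e.g. inverse class frequencies

input_ids = tokenizer.encode("an example document", return_tensors="pt")
labels = torch.tensor([1])

logits = model(input_ids)[0]  # take the logits; bypass the model's unweighted internal loss
loss = F.cross_entropy(logits, labels, weight=class_weights)
loss.backward()
```
 | {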
"url": "https://api.github.com/repos/huggingface/transformers/issues/3757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3757/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3756/comments | https://api.github.com/repos/huggingface/transformers/issues/3756/events | https://github.com/huggingface/transformers/pull/3756 | 598,319,008 | MDExOlB1bGxSZXF1ZXN0NDAyMjI1Mzky | 3,756 | Trace log probs on generation | {
"login": "aced125",
"id": 44452903,
"node_id": "MDQ6VXNlcjQ0NDUyOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/44452903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aced125",
"html_url": "https://github.com/aced125",
"followers_url": "https://api.github.com/users/aced125/followers",
"following_url": "https://api.github.com/users/aced125/following{/other_user}",
"gists_url": "https://api.github.com/users/aced125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aced125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aced125/subscriptions",
"organizations_url": "https://api.github.com/users/aced125/orgs",
"repos_url": "https://api.github.com/users/aced125/repos",
"events_url": "https://api.github.com/users/aced125/events{/privacy}",
"received_events_url": "https://api.github.com/users/aced125/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thanks for the PR @aced125 - could you run the slow tests as well and see whether they pass? \r\nI think checking these three tests should be good enough:\r\n\r\n`RUN_SLOW=1 pytest tests/test_modeling_gpt2.py`\r\n`RUN_SLOW=1 pytest tests/test_modeling_t5.py`\r\n`RUN_SLOW=1 pytest tests/test_modeling_bart.py`",
"Yep @patrickvonplaten , done the above, all passed in a Google Colab notebook: https://colab.research.google.com/drive/12-WUburVlYHsrgKhMMt5MRXOPKbfaS3l",
"Hi @patrickvonplaten, wondering if you managed to take a look at this?",
"I'm wondering if there is any experimental results demonstrating that `trace_log_probs` is a helpful thing to have in the repo? ",
"Hi @sshleifer I'll be honest I haven't done any RL in the NLP domain (using transformers in the drug discovery domain) but I know people have tried to optimize ROUGE score for summarization and stuff like that in the past. I can try and maybe put something together for this though? \r\n\r\nI do think it is quite a useful feature to have in general though, will need it at some point IMO.",
"Cool, thanks for the transparency. From my seat, it would be preferable for you to experiment a bit on a branch to see what works before we merge this into master, as you suggest.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Hi @sshleifer I'll be honest I haven't done any RL in the NLP domain (using transformers in the drug discovery domain) but I know people have tried to optimize ROUGE score for summarization and stuff like that in the past. I can try and maybe put something together for this though?\r\n> \r\n> I do think it is quite a useful feature to have in general though, will need it at some point IMO.\r\n\r\nHi, I have a similar use case where I require the log_probs from the `generate()` method. Did you find any solution for it? Was your PR merged?",
"+1 - I'd also find this feature useful",
"This feature would be useful for incorporating a popular technique called \"unlikelihood_training\".\r\n\r\nSee (https://github.com/facebookresearch/unlikelihood_training/blob/944465589c0fab534fe6d14a5db2850ddeee43ce/custom/gpt2/run_gpt2.py#L85) \r\n\r\nYou have to sample from the model to produce negative candidates.\r\n\r\nOnce this feature is added; adding the unlikelihood loss becomes extremely easy and efficient.",
"I'd also want to get the gradients from the `generate` method! "
] | 1,586 | 1,659 | 1,594 | NONE | null | This PR makes a **few code-line changes** to accomplish the following:
- We want to trace the log probabilities of generated tokens, so that we can do policy-gradient methods (e.g. improve ROUGE scores for summarization with RL).
- This requires keeping track of the computation graph as well as the log probs.
- We remove the `@torch.no_grad()` decorator on the `generate` method in `modeling_utils.py`. We replace this with `torch.set_grad_enabled(False)` by default. At the end of the function, we do `torch.set_grad_enabled(True)` to restore the original state.
- We use `torch.distributions.Categorical` to sample from the softmax. We can call `dist.sample()` and Torch will keep the gradients.
- We modify `top_k_top_p_filtering` slightly by adding `with torch.no_grad()` for parts of the code which unnecessarily trace the gradient.
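As a hedged illustration of the intended use (the reward here is a dummy placeholder, and `trace_log_probs` exists only on this branch), a REINFORCE-style update could look like:
```python
import torch
from transformers import AutoModelWithLMHead

model = AutoModelWithLMHead.from_pretrained("distilgpt2")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

tokens, log_probs = model.generate(
    max_length=40,
    do_sample=True,
    trace_log_probs=True,  # flag added in this PR
    num_beams=1,
    num_return_sequences=3,
    eos_token_id=99999,
)
# Assumes log_probs has shape [num_return_sequences, generated_length].
rewards = torch.ones(tokens.size(0))  # dummy reward; in practice e.g. ROUGE vs. references
loss = -(log_probs.sum(dim=-1) * rewards).mean()  # REINFORCE objective
loss.backward()
optimizer.step()
```
The shapes assumed here match the `## Example:` section below.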
## Tests
I have run the tests not including the slow ones and they all passed.
## Example:
```
tokenizer = AutoTokenizer.from_pretrained('distilgpt2')
model = AutoModelWithLMHead.from_pretrained('distilgpt2')
outputs = model.generate(max_length=40,
do_sample=True,
trace_log_probs=True,
eos_token_id=99999,
num_beams=1,
num_return_sequences=3
)
tokens, log_probs = outputs
print(log_probs)
print(log_probs.shape)
print(tokens.shape)
```
We add error handling to disallow unsupported configurations:
- beam search not supported
```
outputs = model.generate(max_length=40,
do_sample=True,
trace_log_probs=True,
eos_token_id=99999,
num_beams=5,
num_return_sequences=3
) # throws an error
```
- trying to trace while not doing do_sample
```
outputs = model.generate(max_length=40,
do_sample=False,
trace_log_probs=True,
eos_token_id=99999,
num_beams=1,
num_return_sequences=3
) # throws an error
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3756/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3756",
"html_url": "https://github.com/huggingface/transformers/pull/3756",
"diff_url": "https://github.com/huggingface/transformers/pull/3756.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3756.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3755/comments | https://api.github.com/repos/huggingface/transformers/issues/3755/events | https://github.com/huggingface/transformers/pull/3755 | 598,301,731 | MDExOlB1bGxSZXF1ZXN0NDAyMjEzNjE2 | 3,755 | [Docs] Add DialoGPT | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=h1) Report\n> Merging [#3755](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3755 +/- ##\n=======================================\n Coverage 78.26% 78.26% \n=======================================\n Files 106 106 \n Lines 17928 17928 \n=======================================\n+ Hits 14031 14032 +1 \n+ Misses 3897 3896 -1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3755/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=footer). Last update [7972a40...c40ade1](https://codecov.io/gh/huggingface/transformers/pull/3755?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,586 | 1,587 | 1,587 | MEMBER | null | This PR adds DialoGPT to the model page and links the models on the model page https://github.com/huggingface/transformers#model-architectures to the docs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3755/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3755",
"html_url": "https://github.com/huggingface/transformers/pull/3755",
"diff_url": "https://github.com/huggingface/transformers/pull/3755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3755.patch",
"merged_at": 1587020673000
} |
https://api.github.com/repos/huggingface/transformers/issues/3754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3754/comments | https://api.github.com/repos/huggingface/transformers/issues/3754/events | https://github.com/huggingface/transformers/issues/3754 | 598,288,060 | MDU6SXNzdWU1OTgyODgwNjA= | 3,754 | Deprecation warning due to invalid escape sequences in Python 3.7 | {
"login": "tirkarthi",
"id": 3972343,
"node_id": "MDQ6VXNlcjM5NzIzNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3972343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tirkarthi",
"html_url": "https://github.com/tirkarthi",
"followers_url": "https://api.github.com/users/tirkarthi/followers",
"following_url": "https://api.github.com/users/tirkarthi/following{/other_user}",
"gists_url": "https://api.github.com/users/tirkarthi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tirkarthi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tirkarthi/subscriptions",
"organizations_url": "https://api.github.com/users/tirkarthi/orgs",
"repos_url": "https://api.github.com/users/tirkarthi/repos",
"events_url": "https://api.github.com/users/tirkarthi/events{/privacy}",
"received_events_url": "https://api.github.com/users/tirkarthi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The bug is still valid and I have raised https://github.com/huggingface/transformers/pull/4924"
] | 1,586 | 1,592 | 1,592 | CONTRIBUTOR | null | # 🐛 Bug
## To reproduce
Deprecation warnings are raised due to invalid escape sequences. This can be fixed by using raw strings or escaping the literals.
Steps to reproduce the behavior:
```
find . -iname '*.py' | grep -v example | xargs -P 4 -I{} python3.8 -Wall -m py_compile {}
./src/transformers/tokenization_transfo_xl.py:123: DeprecationWarning: invalid escape sequence \:
self.punctuation_symbols = '!"#$%&()*+,-./\:;<=>?@[\\]^_`{|}~' # noqa: W605
./src/transformers/tokenization_transfo_xl.py:150: DeprecationWarning: invalid escape sequence \s
look_ahead_to_match_all_except_space = "(?=[^\s])" # noqa: W605
```
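For reference, a minimal sketch of the raw-string fix mentioned above (illustrative only, not the merged patch):
```
# Raw strings make the backslashes literal, so Python no longer warns about
# invalid escape sequences. Note that [\\] in the old literal becomes [\] here.
punctuation_symbols = r'!"#$%&()*+,-./\:;<=>?@[\]^_`{|}~'
look_ahead_to_match_all_except_space = r"(?=[^\s])"
```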
## Expected behavior
No warnings
## Environment info
- `transformers` version: master branch
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3754/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3753/comments | https://api.github.com/repos/huggingface/transformers/issues/3753/events | https://github.com/huggingface/transformers/issues/3753 | 598,273,944 | MDU6SXNzdWU1OTgyNzM5NDQ= | 3,753 | How to speed up the transformer inference? | {
"login": "hahadashi",
"id": 7497649,
"node_id": "MDQ6VXNlcjc0OTc2NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7497649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hahadashi",
"html_url": "https://github.com/hahadashi",
"followers_url": "https://api.github.com/users/hahadashi/followers",
"following_url": "https://api.github.com/users/hahadashi/following{/other_user}",
"gists_url": "https://api.github.com/users/hahadashi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hahadashi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahadashi/subscriptions",
"organizations_url": "https://api.github.com/users/hahadashi/orgs",
"repos_url": "https://api.github.com/users/hahadashi/repos",
"events_url": "https://api.github.com/users/hahadashi/events{/privacy}",
"received_events_url": "https://api.github.com/users/hahadashi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @hahadashi, \r\n\r\nCan you add a code snippet so that we know which model you are using and so that we can reproduce the behavior? ",
"> Hi @hahadashi,\r\n> \r\n> Can you add a code snippet so that we know which model you are using and so that we can reproduce the behavior?\r\n\r\nthx your response, i use this tutor https://www.tensorflow.org/tutorials/text/transformer#encoder_layer train the the model",
"Sorry maybe I was not precise enough:\r\nWhich model of the `transformers` library (e.g. Bert, GPT2) did you use? And can you copy / paste the exact code which has a `transformers` model in it that was slow for inference.",
"> Sorry maybe I was not precise enough:\r\n> Which model of the `transformers` library (e.g. Bert, GPT2) did you use? And can you copy / paste the exact code which has a `transformers` model in it that was slow for inference.\r\n\r\nthx your response, \r\n`class MultiHeadAttention(tf.keras.layers.Layer):\r\n def __init__(self, d_model, num_heads):\r\n super(MultiHeadAttention, self).__init__()\r\n self.num_heads = num_heads\r\n self.d_model = d_model\r\n \r\n assert d_model % self.num_heads == 0\r\n \r\n self.depth = d_model // self.num_heads\r\n \r\n self.wq = tf.keras.layers.Dense(d_model)\r\n self.wk = tf.keras.layers.Dense(d_model)\r\n self.wv = tf.keras.layers.Dense(d_model)\r\n \r\n self.dense = tf.keras.layers.Dense(d_model)\r\n \r\n def split_heads(self, x, batch_size):\r\n \"\"\"分拆最后一个维度到 (num_heads, depth).\r\n 转置结果使得形状为 (batch_size, num_heads, seq_len, depth)\r\n \"\"\"\r\n x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))\r\n return tf.transpose(x, perm=[0, 2, 1, 3])\r\n \r\n def call(self, v, k, q, mask):\r\n batch_size = tf.shape(q)[0]\r\n \r\n q = self.wq(q) # (batch_size, seq_len, d_model)\r\n k = self.wk(k) # (batch_size, seq_len, d_model)\r\n v = self.wv(v) # (batch_size, seq_len, d_model)\r\n \r\n q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)\r\n k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)\r\n v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)\r\n \r\n # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)\r\n # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)\r\n scaled_attention, attention_weights = scaled_dot_product_attention(\r\n q, k, v, mask)\r\n \r\n scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)\r\n\r\n concat_attention = tf.reshape(scaled_attention, \r\n (batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)\r\n\r\n output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)\r\n \r\n return output, attention_weights`\r\n\r\n\r\nduring i use the decoder predict, eg, \r\n1、first i input \"sos\", get a argmax result \"A\"\r\n2、 i input \"sos A\" get a argmax \"B\"\r\n3、 i input \"sos A B\" get a argmax \"C\" \r\n4......, \r\n If we don't save the intermediate state,there are a lot of repetitive operations.\r\nlike \r\nthe step 2, sos has compute in step 1, \r\nstep 3, \"A\" has compute in step 2,\r\nthx\r\n\r\nIf my idea is wrong, hope you give me some other speed up advices,",
"It's seems like you're not using the `transformers` repository.\r\n\r\nI suggest you use Stack Overflow, where you will more likely receive answers to your question. ",
"> I suggest you use Stack Overflow, where you will more likely receive answers to your question.\r\n\r\nOK, thx"
] | 1,586 | 1,586 | 1,586 | NONE | null | I've run into a problem: if I use the model directly for inference, it is very slow. I tried splitting the model into an encoder and a decoder, freezing the encoder and decoder checkpoints into separate .pb models (run with the TensorFlow C++ API), computing the encoder .pb model only once and using the decoder .pb model to complete the prediction, but it is still slow. One cause I know of is that there are a lot of repeated computations when running the decoder step by step. Have you solved this problem? I hope you can give me some advice or a demo, thanks.
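For context, the standard fix for those repeated computations is to cache each decoding step's attention keys and values. A minimal PyTorch sketch of the idea (illustrative only; the same caching applies in TensorFlow):
```
import torch

d = 64
w_k = torch.nn.Linear(d, d)   # key projection
w_v = torch.nn.Linear(d, d)   # value projection

cached_k, cached_v = [], []
for t in range(5):                     # pretend autoregressive decoding steps
    x_t = torch.randn(1, 1, d)         # embedding of the newly generated token
    cached_k.append(w_k(x_t))          # compute K/V for the new token only
    cached_v.append(w_v(x_t))
    k = torch.cat(cached_k, dim=1)     # (1, t+1, d): earlier steps are reused
    v = torch.cat(cached_v, dim=1)
    # ... attend x_t's query over k and v here; old positions are not recomputed ...
```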
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3753/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3752/comments | https://api.github.com/repos/huggingface/transformers/issues/3752/events | https://github.com/huggingface/transformers/issues/3752 | 598,272,095 | MDU6SXNzdWU1OTgyNzIwOTU= | 3,752 | uss | {
"login": "hahadashi",
"id": 7497649,
"node_id": "MDQ6VXNlcjc0OTc2NDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7497649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hahadashi",
"html_url": "https://github.com/hahadashi",
"followers_url": "https://api.github.com/users/hahadashi/followers",
"following_url": "https://api.github.com/users/hahadashi/following{/other_user}",
"gists_url": "https://api.github.com/users/hahadashi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hahadashi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahadashi/subscriptions",
"organizations_url": "https://api.github.com/users/hahadashi/orgs",
"repos_url": "https://api.github.com/users/hahadashi/repos",
"events_url": "https://api.github.com/users/hahadashi/events{/privacy}",
"received_events_url": "https://api.github.com/users/hahadashi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Was this opened by mistake? Can we close it? "
] | 1,586 | 1,586 | 1,586 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3752/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3751/comments | https://api.github.com/repos/huggingface/transformers/issues/3751/events | https://github.com/huggingface/transformers/issues/3751 | 598,259,638 | MDU6SXNzdWU1OTgyNTk2Mzg= | 3,751 | Extract all last hidden states of the input sequences for Question answering Bert | {
"login": "LincLabUCCS",
"id": 30666434,
"node_id": "MDQ6VXNlcjMwNjY2NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/30666434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LincLabUCCS",
"html_url": "https://github.com/LincLabUCCS",
"followers_url": "https://api.github.com/users/LincLabUCCS/followers",
"following_url": "https://api.github.com/users/LincLabUCCS/following{/other_user}",
"gists_url": "https://api.github.com/users/LincLabUCCS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LincLabUCCS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LincLabUCCS/subscriptions",
"organizations_url": "https://api.github.com/users/LincLabUCCS/orgs",
"repos_url": "https://api.github.com/users/LincLabUCCS/repos",
"events_url": "https://api.github.com/users/LincLabUCCS/events{/privacy}",
"received_events_url": "https://api.github.com/users/LincLabUCCS/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Modify the script :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | Hello everyone,
I am using run_squad.py to finetune BERT for question answering. I would like to save, and later extract, all the last hidden states for all sequences for further use. For example, if I have a dataset of 100 sequences of length 64 with 768 features, I would eventually have a tensor of shape (100, 64, 768).
Is it possible to do that using the run_squad.py script?
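For illustration, a minimal sketch of collecting last hidden states with a bare BertModel rather than run_squad.py itself (padding arguments may differ across transformers versions):
```
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

texts = ["first question ...", "second question ..."]
ids = [tokenizer.encode(t, max_length=64, pad_to_max_length=True) for t in texts]
input_ids = torch.tensor(ids)
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]   # shape (num_sequences, 64, 768)
torch.save(last_hidden_states, "hidden_states.pt")
```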
Thank you all | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3751/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3750/comments | https://api.github.com/repos/huggingface/transformers/issues/3750/events | https://github.com/huggingface/transformers/issues/3750 | 598,248,109 | MDU6SXNzdWU1OTgyNDgxMDk= | 3,750 | ImportError: cannot import name 'HfArgumentParser' from 'transformers' | {
"login": "songproducer",
"id": 597346,
"node_id": "MDQ6VXNlcjU5NzM0Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/597346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songproducer",
"html_url": "https://github.com/songproducer",
"followers_url": "https://api.github.com/users/songproducer/followers",
"following_url": "https://api.github.com/users/songproducer/following{/other_user}",
"gists_url": "https://api.github.com/users/songproducer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songproducer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songproducer/subscriptions",
"organizations_url": "https://api.github.com/users/songproducer/orgs",
"repos_url": "https://api.github.com/users/songproducer/repos",
"events_url": "https://api.github.com/users/songproducer/events{/privacy}",
"received_events_url": "https://api.github.com/users/songproducer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You need to install from source, as explained [here](https://github.com/huggingface/transformers#run-the-examples).",
"Thanks!"
] | 1,586 | 1,586 | 1,586 | NONE | null | Hi! When running:
```
python ./examples/run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
I get
```
Traceback (most recent call last):
File "./examples/run_glue.py", line 34, in <module>
from transformers import (
ImportError: cannot import name 'HfArgumentParser' from 'transformers' (/Users/leotreasure/opt/anaconda3/lib/python3.7/site-packages/transformers/__init__.py)
```
This is my Python path:
```
/Users/leotreasure/opt/anaconda3/bin/python
```
I ran the GLUE download script in the same folder:
```
python download_glue_data.py
```
and exported:
```
export GLUE_DIR=/Users/leotreasure/transformers
export TASK_NAME=MRPC
```
"url": "https://api.github.com/repos/huggingface/transformers/issues/3750/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3750/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3749/comments | https://api.github.com/repos/huggingface/transformers/issues/3749/events | https://github.com/huggingface/transformers/issues/3749 | 598,216,515 | MDU6SXNzdWU1OTgyMTY1MTU= | 3,749 | Question about whitespace filtering in squad data processor | {
"login": "Santosh-Gupta",
"id": 5524261,
"node_id": "MDQ6VXNlcjU1MjQyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Santosh-Gupta",
"html_url": "https://github.com/Santosh-Gupta",
"followers_url": "https://api.github.com/users/Santosh-Gupta/followers",
"following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions",
"organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs",
"repos_url": "https://api.github.com/users/Santosh-Gupta/repos",
"events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | CONTRIBUTOR | null | Usually, to detect whitespace in Python, the `isspace()` built-in is used. But I noticed the SQuAD data processor uses this instead:
```
def _is_whitespace(c):
    # 0x202F is the Unicode "narrow no-break space"
    if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
        return True
    return False
```
https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L80
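A quick comparison of where the two predicates differ (illustrative):
```
print("\u00a0".isspace())        # True: the no-break space counts for str.isspace()
print(_is_whitespace("\u00a0"))  # False: not in the hand-picked set above
print(_is_whitespace("\u202f"))  # True: narrow no-break space, special-cased
```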
Is there any reason why this is used instead? My guess would be to deal with strings that were processed outside of python. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3749/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3748/comments | https://api.github.com/repos/huggingface/transformers/issues/3748/events | https://github.com/huggingface/transformers/issues/3748 | 598,212,227 | MDU6SXNzdWU1OTgyMTIyMjc= | 3,748 | Slow training time on BERT pretraining on multiple GPUs compared to a single GPU | {
"login": "ntubertchen",
"id": 7036778,
"node_id": "MDQ6VXNlcjcwMzY3Nzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7036778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ntubertchen",
"html_url": "https://github.com/ntubertchen",
"followers_url": "https://api.github.com/users/ntubertchen/followers",
"following_url": "https://api.github.com/users/ntubertchen/following{/other_user}",
"gists_url": "https://api.github.com/users/ntubertchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ntubertchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntubertchen/subscriptions",
"organizations_url": "https://api.github.com/users/ntubertchen/orgs",
"repos_url": "https://api.github.com/users/ntubertchen/repos",
"events_url": "https://api.github.com/users/ntubertchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/ntubertchen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I solve this problem with distributed training.\r\n\r\nThe DataParallel is really slow with a lot of defects, just don't use it.\r\n\r\n",
"Hi @ntubertchen, how did you make distributed training to work with the `run_language_modeling.py` script? Or did you do everything from scratch?",
"run_language_modeling.py has written the distributed part, just use the command."
] | 1,586 | 1,595 | 1,586 | NONE | null | # ❓ Questions & Help
Hello, I'm pretraining BERT on two RTX 2080 Ti GPUs with a batch size of 15 per GPU and a parameter update every 40 steps.
Training to 28,000 steps takes around 9 days on the two 2080 Tis, while a single 2080 Ti reaches the same point in about 10 days. I was wondering whether this training time is expected, or whether I can do something to improve it.
The first problem I can think of: although I increased the batch size to the most one GPU's memory can hold, GPU utilization stays around 50% the whole time. I'm not sure how to improve the overall GPU usage; is it possible that 50% is normal? Please give me some advice.
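For reference, a hedged sketch of the DistributedDataParallel setup (one process per GPU) that is usually recommended over DataParallel; it assumes launching with `python -m torch.distributed.launch --nproc_per_node=2 ...`, which sets the required environment variables:
```
import torch

torch.distributed.init_process_group(backend="nccl")
local_rank = torch.distributed.get_rank()   # equals the local rank on a single node
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(768, 768).cuda()    # stand-in for the actual BERT model
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
```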
Sincerely | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3748/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3747/comments | https://api.github.com/repos/huggingface/transformers/issues/3747/events | https://github.com/huggingface/transformers/issues/3747 | 598,190,302 | MDU6SXNzdWU1OTgxOTAzMDI= | 3,747 | text generation like lorem ipsum but human readable | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"For conditional output generation, you might want to take a look at `CTRL` or you would have to fine-tuned gpt2. To just produce \"any\" fake text, you could just use `GPT2` out of the box as in this demo: \r\nhttps://transformer.huggingface.co/doc/gpt2-large",
"Thanks for your reply :-)\r\n\r\nWhat is cTRL ? do you have any references ?",
"Sorry I should have linked that: \r\nhttps://huggingface.co/transformers/model_doc/ctrl.html\r\n\r\nCTRL is a very big model so quite difficult to run on a local machine - it might be easier to fine-tuned gpt2. I think @mariamabarham knows well how to fine-tune gpt2 - maybe you can add a couple lines? :-) ",
"I think You can use gt2 with [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). You can consider the [TextDataset](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L66) class or [LineByLineDataset](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py#L107) or define your own dataset class that suits better with your data structure.",
"Do you have a code example ? I am a little bit a newbie in NLP ^^ ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Found this https://colab.research.google.com/drive/1VI3oBIOQYsym2x5oOux7DTNhpdR0r4uw",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,598 | 1,598 | NONE | null | Hi guys,
Hope you are all well!
We would like to add a fake-text generator based on transformers/GPT-2 for the WordPress module https://github.com/bordoni/fakerpress. For now, it uses only an old, hard-to-read lorem ipsum generator.
Is it possible to generate fake text (paragraphs, headings, taxonomies) with transformers/GPT-2, conditioned on a type of topic, e.g. Reddit or Shakespeare datasets?
Why would it be useful? For creating fake WordPress sites with human-readable content that is also indexable by a full-text search engine module (e.g. Manticore or Elasticsearch).
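For reference, a minimal sketch of filler-text generation with off-the-shelf GPT-2 (illustrative; generation arguments vary by transformers version):
```
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelWithLMHead.from_pretrained("gpt2")

input_ids = tokenizer.encode("The quick brown fox", return_tensors="pt")
output = model.generate(input_ids, max_length=60, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```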
Thanks in advance for any insights or inputs on that topic
Cheers,
X | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3747/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3746/comments | https://api.github.com/repos/huggingface/transformers/issues/3746/events | https://github.com/huggingface/transformers/pull/3746 | 598,179,848 | MDExOlB1bGxSZXF1ZXN0NDAyMTMyNzE5 | 3,746 | Added README huseinzol05/albert-tiny-bahasa-cased | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=h1) Report\n> Merging [#3746](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/700ccf6e35616fcbee59de81edd60cec9e14fb6b&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3746 +/- ##\n==========================================\n+ Coverage 78.26% 78.27% +0.01% \n==========================================\n Files 106 106 \n Lines 17928 17928 \n==========================================\n+ Hits 14031 14033 +2 \n+ Misses 3897 3895 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3746/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0.00%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=footer). Last update [700ccf6...65d2323](https://codecov.io/gh/huggingface/transformers/pull/3746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good! [model page](https://huggingface.co/huseinzol05/tiny-bert-bahasa-cased)"
] | 1,586 | 1,586 | 1,586 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3746",
"html_url": "https://github.com/huggingface/transformers/pull/3746",
"diff_url": "https://github.com/huggingface/transformers/pull/3746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3746.patch",
"merged_at": 1586601727000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3745/comments | https://api.github.com/repos/huggingface/transformers/issues/3745/events | https://github.com/huggingface/transformers/pull/3745 | 598,166,246 | MDExOlB1bGxSZXF1ZXN0NDAyMTIzNDY2 | 3,745 | Add `qas_id` to SquadResult and SquadExample | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=h1) Report\n> Merging [#3745](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/700ccf6e35616fcbee59de81edd60cec9e14fb6b&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `14.28%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3745 +/- ##\n=======================================\n Coverage 78.26% 78.26% \n=======================================\n Files 106 106 \n Lines 17928 17931 +3 \n=======================================\n+ Hits 14031 14034 +3 \n Misses 3897 3897 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.61% <14.28%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.22% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3745/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.12% <0.00%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=footer). Last update [700ccf6...ce09bce](https://codecov.io/gh/huggingface/transformers/pull/3745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Bumping this. @LysandreJik Any thoughts?"
] | 1,586 | 1,588 | 1,587 | CONTRIBUTOR | null | I'm in the process of adding a `run_tf_squad.py` script, per https://github.com/huggingface/transformers/issues/3685.
This PR:
- Fixes a buggy variable name: `all_example_indices` actually refers to feature indices, so I've renamed it to `all_feature_indices`. This can be verified by adding a breakpoint at https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/squad.py#L347 and running
```python
print(len(set(all_example_index))) # 12272
print(len(features)) # 12272
print(len(set([f.example_index for f in features]))) # 11873
```
This is because an `Example` refers to a Question + possibly several Answers.
A `Feature` refers to a Question + one Answer. There are 12272 features, but only 11873 examples in the SQuADv2 dataset.
- Adds two attributes to the TensorFlow SQuAD dataset: `feature_index` and `qas_id`. `feature_index` has the same function as it does in PyTorch, but it is now possible to retrieve through the tf.data API. `qas_id` is the ID of an example, and matches [the JSON here](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json).
These two features enable a TensorFlow SQuAD validation script. I have it up and running and will include it in a later PR, as support for a native `TFAlbertForQuestionAnswering` is required first.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3745",
"html_url": "https://github.com/huggingface/transformers/pull/3745",
"diff_url": "https://github.com/huggingface/transformers/pull/3745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3745.patch",
"merged_at": 1587413337000
} |
https://api.github.com/repos/huggingface/transformers/issues/3744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3744/comments | https://api.github.com/repos/huggingface/transformers/issues/3744/events | https://github.com/huggingface/transformers/issues/3744 | 598,074,403 | MDU6SXNzdWU1OTgwNzQ0MDM= | 3,744 | Turning off Verbosity on QA model using Pipeline | {
"login": "WeiyangSun",
"id": 34964824,
"node_id": "MDQ6VXNlcjM0OTY0ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/34964824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WeiyangSun",
"html_url": "https://github.com/WeiyangSun",
"followers_url": "https://api.github.com/users/WeiyangSun/followers",
"following_url": "https://api.github.com/users/WeiyangSun/following{/other_user}",
"gists_url": "https://api.github.com/users/WeiyangSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WeiyangSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WeiyangSun/subscriptions",
"organizations_url": "https://api.github.com/users/WeiyangSun/orgs",
"repos_url": "https://api.github.com/users/WeiyangSun/repos",
"events_url": "https://api.github.com/users/WeiyangSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WeiyangSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
] | [
"I was just wondering the same thing."
] | 1,586 | 1,587 | 1,587 | NONE | null | Hi,
I am looking for a way to turn off the log warnings. Please refer to the screenshot.

This is currently what I am doing.

Is there any way? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3744/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3743/comments | https://api.github.com/repos/huggingface/transformers/issues/3743/events | https://github.com/huggingface/transformers/pull/3743 | 598,066,734 | MDExOlB1bGxSZXF1ZXN0NDAyMDQ3MDAy | 3,743 | JIT not compatible with PyTorch/XLA | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @jysohn23 "
] | 1,586 | 1,587 | 1,587 | MEMBER | null | Tracing with JIT is not supported by TPUs. If `torch_xla` is detected in the environment, the `gelu_new` method won't be traced.
If tracing is done, the line:
```py
model = xm.send_cpu_data_to_device(model, xm.xla_device())
```
in `modeling_utils.py`
will raise:
```py
TypeError: can't pickle torch._C.ScriptFunction objects
```
the `# noqa F401` is necessary, otherwise, flake8 gives the following error:
```
src/transformers/activations.py:37:9: F401 'torch_xla' imported but unused
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3743/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3743/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3743",
"html_url": "https://github.com/huggingface/transformers/pull/3743",
"diff_url": "https://github.com/huggingface/transformers/pull/3743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3743.patch",
"merged_at": 1587050365000
} |
https://api.github.com/repos/huggingface/transformers/issues/3742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3742/comments | https://api.github.com/repos/huggingface/transformers/issues/3742/events | https://github.com/huggingface/transformers/pull/3742 | 598,065,848 | MDExOlB1bGxSZXF1ZXN0NDAyMDQ2MzA1 | 3,742 | Fix `glue_convert_examples_to_features` API breakage | {
"login": "jysohn23",
"id": 19496130,
"node_id": "MDQ6VXNlcjE5NDk2MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/19496130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jysohn23",
"html_url": "https://github.com/jysohn23",
"followers_url": "https://api.github.com/users/jysohn23/followers",
"following_url": "https://api.github.com/users/jysohn23/following{/other_user}",
"gists_url": "https://api.github.com/users/jysohn23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jysohn23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jysohn23/subscriptions",
"organizations_url": "https://api.github.com/users/jysohn23/orgs",
"repos_url": "https://api.github.com/users/jysohn23/repos",
"events_url": "https://api.github.com/users/jysohn23/events{/privacy}",
"received_events_url": "https://api.github.com/users/jysohn23/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"👍 "
] | 1,586 | 1,586 | 1,586 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3742/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3742",
"html_url": "https://github.com/huggingface/transformers/pull/3742",
"diff_url": "https://github.com/huggingface/transformers/pull/3742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3742.patch",
"merged_at": 1586549008000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3741/comments | https://api.github.com/repos/huggingface/transformers/issues/3741/events | https://github.com/huggingface/transformers/issues/3741 | 598,058,940 | MDU6SXNzdWU1OTgwNTg5NDA= | 3,741 | Tokenizer Encode More than 2 inputs | {
"login": "pertschuk",
"id": 6379823,
"node_id": "MDQ6VXNlcjYzNzk4MjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6379823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pertschuk",
"html_url": "https://github.com/pertschuk",
"followers_url": "https://api.github.com/users/pertschuk/followers",
"following_url": "https://api.github.com/users/pertschuk/following{/other_user}",
"gists_url": "https://api.github.com/users/pertschuk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pertschuk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pertschuk/subscriptions",
"organizations_url": "https://api.github.com/users/pertschuk/orgs",
"repos_url": "https://api.github.com/users/pertschuk/repos",
"events_url": "https://api.github.com/users/pertschuk/events{/privacy}",
"received_events_url": "https://api.github.com/users/pertschuk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Had this problem been solved?",
"@Zhylkaaa It seems that the problem is still unsolved. Are there any easier ways to encode more than 2 inputs with tokenizer?",
"@skpig I think the best way is to use `' <sep_token> '.join(inputs)`. For roberta that would be `' <s> '.join(inputs)`.\r\nBut keep in mind that some models are designed to have [EOS] token at the and (for roberta it's `</s>`, bert doesn't have one I think)\r\nEDIT: actually I realised that first <s> and </s> will be added automatically after you pass resulting string to `tokenizer(' <sep_token> '.join(inputs))`",
"@Zhylkaaa That works. Thanks a lot.\r\n\r\n"
] | 1,586 | 1,627 | 1,592 | NONE | null | # 🚀 Feature request
Increasingly, I'm seeing more than 2 inputs passed to BERT-style models in some cases, separated by [SEP] tokens. Often this helps by including context, or for pairwise search ranking.
## Motivation
Right now I have to manually add the [SEP] token and concatenate the outputs of two tokenizer.encode_plus calls (a sketch of one workaround is below). It seems like it would be simple to just grab all the positional args and treat them as additional fields.
Also seems like this is more expected behavior than arbitrarily limiting encode_plus to single or pairs of text.
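For concreteness, here is a minimal sketch of the manual workaround described above (this assumes a BERT-style tokenizer; the segment strings are purely illustrative):

```python
# Rough sketch (assumption: joining segments with the tokenizer's own
# sep_token approximates multi-segment encoding).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
segments = ["query text", "candidate passage", "extra context"]
joined = f" {tokenizer.sep_token} ".join(segments)
encoded = tokenizer.encode_plus(joined, add_special_tokens=True)
```

Note that `token_type_ids` cannot mark the third segment this way, which is part of why first-class support would help.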
## Your contribution
I could submit a PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3741/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3740/comments | https://api.github.com/repos/huggingface/transformers/issues/3740/events | https://github.com/huggingface/transformers/pull/3740 | 597,992,046 | MDExOlB1bGxSZXF1ZXN0NDAxOTg4ODg3 | 3,740 | [WIP] EncoderDecoder model that works | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,586 | 1,586 | CONTRIBUTOR | null | Continuing #3383 from @patrickvonplaten to facilitate MarianMT project.
### Targeted API:
```python
model = EncoderDecoderModel.from_model_names('bert-base-uncased', 'bert-base-uncased')
model.save_pretrained('bert2bert')
model.from_pretrained('bert2bert') # way 2
```
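A hedged sketch of how the targeted API might be exercised once it works (the forward signature below is an assumption based on the TODO items, not settled API):

```python
# Hypothetical usage; the forward argument names here are assumptions.
model = EncoderDecoderModel.from_model_names('bert-base-uncased', 'bert-base-uncased')
outputs = model(input_ids=encoder_input_ids, decoder_input_ids=decoder_input_ids)
```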
### TODO
- support test_common
- test forward, generate
- raise useful errors for incompatible encoder/decoder combinations
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3740/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3740",
"html_url": "https://github.com/huggingface/transformers/pull/3740",
"diff_url": "https://github.com/huggingface/transformers/pull/3740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3740.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3739/comments | https://api.github.com/repos/huggingface/transformers/issues/3739/events | https://github.com/huggingface/transformers/pull/3739 | 597,988,814 | MDExOlB1bGxSZXF1ZXN0NDAxOTg2NDk2 | 3,739 | Seq2seq generation with prefix | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=h1) Report\n> Merging [#3739](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7a7fdf71f80452fcae064bd016f06e9a0f0f19ed&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `95.74%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3739 +/- ##\n==========================================\n- Coverage 78.27% 78.26% -0.02% \n==========================================\n Files 104 104 \n Lines 17835 17843 +8 \n==========================================\n+ Hits 13960 13964 +4 \n- Misses 3875 3879 +4 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <95.45%> (-0.09%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.89% <95.65%> (-0.08%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.48% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3739/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.77% <100.00%> (-0.44%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=footer). Last update [7a7fdf7...0de2191](https://codecov.io/gh/huggingface/transformers/pull/3739?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"In general, I think we have to be careful with a distinction between the different special token ids. \r\nI can see why `decoder_input_token_id` looks weird at first glance, but in #3225 and #3140, we decided to add it to keep Bart's good performance on summarization. \r\n\r\nI don't really see the need to overwrite `input_ids` with `prefix_ids` - do we have to do this? \r\nI would be ok with adding an optional `decoder_input_ids` that would be used for encoder-decoder models only. \r\n\r\nThere are quite a few hidden hacks in `generation()` (like the `force_token_id` fn) that look quite strange. If we replace / delete them, we should always check that the hard-coded integration tests don't fail (running the tests with `Run_SLOW=1` as mentioned above.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,594 | 1,594 | MEMBER | null | This PR introduces two small changes in the way model.generate() works.
Previously, the function took in an input_ids argument which had different behaviors in the seq2seq and language model settings: in language modeling, input_ids could be used to provide a prefix for the generation, while in seq2seq, input_ids represented the encoder input and the generation prefix was automatically initialized to a batch with one time step filled with the [BOS] token.
Conceptually, this feels a little awkward, as a language model and the decoder of a seq2seq model should really behave similarly (the latter just has added conditioning). And more importantly, there was no way to provide both the encoder input_ids and a generation prefix in the seq2seq model.
I've added a prefix_ids argument to fix that. The model will still default to using input_ids as a prefix in the language model setting so as not to break current use cases, but otherwise the model works with prefix_ids and initializes it similarly for the LM and seq2seq settings.
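A rough sketch of the resulting seq2seq call (prefix_ids is the argument added in this PR, so this is proposed rather than released API):

```python
# Hedged example: encoder input and decoder prefix are now passed separately.
generated = model.generate(
    input_ids=encoder_input_ids,    # encoder input (seq2seq conditioning)
    prefix_ids=decoder_prefix_ids,  # generation prefix for the decoder
    max_length=64,
)
```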
The second smaller change is the initialization of the past variable in generate_beam_search and generate_no_beam_search: it is now initialized to the form it will have in later generation steps, so we can dispense with the first-step tests in the prepare_inputs_for_generation functions in modeling_t5.py and modeling_bart.py.
(Next time I'll do two separate PRs, as suggested by @sshleifer :) )
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3739/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3739",
"html_url": "https://github.com/huggingface/transformers/pull/3739",
"diff_url": "https://github.com/huggingface/transformers/pull/3739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3739.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3738/comments | https://api.github.com/repos/huggingface/transformers/issues/3738/events | https://github.com/huggingface/transformers/pull/3738 | 597,977,636 | MDExOlB1bGxSZXF1ZXN0NDAxOTc4MDcw | 3,738 | [docs] The use of `do_lower_case` in scripts is on its way to depreca… | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,586 | 1,586 | MEMBER | null | …tion
Will close #3633
Will close #3584
Will close #3491 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3738/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3738/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3738",
"html_url": "https://github.com/huggingface/transformers/pull/3738",
"diff_url": "https://github.com/huggingface/transformers/pull/3738.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3738.patch",
"merged_at": 1586536445000
} |
https://api.github.com/repos/huggingface/transformers/issues/3737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3737/comments | https://api.github.com/repos/huggingface/transformers/issues/3737/events | https://github.com/huggingface/transformers/issues/3737 | 597,939,058 | MDU6SXNzdWU1OTc5MzkwNTg= | 3,737 | Seq2Seq: decoder hidden_states shape not tested | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"same for `T5`. I thought a bit about how to correctly sort the `encoder_hidden_states`, `decoder_hidden_states` output (if the user wants to output it). I think it's not easy at all to implement this cleanly...namedtuples would make this so much easier, so maybe just wait until we add those? ",
"Makes sense. \r\nAre namedtuples on anybody's roadmap?",
"Not really as far as I know! It would be a big change (lots of code has to be adapted). In my opinion it would be best to start with the outer-most outputs (the returned `outputs` of the models) and see how that goes:\r\n- How easy is it to have everything backwards compatible?\r\n- How much cleaner does the code get in `generate()` this way?\r\n- How much code has to be added for named tuples? ",
"(Just assigning myself so that I can easily find our discussion again)",
"Hi @patrickvonplaten,\r\nfor a distillation purpose of T5, I want to return the `deocder_hidden_states`. using this:\r\n\r\n```\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\ntokenizer = T5Tokenizer.from_pretrained(\"t5-small\")\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\n# training\r\ninput_ids = tokenizer(\"The <extra_id_0> walks in <extra_id_1> park\", return_tensors=\"pt\").input_ids\r\nlabels = tokenizer(\"<extra_id_0> cute dog <extra_id_1> the <extra_id_2>\", return_tensors=\"pt\").input_ids\r\noutputs = model(input_ids=input_ids, labels=labels)\r\nloss = outputs.loss\r\nlogits = outputs.logits\r\nprint(outputs.decoder_hidden_states) # None!!\r\n```\r\n",
"Can you add `output_hidden_states=True` to `model(...)`? "
] | 1,586 | 1,651 | 1,591 | CONTRIBUTOR | null | The line linked below points at encoder hidden states for Bart:
https://github.com/huggingface/transformers/blob/2ee410560e45ae3c619dc1e0b0fc4d257c48e18a/tests/test_modeling_common.py#L464 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3737/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3736/comments | https://api.github.com/repos/huggingface/transformers/issues/3736/events | https://github.com/huggingface/transformers/pull/3736 | 597,932,963 | MDExOlB1bGxSZXF1ZXN0NDAxOTQyOTgz | 3,736 | updated dutch squad model card | {
"login": "borhenryk",
"id": 35457598,
"node_id": "MDQ6VXNlcjM1NDU3NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35457598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borhenryk",
"html_url": "https://github.com/borhenryk",
"followers_url": "https://api.github.com/users/borhenryk/followers",
"following_url": "https://api.github.com/users/borhenryk/following{/other_user}",
"gists_url": "https://api.github.com/users/borhenryk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borhenryk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borhenryk/subscriptions",
"organizations_url": "https://api.github.com/users/borhenryk/orgs",
"repos_url": "https://api.github.com/users/borhenryk/repos",
"events_url": "https://api.github.com/users/borhenryk/events{/privacy}",
"received_events_url": "https://api.github.com/users/borhenryk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,586 | 1,586 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3736",
"html_url": "https://github.com/huggingface/transformers/pull/3736",
"diff_url": "https://github.com/huggingface/transformers/pull/3736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3736.patch",
"merged_at": 1586601900000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3735/comments | https://api.github.com/repos/huggingface/transformers/issues/3735/events | https://github.com/huggingface/transformers/issues/3735 | 597,930,505 | MDU6SXNzdWU1OTc5MzA1MDU= | 3,735 | Pipeline for text generation | {
"login": "r0levrai",
"id": 22660388,
"node_id": "MDQ6VXNlcjIyNjYwMzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/22660388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/r0levrai",
"html_url": "https://github.com/r0levrai",
"followers_url": "https://api.github.com/users/r0levrai/followers",
"following_url": "https://api.github.com/users/r0levrai/following{/other_user}",
"gists_url": "https://api.github.com/users/r0levrai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/r0levrai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/r0levrai/subscriptions",
"organizations_url": "https://api.github.com/users/r0levrai/orgs",
"repos_url": "https://api.github.com/users/r0levrai/repos",
"events_url": "https://api.github.com/users/r0levrai/events{/privacy}",
"received_events_url": "https://api.github.com/users/r0levrai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"There is currently no text generation pipeline, but the `generate` method on both PyTorch/TensorFlow models is here for that purpose!\r\n\r\n[Here's](https://huggingface.co/transformers/usage.html#causal-language-modeling) an example using that method for generating from left context :).",
"You are right, I thought there was a great deal of abstraction done by the `run_generation.py` script given its length, but it turns out except for a few things, it's just interfacing with the CLI. I will be fine with the vanilla `generate` function!\r\n\r\nThanks for the rapid answer :)\r\n\r\n---\r\n\r\nFor future reference (hey there future me!), \"a few things\" are:\r\n* tokenizer encoding+decoding\r\n* careful seed initialization\r\n* moving everything to cuda device\r\n* stop token handling\r\n* nice logging\r\n* (padding text for some models)\r\n* (capping generation length)",
"@LysandreJik Is there a way to generate text given context in a random position? For example, given a keyword 'window' I'd like to generate text that contains 'window', doesn't matter where. For example:\r\n\r\n* she was looking out the window\r\n* cleaning that window was a difficult task"
] | 1,586 | 1,605 | 1,586 | NONE | null | Hello,
* pipelines' concise syntax and features are really nice, but there is no pipeline for text generation from left context
* `examples/run_generation.py`'s concise syntax (and some model-specific preprocessing) is really nice, but it is made for use via the CLI and not from code (a rough sketch of the code-level alternative follows below)
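For reference, generation from left context without a pipeline currently looks roughly like this (the model choice and prompt are just illustrative):

```python
# Minimal generate()-based text generation, the kind of code a
# text-generation pipeline would presumably wrap.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer.encode("The weather today is", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=30, do_sample=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```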
Any chance we see a text generation pipeline (optionally with some of the `run_generation.py` features) coming to 🤗 Transformers ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3735/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3734/comments | https://api.github.com/repos/huggingface/transformers/issues/3734/events | https://github.com/huggingface/transformers/pull/3734 | 597,928,533 | MDExOlB1bGxSZXF1ZXN0NDAxOTM5NjE5 | 3,734 | [Config, Caching] Remove `output_past` everywhere and replace by `use_cache` argument | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=h1) Report\n> Merging [#3734](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7972a4019f4bc9f85fd358f42249b90f9cd27c68&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `89.47%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3734 +/- ##\n=======================================\n Coverage 78.26% 78.26% \n=======================================\n Files 106 106 \n Lines 17928 17956 +28 \n=======================================\n+ Hits 14031 14054 +23 \n- Misses 3897 3902 +5 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.03% <80.00%> (-1.22%)` | :arrow_down: |\n| [src/transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.93% <86.66%> (-0.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.43% <86.66%> (-0.75%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `98.40% <87.50%> (-1.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <90.90%> (-0.13%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <91.66%> (+0.16%)` | :arrow_up: |\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.01% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.48% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.74% <100.00%> (+0.53%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `89.08% <100.00%> (+0.02%)` | :arrow_up: |\n| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/3734/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=footer). Last update [7972a40...0d27b0e](https://codecov.io/gh/huggingface/transformers/pull/3734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"IMO, it's not a configuration option because it's either not configurable or does not need to be configured at __init__.\r\n\r\n\r\nWhether a model **implements** caching could be a property of the model, but the config should have nothing to do with that since it can't effect it.\r\n\r\nIf a model implements caching, you nearly always want to use it to speed up generation, but never want to use it during training. So if you generate during your validation step, should you reload your model with a new config? I think not.\r\n",
"I understand. I still feel that it should not be an input to the forward method which currently accepts tensors of inputs to be fed to the model. I don't think having a boolean flag here would make sense.\r\n\r\nI see it exactly as `output_attentions` and `output_hidden_states`, which are configurable with the configuration and are not boolean flags passed during the forward method. How is that different? ",
"The only difference is that there is a common use case where you want the flag to be true during validation and false during training.",
"> I understand. I still feel that it should not be an input to the forward method which currently accepts tensors of inputs to be fed to the model. I don't think having a boolean flag here would make sense.\r\n> \r\n> I see it exactly as `output_attentions` and `output_hidden_states`, which are configurable with the configuration and are not boolean flags passed during the forward method. How is that different?\r\n\r\nSorry, I probably should have waited until you answered on the conversation there was about `use_cache` vs `config.output_past` in #3682. \r\n\r\nI also noticed during the PR that the `forward()` function expects usually only tensors so that the flag does not fit in. \r\n\r\nI'm still in favor of using `use_cache` as an argument though because it gives the user more control over the memory vs. speed trade-off by setting the `use_cache` flag. As @sshleifer said that's especially important when you don't want to use caching during training, but want to speed it up during validation. When having `output_past` in this case, the user would have to change the config for every Attention Layer in the decoder. \r\nAnother smaller reason for me to favor `use_cache` is that a lot of models cannot or do not output the `past` key value states. \r\n\r\nBut happy to change it back or maybe there is another way that would give the user more control and not put it in the `forward` signature? @thomwolf ",
"Why is it important that `forward` only take tensors/None?",
"These are good points.\r\n\r\nI think that as @LysandreJik mentioned `use_cache` indeed falls in the same category as `output_attentions` and `output_hidden_states`, i.e. parameters which modify the model behavior without changing its architecture it-self (i.e. can be changed without re-loading/re-instantiating the model).\r\n\r\nI also agree with @sshleifer that people may want to alter this behavior between training and testing but I think they may also not want to have to specify this behavior each time they run the forward pass.\r\n\r\nOverall, I think this is actually also the same category as the parameters we have in the `.generate()` method.\r\n\r\nSo I think the behavior we designed for `.generate()` with @patrickvonplaten could be the way to go here:\r\n- have default parameter values in the configuration (that can thus be tweaked at model initialization as well), and\r\n- allow the user to override these defaults parameters at run time with values provided to the `forward()` pass.\r\n\r\nIt would actually be great to have this behavior implemented for `output_attentions` and `output_hidden_states` as well (this wouldn't be a breaking change).\r\n\r\nWhat do you think @LysandreJik @patrickvonplaten @sshleifer @julien-c ?",
"I like @thomwolf suggestion much better than the status quo.\r\n\r\nI slightly prefer no mention of `use_cache` on config for a few minor reasons, but I don't feel too strongly:\r\n1. to avoid the proliferation of `if x is None: x = config.x`\r\n2. logic can then be controlled in `prepare_inputs_for_generation` and tracked with version control.\r\n3. configs are long and getting longer and maintaining them is costlier than maintaining tested code.\r\n\r\nThese arguments also apply to `output_attentions` and `output_hidden_states`, but there we have more of a backwards compatibility issue.\r\n",
"Ok those are fair points. I agree with @thomwolf's proposition as well. Adding those arguments to the forward's signature means that from now on we'll control the behaviour of models according to these arguments, and not what's written in the configuration.\r\n\r\nI'm okay with this but it is a somewhat of a big change in the API, which we should document.",
"Is this good for merge? \r\n\r\nI can merge this and open a new PR regarding adding `output_hidden_states` and `output_attentions` to the models signature. I added `use_cache` to the docs when relevant. Should I add it anywhere else in the docs? @LysandreJik "
] | 1,586 | 1,586 | 1,586 | MEMBER | null | The `config.output_past` variable is removed and replaced by a function argument `use_cache`.
The reasons for this were explained in PR: https://github.com/huggingface/transformers/pull/3682
Affected models are:
T5, Bart, GPT2, XLNet, CTRL, TFGPT2, TFCTRL, TFXLNET
It is made sure that the change **does not break backwards compatibility** by setting `use_cache=True` by default since `config.output_past` was set to `True` by default before.
This can also be checked by noting that none of the tests had to be changed (except T5's, where the previous PR https://github.com/huggingface/transformers/pull/3682 for GPT2 and CTRL had broken the backward compatibility of T5's default output length).
I made the behavior of using the `past` variable the same in GPT2, T5, and CTRL. The logic is the following:
If the user decides to use `past`, the `past` key-value states are cached and output.
The user can then optionally input only the last `input_ids` instead of all previous ones.
If the user decides to use `past`, the `last_hidden_states` output is reduced to only the last tensor instead of matching the length of `input_ids` (this is the same as before and cannot really be changed anyway: when caching keys and values, earlier outputs can no longer be computed, and skipping them is exactly what improves speed).
It is made sure that if `use_cache` is False, nothing is cached! This means that a lot of memory can be saved when the user needs memory efficiency (this was not the case before).
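A rough usage sketch of the flag for a GPT-2-style model (the signature is as proposed in this PR; the variable names are illustrative):

```python
# Hedged sketch: cache key/value states, then feed only the newest token.
outputs = model(input_ids, use_cache=True)
logits, past = outputs[0], outputs[1]
next_outputs = model(next_token_ids, past=past, use_cache=True)
```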
All of this should be documented well in each of the models' docstrings - let me know if something is badly documented and I'll change it :-)
Would be nice if you could take a look @sshleifer @thomwolf @LysandreJik @yjernite
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3734",
"html_url": "https://github.com/huggingface/transformers/pull/3734",
"diff_url": "https://github.com/huggingface/transformers/pull/3734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3734.patch",
"merged_at": 1586889629000
} |
https://api.github.com/repos/huggingface/transformers/issues/3733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3733/comments | https://api.github.com/repos/huggingface/transformers/issues/3733/events | https://github.com/huggingface/transformers/issues/3733 | 597,881,207 | MDU6SXNzdWU1OTc4ODEyMDc= | 3,733 | How i take an OpenAIGPTDoubleHeadsModel from run_language_modeling.py script? | {
"login": "nikkon3",
"id": 41228217,
"node_id": "MDQ6VXNlcjQxMjI4MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/41228217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikkon3",
"html_url": "https://github.com/nikkon3",
"followers_url": "https://api.github.com/users/nikkon3/followers",
"following_url": "https://api.github.com/users/nikkon3/following{/other_user}",
"gists_url": "https://api.github.com/users/nikkon3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikkon3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikkon3/subscriptions",
"organizations_url": "https://api.github.com/users/nikkon3/orgs",
"repos_url": "https://api.github.com/users/nikkon3/repos",
"events_url": "https://api.github.com/users/nikkon3/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikkon3/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What are you trying to do exactly? If using our scripts, you would usually pre-train the transformer model (without the heads, so it doesn't make sense to use a double heads model), and you would then fine-tune the model with the heads on a specific dataset. \r\n\r\nIf you can explain a bit more what you're trying to do then I could guide you better towards the appropriate scripts. ",
"@LysandreJik I am new to all these , just i try to do things to learn. I saw this script: https://github.com/huggingface/transfer-learning-conv-ai/blob/master/train.py \r\nand i try to do something similar for another language(non english). \r\nSo my idea was, to start with pre-training a gpt2 with run_language_modeling, from scratch in a new language and after fine-tune it in dialogue.\r\nFrom your answer, i think if i understood, first i must pretrain the gpt2 and after to fine-tune it with the heads in a spesific dataset as dialogue in my case.\r\nBut how after the pre-training i use the DoubleHeadsModel?\r\n",
"That's right, that's how I would do it. I would use the `run_language_modeling.py` script with GPT-2 on your data (beware that pre-training requires a lot of data, and a lot of compute).\r\n\r\nOnce your model is trained on your *big* dataset, then you can use the double heads model and fine-tune it to dialog. We don't have a script for this now, but the link you shared alongside the [blog post](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313) detail how to do it!\r\n\r\nIn order to do what I just mentioned, you would therefore need a large corpus for pre-training, and a dialog dataset in the specific language for fine-tuning.",
"thanks @LysandreJik. You cleared a litlle my mind at least for the pre-train. \r\nA small blind spot i have only in how i will pass to the double heads after the pre-training(in blog i don't think they say something for that).\r\nHope i will find an answer and to that when i ll be in that spot.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,591 | 1,591 | NONE | null | I am training a GPT-2-type model from scratch with run_language_modeling.py.
But I want to use an OpenAIGPTDoubleHeadsModel as my model.
My config.json is below. What should I change?
config = {
"architectures": [
"gpt2"
],
"model_name_or_path": None ,
"model_type": "gpt2",
"vocab_size":5000,
"n_positions":1024,
"n_ctx":1024,
"n_embd":768,
"n_layer":6,
"n_head":12,
"activation_function":'gelu_new',
"resid_pdrop":0.1,
"embd_pdrop":0.1,
"attn_pdrop":0.1,
"layer_norm_epsilon":1e-05,
"initializer_range": 0.02,
"summary_type":"cls_index",
"summary_use_proj":True,
"summary_activation":None,
"summary_proj_to_labels":True,
"summary_first_dropout":0.1,
"bos_token_id":4999,
"eos_token_id":4999,
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3733/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3732/comments | https://api.github.com/repos/huggingface/transformers/issues/3732/events | https://github.com/huggingface/transformers/issues/3732 | 597,808,226 | MDU6SXNzdWU1OTc4MDgyMjY= | 3,732 | Fine tuning XLMRoberta for Question Answering | {
"login": "wasiahmad",
"id": 17520413,
"node_id": "MDQ6VXNlcjE3NTIwNDEz",
"avatar_url": "https://avatars.githubusercontent.com/u/17520413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wasiahmad",
"html_url": "https://github.com/wasiahmad",
"followers_url": "https://api.github.com/users/wasiahmad/followers",
"following_url": "https://api.github.com/users/wasiahmad/following{/other_user}",
"gists_url": "https://api.github.com/users/wasiahmad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wasiahmad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wasiahmad/subscriptions",
"organizations_url": "https://api.github.com/users/wasiahmad/orgs",
"repos_url": "https://api.github.com/users/wasiahmad/repos",
"events_url": "https://api.github.com/users/wasiahmad/events{/privacy}",
"received_events_url": "https://api.github.com/users/wasiahmad/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,586 | 1,586 | NONE | null | # ❓ Questions & Help
## Details
I am trying to fine-tune XLM Roberta for SQuAD using the run_squad.py script available in examples. I have simply written an `XLMRobertaForQuestionAnswering` class as suggested at https://github.com/huggingface/transformers/issues/3694. However, the performance is extremely poor. I am wondering: do I need to do anything special when preprocessing the SQuAD dataset for XLMRoberta?
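For reference, a sketch of the kind of wrapper class described above, mirroring how the other XLM-R heads subclass their RoBERTa counterparts (the exact class body here is an assumption):

```python
# Hypothetical wrapper, following the subclassing pattern used by the
# existing XLM-RoBERTa head classes.
from transformers import RobertaForQuestionAnswering, XLMRobertaConfig

class XLMRobertaForQuestionAnswering(RobertaForQuestionAnswering):
    config_class = XLMRobertaConfig
```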
I am using all the default hyper-parameters provided in the example script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3732/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3731/comments | https://api.github.com/repos/huggingface/transformers/issues/3731/events | https://github.com/huggingface/transformers/issues/3731 | 597,777,813 | MDU6SXNzdWU1OTc3Nzc4MTM= | 3,731 | Loading pipeline("summarization") failed | {
"login": "asymmetric-supernova",
"id": 17072697,
"node_id": "MDQ6VXNlcjE3MDcyNjk3",
"avatar_url": "https://avatars.githubusercontent.com/u/17072697?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asymmetric-supernova",
"html_url": "https://github.com/asymmetric-supernova",
"followers_url": "https://api.github.com/users/asymmetric-supernova/followers",
"following_url": "https://api.github.com/users/asymmetric-supernova/following{/other_user}",
"gists_url": "https://api.github.com/users/asymmetric-supernova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asymmetric-supernova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asymmetric-supernova/subscriptions",
"organizations_url": "https://api.github.com/users/asymmetric-supernova/orgs",
"repos_url": "https://api.github.com/users/asymmetric-supernova/repos",
"events_url": "https://api.github.com/users/asymmetric-supernova/events{/privacy}",
"received_events_url": "https://api.github.com/users/asymmetric-supernova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"The default summarization pipeline doesn't have support for TF unfortunately, but we should probably add an explicit error message @sshleifer ",
"+1",
"You can use `T5's` TF summarization though, like:\r\n`pipeline(\"summarization\", model=\"t5-base\", framework=\"tf\")`",
"I think the error has nothing to do with \"summarization\" or \"Bart\". I think the problem is that you were calling a pytorch pipeline without having pytorch installed. If this happens a weird error message like the one above is thrown out. We should probably add a better error message here:\r\nhttps://github.com/huggingface/transformers/blob/700ccf6e35616fcbee59de81edd60cec9e14fb6b/src/transformers/pipelines.py#L1567\r\nSomething along the lines: `if modelclass in None: <good_error_message>`",
"To run pipelines in TF, the argument `framework=\"tf\"` should be added to `pipeline()`",
"I don't think that's right, @patrickvonplaten.\r\n\r\nPipelines use TF automatically if that's what you have instead of PyTorch: ie it does `framework = \"pt\" if is_torch_available() else \"tf\"`\r\n\r\nHowever, as I was saying, the **default** (bart-based) summarization pipeline doesn't have a TF model, see line 1447:\r\n```python\r\n\"default\": {\r\n \"model\": {\"pt\": \"bart-large-cnn\", \"tf\": None},\r\n}\r\n```\r\n\r\n",
"Sorry, you are 100 % right @julien-c! \r\n\r\nI overlooked this line:\r\nhttps://github.com/huggingface/transformers/blob/700ccf6e35616fcbee59de81edd60cec9e14fb6b/src/transformers/pipelines.py#L1564\r\n\r\nSo, we do need a better error message for pipelines that have a `None` as a default model.",
"@julien-c works fine with pytorch, thanks",
"Thanks, works for me as well!\r\nAs mentioned before, the improved error message would help a lot.",
"Do you want to take a stab at a better error message for this @patrickvonplaten?"
] | 1,586 | 1,591 | 1,591 | NONE | null | Hi guys, when I try to load pipeline("summarization") I get the following error:
```
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "PATH", line 11, in <module>
summarizer = pipeline("summarization")
File "PATH\venv\lib\site-packages\transformers\pipelines.py", line 1626, in pipeline
return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs,)
File "PATH\venv\lib\site-packages\transformers\pipelines.py", line 367, in __init__
task_specific_params = self.model.config.task_specific_params
AttributeError: 'NoneType' object has no attribute 'config'
```
**Setup:**
Python: 3.7.6
transformers==2.8.0
tensorboard==2.0.2
tensorflow==2.0.0
tensorflow-estimator==2.0.1
tensorflow-hub==0.7.0
```
from transformers import pipeline
from transformers import TFAutoModelWithLMHead, AutoTokenizer
summarizer = pipeline("summarization")
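# Hedged note: the default summarization pipeline ships no TF model, so with
# only TensorFlow installed this call fails; per the comments below, a
# T5-based pipeline is one workaround:
# summarizer = pipeline("summarization", model="t5-base", framework="tf")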
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3731/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3730/comments | https://api.github.com/repos/huggingface/transformers/issues/3730/events | https://github.com/huggingface/transformers/issues/3730 | 597,775,828 | MDU6SXNzdWU1OTc3NzU4Mjg= | 3,730 | OOM error when resuming training from a checkpoint | {
"login": "timsoraro",
"id": 61194445,
"node_id": "MDQ6VXNlcjYxMTk0NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/61194445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timsoraro",
"html_url": "https://github.com/timsoraro",
"followers_url": "https://api.github.com/users/timsoraro/followers",
"following_url": "https://api.github.com/users/timsoraro/following{/other_user}",
"gists_url": "https://api.github.com/users/timsoraro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timsoraro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timsoraro/subscriptions",
"organizations_url": "https://api.github.com/users/timsoraro/orgs",
"repos_url": "https://api.github.com/users/timsoraro/repos",
"events_url": "https://api.github.com/users/timsoraro/events{/privacy}",
"received_events_url": "https://api.github.com/users/timsoraro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, unfortunately, we have no way of helping without having more information. What scripts are you using? What model? What Python version? What transformers version? \r\n\r\nIt would be wonderful if you could use the issue template and describe exactly the issue so that we may help.",
"Hi, I'm sorry. I'm using [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py). After some more experiments, I noticed as #2954 did, that the OOM error only happens when resuming on a checkpoint in multi GPU training. When resuming using a single GPU there's no error.\r\n\r\nCommand example:\r\n```\r\npython -m torch.distributed.launch --nproc_per_node 8 run_language_modeling.py --output_dir=./output/ --model_type=gpt2 --model_name_or_path=gpt2-large --do_train --train_data_file=./data/training.txt --per_gpu_train_batch_size 1 --num_train_epochs 3 --fp16\r\n```\r\n\r\nError:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 992, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 942, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_language_modeling.py\", line 428, in train\r\n optimizer.load_state_dict(torch.load(os.path.join(args.model_name_or_path, \"optimizer.pt\")))\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/serialization.py\", line 590, in load\r\n return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/serialization.py\", line 764, in _legacy_load\r\n result = unpickler.load()\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/serialization.py\", line 726, in persistent_load\r\n deserialized_objects[root_key] = restore_location(obj, location)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/serialization.py\", line 190, in default_restore_location\r\n result = fn(storage, location)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/serialization.py\", line 170, in _cuda_deserialize\r\n return storage_type(obj.size())\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/cuda/__init__.py\", line 478, in _lazy_new\r\n return super(_CudaBase, cls).__new__(cls, *args, **kwargs)\r\nRuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 31.72 GiB total capacity; 1.89 GiB already allocated; 10.88 MiB free; 1.92 GiB reserved in total by PyTorch)\r\n```\r\n\r\n@LysandreJik Can you please reopen the issue?",
"I ran into this issue as well when restarting from a checkpoint. \r\nI think this is a bug in [trainer.py](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L334) :\r\n```\r\noptimizer.load_state_dict(torch.load(os.path.join(model_path, \"optimizer.pt\")))\r\n```\r\nLoading from `optimizer.pt` causes `optimizer` to be mapped to the same device as the saved `optimizer.pt`. In this case it's always `cuda:0`(saved by local master), which puts all optimizers on gpu0, causing OOM.\r\n\r\nChanging it to \r\n```\r\noptimizer.load_state_dict(torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device))\r\n```\r\nsolved it for me.",
"This looks correct, can you open a PR?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,595 | 1,595 | NONE | null | # 🐛 Bug
A previous issue, #2954, described a memory leak when resuming training from a checkpoint. I still get an OOM error when resuming training from a checkpoint. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3730/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3729/comments | https://api.github.com/repos/huggingface/transformers/issues/3729/events | https://github.com/huggingface/transformers/pull/3729 | 597,648,707 | MDExOlB1bGxSZXF1ZXN0NDAxNzIxNDkz | 3,729 | exbert links for my albert model cards | {
"login": "elgeish",
"id": 6879673,
"node_id": "MDQ6VXNlcjY4Nzk2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgeish",
"html_url": "https://github.com/elgeish",
"followers_url": "https://api.github.com/users/elgeish/followers",
"following_url": "https://api.github.com/users/elgeish/following{/other_user}",
"gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgeish/subscriptions",
"organizations_url": "https://api.github.com/users/elgeish/orgs",
"repos_url": "https://api.github.com/users/elgeish/repos",
"events_url": "https://api.github.com/users/elgeish/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgeish/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"Hi @elgeish , you also need to add a \r\n\r\n```\r\ntags:\r\n- exbert\r\n```\r\n\r\nto the metadata block.\r\n\r\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=h1) Report\n> Merging [#3729](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce2298fb5f84a8d0d8860c15fb677b7ada07a8ad&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3729 +/- ##\n==========================================\n+ Coverage 78.18% 78.20% +0.01% \n==========================================\n Files 104 104 \n Lines 17799 17799 \n==========================================\n+ Hits 13917 13919 +2 \n+ Misses 3882 3880 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3729/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `77.22% <0.00%> (+0.21%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=footer). Last update [ce2298f...e003d19](https://codecov.io/gh/huggingface/transformers/pull/3729?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,586 | 1,587 | 1,587 | CONTRIBUTOR | null | Adding links for exbert visualization. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3729/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3729",
"html_url": "https://github.com/huggingface/transformers/pull/3729",
"diff_url": "https://github.com/huggingface/transformers/pull/3729.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3729.patch",
"merged_at": 1587394480000
} |
https://api.github.com/repos/huggingface/transformers/issues/3728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3728/comments | https://api.github.com/repos/huggingface/transformers/issues/3728/events | https://github.com/huggingface/transformers/issues/3728 | 597,576,947 | MDU6SXNzdWU1OTc1NzY5NDc= | 3,728 | Checking that the LM actually trained | {
"login": "nikkon3",
"id": 41228217,
"node_id": "MDQ6VXNlcjQxMjI4MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/41228217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikkon3",
"html_url": "https://github.com/nikkon3",
"followers_url": "https://api.github.com/users/nikkon3/followers",
"following_url": "https://api.github.com/users/nikkon3/following{/other_user}",
"gists_url": "https://api.github.com/users/nikkon3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikkon3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikkon3/subscriptions",
"organizations_url": "https://api.github.com/users/nikkon3/orgs",
"repos_url": "https://api.github.com/users/nikkon3/repos",
"events_url": "https://api.github.com/users/nikkon3/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikkon3/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes: simply `model.generate()` (not even a need for a Pipeline in that case)\r\n\r\ncc @patrickvonplaten ",
"I'd check if 'GPT2' works by sampling from a simple prompt. E.g.:\r\n\r\n```\r\noutput = model.generate(tokenizer.encode('The president', return_tensors='pt'), do_sample=True)\r\ntokenizer.decode(output[0])\r\n```\r\n",
"Thanks for clarifying! I was about to consider sending a PR for a `GenerationPipeline` under `transformers.pipeline`.",
"#### I have a branch that implements a GenerationPipeline which already works for GPT models\r\n\r\nThe initial version of `GenerationPipeline` can be found in the branch's pipelines [module](https://github.com/enzoampil/transformers/blob/generation_pipeline/src/transformers/pipelines.py), where I've registered it to the `pipeline` function using `gpt2` as the default.\r\n\r\nThe implementation is based on the approach taken in [run_generation.py](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py), which means the forward pass uses the `model.generate()` method explained by @julien-c and @patrickvonplaten above.\r\n\r\nSo far, the code above works smoothly for `open-ai` and `gpt2`. \r\n\r\nSample code:\r\n```\r\n# Pip install\r\n# If you're using Google Colab, make sure to reset runtime after installing\r\n!pip install -e git+git://github.com/enzoampil/transformers.git@generation_pipeline#egg=transformers\r\n\r\n# Pipeline uses `gpt2` by default\r\nfrom transformers import pipeline\r\ngpt = pipeline('generation', num_return_sequences=1, length=40)\r\ngpt(\"You look great\")\r\n# ['You look great, me!\" he says. \"There\\'s nothing wrong with that, it\\'s just I wanted a bit of attention so I had to go to work. I had to back down.\"\\n']\r\n\r\n```\r\n\r\nHowever, the module still doesn't work with other language models like `xlm`, `xlnet`, and `transfo-xl`.\r\n\r\nI will do a root cause analysis on this and will send a PR as soon as I get this to work on the rest of the language models that should work with `GenerationPipeline` (i.e. those runnable from `run_generation.py`).\r\n\r\nFor more details, you can check out this [colab notebook](https://colab.research.google.com/drive/1PHmYRpgzdMeSR68i4w5tPfUjlv0npCQz), which shows the gpt models working so far, and the rest of the models not working in the later sections.",
"#### [UPDATE] The issues above have been resolved and I'm in the process of sending a PR.\r\n\r\nGoogle Colab tutorial [here](https://colab.research.google.com/drive/1PHmYRpgzdMeSR68i4w5tPfUjlv0npCQz) for running `GenerationPipeline` for the following LM models:\r\n1. OpenAI GPT\r\n2. OpenAI GPT-2\r\n3. Transformer-XL\r\n4. XML\r\n5. XLNet\r\n6. T5\r\n7. CTRL\r\n",
"You're PR looks very nice so far :-) I will take a look early next week!",
"Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,591 | 1,591 | NONE | null | I have trained a GPT-2 from scratch in the way that is described in this post: https://huggingface.co/blog/how-to-train .
In step 4, where the author checks that the trained model actually works, he uses the pipeline's
"fill-mask" task, but that works only for models with a masked language modeling objective.
Is there something similar to "fill-mask" that I could use in my case? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3728/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3727/comments | https://api.github.com/repos/huggingface/transformers/issues/3727/events | https://github.com/huggingface/transformers/issues/3727 | 597,478,910 | MDU6SXNzdWU1OTc0Nzg5MTA= | 3,727 | ValueError: Cannot reshape a tensor - TFBertForSequenceClassification | {
"login": "DMells",
"id": 23553543,
"node_id": "MDQ6VXNlcjIzNTUzNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/23553543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DMells",
"html_url": "https://github.com/DMells",
"followers_url": "https://api.github.com/users/DMells/followers",
"following_url": "https://api.github.com/users/DMells/following{/other_user}",
"gists_url": "https://api.github.com/users/DMells/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DMells/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DMells/subscriptions",
"organizations_url": "https://api.github.com/users/DMells/orgs",
"repos_url": "https://api.github.com/users/DMells/repos",
"events_url": "https://api.github.com/users/DMells/events{/privacy}",
"received_events_url": "https://api.github.com/users/DMells/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | I'm building a multiclass text classification model using Keras and BERT.
To convert my inputs to the required bert format, I'm using the `encode_plus` method found in the BertTokenizer class [found here][1]
The data is a paragraph of sentences per feature, and has a single label (of 45 labels in total)
**The code to convert the inputs is :**
def create_input_array(df, tokenizer):
sentences = df.text.values
labels = df.label.values
input_ids = []
attention_masks = []
token_type_ids = []
# For every sentence...
for sent in sentences:
# `encode_plus` will:
# (1) Tokenize the sentence.
# (2) Prepend the `[CLS]` token to the start.
# (3) Append the `[SEP]` token to the end.
# (4) Map tokens to their IDs.
# (5) Pad or truncate the sentence to `max_length`
# (6) Create attention masks for [PAD] tokens.
encoded_dict = tokenizer.encode_plus(
sent, # Sentence to encode.
add_special_tokens=True, # Add '[CLS]' and '[SEP]'
max_length=128, # Pad & truncate all sentences.
pad_to_max_length=True,
return_attention_mask=True, # Construct attn. masks.
return_tensors='tf', # Return tf tensors.
)
# Add the encoded sentence to the list.
input_ids.append(encoded_dict['input_ids'])
# And its attention mask (simply differentiates padding from non-padding).
attention_masks.append(encoded_dict['attention_mask'])
token_type_ids.append(encoded_dict['token_type_ids'])
return [np.asarray(input_ids, dtype=np.int32),
np.asarray(attention_masks, dtype=np.int32),
np.asarray(token_type_ids, dtype=np.int32)]
**The model in its most basic form, which still reproduces the error:**
model = TFBertForSequenceClassification.from_pretrained(
"bert-base-uncased",
num_labels = labellen,
output_attentions = False,
output_hidden_states = False
)
**Compile and fit:**
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
model.fit(x_train, y[:100], epochs=1, batch_size=3)
**The error when I run this :**
> ValueError: Cannot reshape a tensor with 768 elements to shape
> [1,1,128,1] (128 elements) for '{{node
> tf_bert_for_sequence_classification_3/bert/embeddings/LayerNorm/Reshape}}
> = Reshape[T=DT_FLOAT, Tshape=DT_INT32](tf_bert_for_sequence_classification_3/bert/embeddings/LayerNorm/Reshape/ReadVariableOp,
> tf_bert_for_sequence_classification_3/bert/embeddings/LayerNorm/Reshape/shape)'
> with input shapes: [768], [4] and with input tensors computed as
> partial shapes: input[1] = [1,1,128,1].
I understand that BERT converts every token into a 768-dimensional vector, but that is the only knowledge I have of that particular number, so I'm stuck on how to proceed.
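For reference, here is a quick shape check on the arrays I'm passing in (just a sketch; the squeeze at the end is only a guess at a fix, based on the fact that `encode_plus` with `return_tensors='tf'` already returns tensors of shape (1, 128)):

    import numpy as np

    # inspect what create_input_array actually produces (illustrative)
    arr = np.asarray(input_ids, dtype=np.int32)
    print(arr.shape)           # comes out as (N, 1, 128) here, while the model expects (N, 128)
    arr = arr.squeeze(axis=1)  # possible fix: drop the extra axis from return_tensors='tf'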
I would also appreciate your thoughts on whether TFBertForSequenceClassification is appropriate for paragraph classification.
Many thanks.
[1]: https://huggingface.co/transformers/main_classes/tokenizer.html
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3727/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3726/comments | https://api.github.com/repos/huggingface/transformers/issues/3726/events | https://github.com/huggingface/transformers/pull/3726 | 597,472,860 | MDExOlB1bGxSZXF1ZXN0NDAxNTc5Mjgz | 3,726 | Separate input_ids and decoder_input_ids in model.generate() | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=h1) Report\n> Merging [#3726](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bc65afc4dfac3badf3de3be395d4023b44c61bdd&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `89.61%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3726 +/- ##\n==========================================\n+ Coverage 78.14% 78.17% +0.03% \n==========================================\n Files 104 104 \n Lines 17723 17799 +76 \n==========================================\n+ Hits 13849 13915 +66 \n- Misses 3874 3884 +10 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.77% <87.90%> (+1.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <96.42%> (-0.17%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.00% <100.00%> (ø)` | |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.60% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3726/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.96% <0.00%> (-0.17%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=footer). Last update [bc65afc...6a764df](https://codecov.io/gh/huggingface/transformers/pull/3726?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Can we get the prefix functionality with just one kwarg instead of all the renaming?",
"Got stuck in a merge / rebase loop, closing and starting again."
] | 1,586 | 1,586 | 1,586 | MEMBER | null | This makes the generation behavior more similar for sequence-to-sequence and language models, and allows us to initialize decoding with a prefix for the encoder-decoder setting. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3726/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3726",
"html_url": "https://github.com/huggingface/transformers/pull/3726",
"diff_url": "https://github.com/huggingface/transformers/pull/3726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3726.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3725/comments | https://api.github.com/repos/huggingface/transformers/issues/3725/events | https://github.com/huggingface/transformers/issues/3725 | 597,462,081 | MDU6SXNzdWU1OTc0NjIwODE= | 3,725 | Fine-tuning for paraphrasing tasks | {
"login": "anmoljagetia",
"id": 6505326,
"node_id": "MDQ6VXNlcjY1MDUzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6505326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anmoljagetia",
"html_url": "https://github.com/anmoljagetia",
"followers_url": "https://api.github.com/users/anmoljagetia/followers",
"following_url": "https://api.github.com/users/anmoljagetia/following{/other_user}",
"gists_url": "https://api.github.com/users/anmoljagetia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anmoljagetia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anmoljagetia/subscriptions",
"organizations_url": "https://api.github.com/users/anmoljagetia/orgs",
"repos_url": "https://api.github.com/users/anmoljagetia/repos",
"events_url": "https://api.github.com/users/anmoljagetia/events{/privacy}",
"received_events_url": "https://api.github.com/users/anmoljagetia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This might help: https://huggingface.co/transformers/usage.html#sequence-classification",
"^^ These are inference examples. Do you know how can I _retrain_ ?",
"I would stress that this topic is quite interesting and useful. A good generative model for paraphrasing may help with text classification with small datasets. Backtranslation (for example) has shown as an effective way to augment the training data and boost performance of a classifier. However, echoing the @anmoljagetia, fine-tuning on the target domain may also bee important. \r\n\r\n",
"@anmoljagetia did you find any method to retrain the model to generate paraphrase sentence?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
I asked this on SO and was downvoted, since it is considered "off-site" and against their Terms of Service. My question is fairly simple: how do I fine-tune a GPT-2 model for the paraphrasing task described in this paper: https://www.aclweb.org/anthology/D19-5623.pdf
A link to my SO question: https://stackoverflow.com/questions/61115488/how-to-fine-tune-gpt-2-for-paraphrasing?noredirect=1#comment108120354_61115488
## Details
To restate: how do I fine-tune a GPT-2 model for the paraphrasing task described in this paper: https://www.aclweb.org/anthology/D19-5623.pdf
Is there a way to achieve this with huggingface-transformers?
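To make this concrete, here is a rough sketch of the setup I have in mind (the `>>>` separator and the file format are my own assumptions, not necessarily what the paper uses):
```python
# sketch: turn paraphrase pairs into a plain-text corpus for LM fine-tuning
# (the separator and format below are assumptions, not taken from the paper)
pairs = [("How old are you?", "What is your age?")]  # placeholder data

with open("train.txt", "w") as f:
    for source, paraphrase in pairs:
        f.write(f"{source} >>> {paraphrase} <|endoftext|>\n")

# then fine-tune with the standard LM objective, e.g.:
# python run_language_modeling.py --model_type=gpt2 --model_name_or_path=gpt2 \
#     --do_train --train_data_file=train.txt --output_dir=./output/
```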
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3725/reactions",
"total_count": 7,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/3725/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3724/comments | https://api.github.com/repos/huggingface/transformers/issues/3724/events | https://github.com/huggingface/transformers/issues/3724 | 597,419,418 | MDU6SXNzdWU1OTc0MTk0MTg= | 3,724 | Has anyone used run_language_modelling.py to train a gpt 2 from scratch? | {
"login": "nikkon3",
"id": 41228217,
"node_id": "MDQ6VXNlcjQxMjI4MjE3",
"avatar_url": "https://avatars.githubusercontent.com/u/41228217?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikkon3",
"html_url": "https://github.com/nikkon3",
"followers_url": "https://api.github.com/users/nikkon3/followers",
"following_url": "https://api.github.com/users/nikkon3/following{/other_user}",
"gists_url": "https://api.github.com/users/nikkon3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikkon3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikkon3/subscriptions",
"organizations_url": "https://api.github.com/users/nikkon3/orgs",
"repos_url": "https://api.github.com/users/nikkon3/repos",
"events_url": "https://api.github.com/users/nikkon3/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikkon3/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
}
] | closed | false | null | [] | [
"Hi @nikkon3, the special tokens for `gpt2` are automatically set when you import `GPT2Tokenizer`.\r\n\r\nThe code below shows that `'<|endoftext|>'` is the special token used for BOS (beginning of sequence), EOS (end of sequence), and UNK (unknown - out of vocabulary).\r\n\r\n```\r\nfrom transformers import GPT2Tokenizer\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\ntokenizer.special_tokens_map\r\n\r\n#{'bos_token': '<|endoftext|>',\r\n# 'eos_token': '<|endoftext|>',\r\n# 'unk_token': '<|endoftext|>'}\r\n```\r\n\r\nHope this helps you out!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"you can use run_clm.py to train gpt2 from scratch in transformers 3.5.1"
] | 1,586 | 1,702 | 1,592 | NONE | null | I have read this post https://huggingface.co/blog/how-to-train and I would like to train a GPT-2 type model from scratch.
One more question that I have: will the special tokens the author uses for the tokenizer in that post be the same for a tokenizer that I will use for a GPT-2 model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3724/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3724/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3723/comments | https://api.github.com/repos/huggingface/transformers/issues/3723/events | https://github.com/huggingface/transformers/issues/3723 | 597,393,483 | MDU6SXNzdWU1OTczOTM0ODM= | 3,723 | How to get multiple answers from the context using BertForQuestionAnswering | {
"login": "MaheshChandrra",
"id": 13826929,
"node_id": "MDQ6VXNlcjEzODI2OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/13826929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshChandrra",
"html_url": "https://github.com/MaheshChandrra",
"followers_url": "https://api.github.com/users/MaheshChandrra/followers",
"following_url": "https://api.github.com/users/MaheshChandrra/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshChandrra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshChandrra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshChandrra/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshChandrra/orgs",
"repos_url": "https://api.github.com/users/MaheshChandrra/repos",
"events_url": "https://api.github.com/users/MaheshChandrra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshChandrra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"this might help https://github.com/google-research/bert/issues/657",
"Thanks @chutaklee for the quick response, but may I please know whether we can do the same with the existing pretrained BERT model,by changing any parameters anywhere,as I currently do have question-answer pairs and I'm not not training the model, so just wanted to use pre trained to get the answers.\r\n\r\nThanks in advance!!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@MaheshChandrra Going through the same problem. Did you find any solution ?",
"@subigyaup ,No luck yet!! Will do drop the fix if I find anything."
] | 1,586 | 1,598 | 1,592 | NONE | null | How do I get multiple answers from a text using **BertForQuestionAnswering**? For the question below, for example, there are two possible answers:
1. a nice puppet
2. a software engineer
**Below is the code snippet I'm using:**
```
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet.Jim Henson was a software engineer."
input_ids = tokenizer.encode(question, text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
print(answer)
# Output: 'a software engineer'
```
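For what it's worth, below is a sketch of a heuristic I experimented with on top of the code above: taking the top-k start/end positions instead of a single argmax (my own experiment, not an official API):
```
# heuristic sketch: pair up the k best start and end positions in order
k = 2
top_starts = torch.topk(start_scores, k).indices[0].sort().values
top_ends = torch.topk(end_scores, k).indices[0].sort().values
for s, e in zip(top_starts.tolist(), top_ends.tolist()):
    if s <= e:
        print(' '.join(all_tokens[s : e + 1]))
```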
Thanks in advance!! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3723/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3723/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3722/comments | https://api.github.com/repos/huggingface/transformers/issues/3722/events | https://github.com/huggingface/transformers/pull/3722 | 597,362,060 | MDExOlB1bGxSZXF1ZXN0NDAxNDg5NDk2 | 3,722 | Integrate Bert-like model on Flax runtime. | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As you said, Jax is a library that interact with numpy to provide additional features: autodiff, auto-vectorization [(**vmap**)](https://jax.readthedocs.io/en/latest/notebooks/quickstart.html#Auto-vectorization-with-vmap) and auto-parallelization [(**pmap**)](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap). \r\n\r\nJax is essentially stateless, which is reflected here through the function to differentiate (the model) doesn't holds the parameters. They have to be referenced somewhere else and feed somehow.\r\n\r\n`JaxPreTrainedModel` is introduced here mainly to handle the serialization of such model and provide conversion. Also, one specificity of Jax is many different Neural Network library are currently being implemented on top of it: \r\n\r\n- Google Flax (https://github.com/google/flax)\r\n- Google Trax (https://github.com/google/trax)\r\n- DeepMind Haiku (https://github.com/deepmind/dm-haiku)\r\n\r\nIn that aspect, @madisonmay is currently working on a [Haiku Bert integration](https://github.com/huggingface/transformers/pull/3520) in transformers. My hope it to be able to share as many things as possible between the two implementations (_but can't be sure for now_) ",
"Alright, that makes sense. Thanks for the explanation.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Unstale",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=h1) Report\n> Merging [#3722](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/60de910e6010c76c25dd0ed0999e4c69f9692371?el=desc) will **increase** coverage by `2.55%`.\n> The diff coverage is `90.11%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3722 +/- ##\n==========================================\n+ Coverage 78.32% 80.88% +2.55% \n==========================================\n Files 187 165 -22 \n Lines 37162 30383 -6779 \n==========================================\n- Hits 29107 24575 -4532 \n+ Misses 8055 5808 -2247 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.30% <ø> (-0.11%)` | :arrow_down: |\n| [src/transformers/modeling\\_flax\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X2F1dG8ucHk=) | `60.86% <60.86%> (ø)` | |\n| [src/transformers/modeling\\_flax\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X3V0aWxzLnB5) | `83.60% <83.60%> (ø)` | |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.92% <92.85%> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_flax\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X3JvYmVydGEucHk=) | `94.11% <94.11%> (ø)` | |\n| [src/transformers/modeling\\_flax\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF4X2JlcnQucHk=) | `96.50% <96.50%> (ø)` | |\n| [src/transformers/testing\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `67.66% <100.00%> (+0.38%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.40%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-65.14%)` | :arrow_down: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.11% <0.00%> (-9.71%)` | :arrow_down: |\n| ... and [156 more](https://codecov.io/gh/huggingface/transformers/pull/3722/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=footer). Last update [60de910...c0d1c81](https://codecov.io/gh/huggingface/transformers/pull/3722?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"cc @levskaya",
"It looks like a file is missing:\r\n\r\n```\r\n$ make fixup\r\n[...]\r\nChecking all models are properly tested.\r\nTraceback (most recent call last):\r\n File \"utils/check_repo.py\", line 327, in <module>\r\n check_repo_quality()\r\n File \"utils/check_repo.py\", line 321, in check_repo_quality\r\n check_all_models_are_tested()\r\n File \"utils/check_repo.py\", line 212, in check_all_models_are_tested\r\n new_failures = check_models_are_tested(module, test_file)\r\n File \"utils/check_repo.py\", line 182, in check_models_are_tested\r\n tested_models = find_tested_models(test_file)\r\n File \"utils/check_repo.py\", line 163, in find_tested_models\r\n with open(os.path.join(PATH_TO_TESTS, test_file)) as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: 'tests/test_modeling_flax_utils.py'\r\nMakefile:25: recipe for target 'extra_quality_checks' failed\r\nmake: *** [extra_quality_checks] Error 1\r\n```\r\n\r\nShouldn't the CI have caught this?",
"Looks like a problem in `make fixup`, `make quality` runs fine (and that's what the CI runs).",
"Nope, both run the same sub-target: `extra_quality_checks`\r\n\r\n```\r\n$ make quality\r\n[...]\r\npython utils/check_copies.py\r\npython utils/check_dummies.py\r\npython utils/check_repo.py\r\n2020-10-19 12:11:26.345843: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nChecking all models are properly tested.\r\nTraceback (most recent call last):\r\n File \"utils/check_repo.py\", line 327, in <module>\r\n check_repo_quality()\r\n File \"utils/check_repo.py\", line 321, in check_repo_quality\r\n check_all_models_are_tested()\r\n File \"utils/check_repo.py\", line 212, in check_all_models_are_tested\r\n new_failures = check_models_are_tested(module, test_file)\r\n File \"utils/check_repo.py\", line 182, in check_models_are_tested\r\n tested_models = find_tested_models(test_file)\r\n File \"utils/check_repo.py\", line 163, in find_tested_models\r\n with open(os.path.join(PATH_TO_TESTS, test_file)) as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: 'tests/test_modeling_flax_utils.py'\r\nMakefile:25: recipe for target 'extra_quality_checks' failed\r\nmake: *** [extra_quality_checks] Error 1\r\n```\r\n\r\nThis is with the latest master.",
"PR with fix https://github.com/huggingface/transformers/pull/7914\r\n\r\nThe question is - why CI didn't fail? It reports no problem here:\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/14040/workflows/6cd2b931-ce7e-4e99-b313-4a34326fcece/jobs/101513\r\n\r\nOnce I got this fixed, 2 more issues came up:\r\n\r\n```\r\npython utils/check_repo.py\r\n2020-10-19 12:22:10.636984: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nChecking all models are properly tested.\r\nTraceback (most recent call last):\r\n File \"utils/check_repo.py\", line 328, in <module>\r\n check_repo_quality()\r\n File \"utils/check_repo.py\", line 322, in check_repo_quality\r\n check_all_models_are_tested()\r\n File \"utils/check_repo.py\", line 217, in check_all_models_are_tested\r\n raise Exception(f\"There were {len(failures)} failures:\\n\" + \"\\n\".join(failures))\r\nException: There were 2 failures:\r\ntest_modeling_flax_bert.py should define `all_model_classes` to apply common tests to the models it tests. If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file `utils/check_repo.py`.\r\ntest_modeling_flax_roberta.py should define `all_model_classes` to apply common tests to the models it tests. If this intentional, add the test filename to `TEST_FILES_WITH_NO_COMMON_TESTS` in the file `utils/check_repo.py`.\r\nMakefile:25: recipe for target 'extra_quality_checks' failed\r\n\r\n```\r\nFixed in the same PR.\r\n"
] | 1,586 | 1,603 | 1,603 | MEMBER | null | This Pull Request attempts to bring support for the [Flax](https://github.com/google/flax) framework as part of transformers.
The main focus has been put on providing BERT-like models, principally by making it possible to load PyTorch checkpoints and do the (few) necessary conversions directly on the fly. It also supports providing a **msgpack**-formatted file from Flax.
`save_pretrained` will save the model in **msgpack** format to avoid a dependency on torch inside Jax code.
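A rough sketch of the intended usage (class and argument names follow this PR and may still change):
```python
# illustrative only; assumes the Flax classes land under these names
from transformers import BertTokenizer, FlaxBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = FlaxBertModel.from_pretrained("bert-base-cased")  # PyTorch checkpoint converted on the fly

inputs = tokenizer("Hello, world!", return_tensors="np")  # Flax/Jax models consume numpy arrays
outputs = model(**inputs)

model.save_pretrained("./bert-flax")  # serializes to msgpack, without a torch dependency
```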
**Targeted models:**
- [x] Bert
- [x] RoBERTa
- [ ] DistilBERT
- [ ] DistilRoBERTa
**If not too hard**
- [ ] CamemBERT
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3722/reactions",
"total_count": 14,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 5,
"confused": 0,
"heart": 7,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3722/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3722",
"html_url": "https://github.com/huggingface/transformers/pull/3722",
"diff_url": "https://github.com/huggingface/transformers/pull/3722.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3722.patch",
"merged_at": 1603115742000
} |
https://api.github.com/repos/huggingface/transformers/issues/3721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3721/comments | https://api.github.com/repos/huggingface/transformers/issues/3721/events | https://github.com/huggingface/transformers/issues/3721 | 597,230,864 | MDU6SXNzdWU1OTcyMzA4NjQ= | 3,721 | DistributedSampler can't shuffle the dataset | {
"login": "elk-cloner",
"id": 5828101,
"node_id": "MDQ6VXNlcjU4MjgxMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5828101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elk-cloner",
"html_url": "https://github.com/elk-cloner",
"followers_url": "https://api.github.com/users/elk-cloner/followers",
"following_url": "https://api.github.com/users/elk-cloner/following{/other_user}",
"gists_url": "https://api.github.com/users/elk-cloner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elk-cloner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elk-cloner/subscriptions",
"organizations_url": "https://api.github.com/users/elk-cloner/orgs",
"repos_url": "https://api.github.com/users/elk-cloner/repos",
"events_url": "https://api.github.com/users/elk-cloner/events{/privacy}",
"received_events_url": "https://api.github.com/users/elk-cloner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you are right",
"Isn't there the same issue in other places?\r\nE.g. in trainer.py: https://github.com/huggingface/transformers/blob/97a375484c618496691982f62518130f294bb9a8/src/transformers/trainer.py#L305-L307",
"I forgot to re-add this in Trainer when merging #3800 \r\n\r\nIt's on my todo-list, but feel free to open a PR if you can do it faster than I can",
"Great. Personally I've not yet upgraded to the newer version with trainer.py, so I'll leave it for you, thanks."
] | 1,586 | 1,588 | 1,586 | CONTRIBUTOR | null | # 🐛 Bug
## Information
I'm trying to fine-tune a BERT model using ```run_language_modeling.py```.
The language I am using the model on is Persian:
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
But according to this [issue](https://github.com/pytorch/pytorch/issues/31771) there is a bug in ```torch.utils.data.distributed.DistributedSampler```, so that across epochs the shuffling operation doesn't work properly (in fact, it doesn't reshuffle at all).
To solve this problem, following the official PyTorch example [here](https://github.com/pytorch/examples/blob/ad775ace1b9db09146cdd0724ce9195f7f863fff/imagenet/main.py#L238), we should add ```train_sampler.set_epoch(epoch)``` before each new epoch at this [line](https://github.com/huggingface/transformers/blob/f8208fa456039b46873a2e497b6318d30a4fc84e/examples/run_language_modeling.py#L322)
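A minimal sketch of the fix inside the training loop (names follow the script loosely):
```python
# in train() of run_language_modeling.py (sketch; variable names approximate)
for epoch in train_iterator:
    if args.local_rank != -1:
        # reseed the DistributedSampler so each epoch gets a fresh shuffle
        train_sampler.set_epoch(epoch)
    for step, batch in enumerate(epoch_iterator):
        ...  # training step as before
```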
## To reproduce
Steps to reproduce the behavior:
1. Compare batches across different epochs, as in the [issue](https://github.com/pytorch/pytorch/issues/31771) mentioned above; see the sketch below.
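A sketch of such a comparison (assumes the distributed process group is already initialized):
```python
# print the first few token ids each epoch; without set_epoch(epoch)
# the order comes out identical every time
for epoch in range(2):
    first_batch = next(iter(train_dataloader))
    print(epoch, first_batch[0][:8])
```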
## Expected behavior
Shuffling should produce a different batch order in every epoch.
## Environment info
- `transformers` version: transformers==2.8.0
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): torch==1.4.0 (Yes)
- Tensorflow version (GPU?): tensorflow-gpu==2.1.0 (Yes)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: distributed
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3721/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3721/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3720/comments | https://api.github.com/repos/huggingface/transformers/issues/3720/events | https://github.com/huggingface/transformers/issues/3720 | 597,215,669 | MDU6SXNzdWU1OTcyMTU2Njk= | 3,720 | Disable @torch.no_grad() for model.generate() ? | {
"login": "Laksh1997",
"id": 59830552,
"node_id": "MDQ6VXNlcjU5ODMwNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/59830552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laksh1997",
"html_url": "https://github.com/Laksh1997",
"followers_url": "https://api.github.com/users/Laksh1997/followers",
"following_url": "https://api.github.com/users/Laksh1997/following{/other_user}",
"gists_url": "https://api.github.com/users/Laksh1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laksh1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laksh1997/subscriptions",
"organizations_url": "https://api.github.com/users/Laksh1997/orgs",
"repos_url": "https://api.github.com/users/Laksh1997/repos",
"events_url": "https://api.github.com/users/Laksh1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laksh1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"At the moment the only solution seems to be copying and pasting the entire generation code, as well as making a few changes that comes along with it, to avoid this issue.",
"One solution I propose is to add an argument `with_grad` which defaults to False.\r\nThen, add this as the first line in the generate code:\r\n```\r\ndef generate(...):\r\n torch.set_grad_enabled(with_grad)\r\n ...\r\n```\r\n\r\nThis will be backward-compatible.",
"Being able to back-prop through the `generate()` fn would require a lot of changes in my opinion. Not sure whether we plan on doing this any time soon. If you find a good way, feel free to open a PR though :-) ",
"Hi Patrick, yes I understand it's complicated. \r\n\r\nHere is a snippet that explains how it may work:\r\n\r\n```\r\nimport torch\r\nimport torch.distributions as dist\r\n\r\ndef generate_and_trace_log_probs(\r\nmodel, batch_size=32, max_len=100, top_k=0, top_p=1.0, bos_id=1, eos_id=2\r\n):\r\n\r\n\tinitial_pool = torch.full(\r\n\t size=(batch_size, 1),\r\n\t fill_value=bos_id,\r\n\t dtype=torch.long,\r\n\t device=next(model.parameters()).device,\r\n\t)\r\n\tpast_tokens = initial_pool\r\n\tcurrent_tokens = initial_pool\r\n\tlog_probs = []\r\n\tpast_attention_computation = None\r\n\r\n\tfor i in range(max_len - 1):\r\n\r\n\t # Forward prop through model\r\n\t outputs = model(\r\n\t input_ids=current_tokens, past=past_attention_computation\r\n\t )\r\n\r\n\t # Extract logits for sampling next tokens\r\n\t logits = outputs[0]\r\n\r\n\t # Top-p and/or top-k filtering\r\n\t if top_k > 0 or top_p < 1.0:\r\n\t logits = top_k_top_p_filtering(\r\n\t logits.squeeze(1), top_k=top_k, top_p=top_p, min_tokens_to_keep=1\r\n\t ).unsqueeze(1)\r\n\r\n\t # Extract attention computations to cache\r\n\t past_attention_computation = outputs[1]\r\n\r\n\t # Sample logits\r\n\t catdist = dist.Categorical(logits=logits)\r\n\t next_tokens = catdist.sample()\r\n\r\n\t # Compute and store log probs for REINFORCE\r\n\t log_prob = catdist.log_prob(next_tokens)\r\n\t log_probs.append(log_prob)\r\n\r\n\t # Update input into LM\r\n\t current_tokens = next_tokens\r\n\r\n\t # Store tokens for reward computation\r\n\t past_tokens = torch.cat([past_tokens, current_tokens.detach()], dim=-1)\r\n\r\n\t # Check if all examples have had an EOS token - if so, break\r\n\t if past_tokens.eq(eos_id).any(dim=-1).all():\r\n\t break\r\n\r\n\tlog_probs = torch.cat(log_probs, dim=-1)\r\n\r\n\t# For tokens that came after the EOS token, mask their log prob\r\n\tfor idx, ex in enumerate(past_tokens):\r\n\t eos_idx = torch.where(ex.eq(eos_id))[0].min()\r\n\t log_probs[idx, eos_idx + 1 :] = -1e4\r\n\r\n\treturn log_probs, past_tokens\r\n\r\n\r\ndef top_k_top_p_filtering(\r\n logits: torch.Tensor,\r\n top_k: int = 50,\r\n top_p: float = 0.95,\r\n min_tokens_to_keep=1,\r\n filter_value=-float(\"Inf\"),\r\n):\r\n\t\"\"\"Add torch.no_grad() for steps that unnecessarily trace gradients\"\"\"\r\n if top_k > 0:\r\n with torch.no_grad():\r\n top_k = min(max(top_k, min_tokens_to_keep), logits.size(-1)) # safety check\r\n indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]\r\n logits[indices_to_remove] = filter_value\r\n\r\n if top_p < 1.0:\r\n with torch.no_grad():\r\n sorted_logits, sorted_indices = torch.sort(logits, descending=True)\r\n cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)\r\n\r\n # Remove tokens with cumulative probs above threshold (token with 0 kept)\r\n sorted_indices_to_remove = cumulative_probs > top_p\r\n if min_tokens_to_keep > 1:\r\n # Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)\r\n sorted_indices_to_remove[..., :min_tokens_to_keep] = 0\r\n # Shift the indices to the right to keep also the first token above the threshold\r\n sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[\r\n ..., :-1\r\n ].clone()\r\n sorted_indices_to_remove[..., 0] = 0\r\n\r\n # scatter sorted tensors to original indexing\r\n indices_to_remove = sorted_indices_to_remove.scatter(\r\n 1, sorted_indices, sorted_indices_to_remove\r\n )\r\n logits[indices_to_remove] = filter_value\r\n\r\n return logits\r\n```",
"@Laksh1997 - thanks for the code snippet :-) If you think you are able to make a PR that can pass the tests, I think we would be more than happy to add this to the lib!",
"Okay, will try...",
"@patrickvonplaten Have edited the code (only had to make a few changes to enable this capability!) and ran the tests (369 pass, 808 skip, 10 warnings).\r\n\r\nI'm trying to push a new branch but getting access denied.",
"@patrickvonplaten that's my other account ...",
"I'm reading the instructions now on how to contribute ...",
"Done a PR... @patrickvonplaten ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"One could use `model.greedy_search` if they wan't to backpropogate through the generation process. This worked for me.",
 `greedy_search`">
"> `greedy_search`\r\n\r\n`model.greedy_search` is not working correctly, at least for T5.\r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('t5-small')\r\ntokenizer = AutoTokenizer.from_pretrained('t5-small')\r\nmodel.greedy_search(**tokenizer(\"I love HuggingFace\", return_tensors='pt'))\r\n```\r\nI get the following error with the code above:\r\n```\r\n  File \"/home/joaolages/.venv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py\", line 930, in forward\r\n    raise ValueError(f\"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds\")\r\nValueError: You have to specify either input_ids or inputs_embeds\r\n```\r\nI even tried calling `greedy_search` as suggested in [here](https://discuss.huggingface.co/t/question-about-greedy-search/5749/4?u=skinish), but this creates different outputs compared to calling `model.generate` with `num_beams=1`, which it shouldn't, right?",
"@JoaoLages, you need to also add `encoder_outputs` to `generate` when using it on encoder-decoder models such as T5. \r\nThis should work:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nimport torch\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('t5-small')\r\ntokenizer = AutoTokenizer.from_pretrained('t5-small')\r\n\r\ninput_ids = tokenizer(\"Translate English to German: Today is a nice day.\", return_tensors=\"pt\").input_ids\r\nencoder_outputs = model.encoder(input_ids)\r\n\r\ndecoder_input_ids = torch.ones_like(input_ids)[:, :1] * model.config.decoder_start_token_id\r\nmodel_kwargs = {\"encoder_outputs\": encoder_outputs}\r\n\r\nsequences = model.greedy_search(decoder_input_ids, **model_kwargs)\r\n\r\nprint(\"Output:\", tokenizer.batch_decode(sequences))\r\n# => prints `['<pad> Heute ist ein schöner Tag.</s>']\r\n```\r\n\r\nI do very much admit though that this is too complicated and it also took me a bit. @JoaoLages think we need to improve our docs here no?",
"Thanks! \r\n\r\n>I do very much admit though that this is too complicated and it also took me a bit. @JoaoLages think we need to improve our docs here no?\r\n\r\nI think it would be simpler to change `T5ForConditionalGeneration.greedy_search` to have this code inside it, so that we could simply call `model.greedy_search(input_ids)` ",
"Sorry also meant to ping @gante here",
"@patrickvonplaten Trying to understand the problem -- am I right in saying that we want to use the generation methods directly for backpropagation purposes (because `generate()` won't work there), and thus we need to document their proper use (because `generate()` does a lot of input preparation)?",
"Good point! \r\n\r\nI think my idea back when we added the sub-methods was to push the community more to use those directly instead of the more \"magic\" `.generate()` function. The reason being because it's harder and harder to cover every use case in `generate()` where as the sub methods are very \"bare-bone\" without any magic which means that if one knows how to use them they can more or less cover every use case. \r\nNow, that failed a bit I think because 99.9% people just use `generate(...)`, probably because of how difficult it is to understand and use the sub methods directly (as shown here: https://github.com/huggingface/transformers/issues/3720#issuecomment-1235775528 <- that's too difficult to understand/know). \r\n\r\nSo just pinged you here to be aware of this and was wondering whether it could make sense to think about providing better guides for the sub-method, maybe even changing the submethods or continue to not spend much time on them. Don't think it's an urgent thing to think about though!",
"@patrickvonplaten @gante \r\nAt least [these docs](https://github.com/huggingface/transformers/blob/6678350c01629b848aa9c41e169da5d6b8d9e7e9/src/transformers/generation_utils.py#L1652) should be updated with the code that @patrickvonplaten shared in [here](https://github.com/huggingface/transformers/issues/3720#issuecomment-1235775528)",
"Just a heads up that I think some of these methods (if you want a continuous gradient) might have to use the softmax trick: https://datascience.stackexchange.com/questions/58376/gumbel-softmax-trick-vs-softmax-with-temperature to get a differentiable final next token. At least when I checked this out a while back that seemed to be the case but ¯\\_(ツ)_/¯ \r\n",
"Using the approach above with `greedy_search` and a T5 model, I'm still not seeing a `grad_fn` associated with the output logits. Was anyone able to get this working with a T5 architecture?",
"> Using the approach above with `greedy_search` and a T5 model, I'm still not seeing a `grad_fn` associated with the output logits. Was anyone able to get this working with a T5 architecture?\r\n\r\nIn order to get the gradient per step, you need to do the greedy decoding on your own. Try using `model.forward` instead to get the gradient and the next token, then you need to concatenate that generated token with the `decoder_input_ids` and repeat the process.\r\n\r\nIf you want to test this fast, you can use the [ecco package](https://github.com/jalammar/ecco) that I've helped build. It has logic for doing this gradient calculation for any kind of sampling approach (greedy/beam/sample/etc) and for models like T5 or GPT. It is not very optimized in terms of inference times though, I must warn you.",
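 Using the approach">
A bare-bones version of that manual loop might look like this (editor's sketch for a seq2seq model such as T5; no cache is used, so each step recomputes the decoder, which matches the "not very optimized" caveat above):

```python
import torch

def greedy_decode_with_grads(model, input_ids, max_len=20):
    decoder_input_ids = torch.full(
        (input_ids.size(0), 1),
        model.config.decoder_start_token_id,
        dtype=torch.long,
        device=input_ids.device,
    )
    step_logits = []
    for _ in range(max_len):
        outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
        next_logits = outputs.logits[:, -1, :]  # keeps its grad_fn
        step_logits.append(next_logits)
        next_token = next_logits.argmax(dim=-1, keepdim=True)  # greedy pick
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
    return decoder_input_ids, torch.stack(step_logits, dim=1)
```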
"@JoaoLages that is a really helpful starting point, thank you! I'm not sure I see a beam search sampling process in the code (but perhaps I'm looking in the wrong place). I do see a TODO in `sample_output_token` to add beam search in the future.",
"There isn't beam search yeah. What we actually do is that we use the normal `model.generate` method to use beam search, and then we feed the generated tokens through the model to calculate their gradient. So we actually do the generation step 2 times, but in the second we capture the gradients. It's slow, but it could be optimized if we did our custom beam search."
] | 1,586 | 1,702 | 1,596 | NONE | null | # ❓ Questions & Help
Is there any way to do this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3720/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3720/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3719/comments | https://api.github.com/repos/huggingface/transformers/issues/3719/events | https://github.com/huggingface/transformers/issues/3719 | 597,209,891 | MDU6SXNzdWU1OTcyMDk4OTE= | 3,719 | Unable to load german BERT model | {
"login": "dakshvar22",
"id": 8708249,
"node_id": "MDQ6VXNlcjg3MDgyNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8708249?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakshvar22",
"html_url": "https://github.com/dakshvar22",
"followers_url": "https://api.github.com/users/dakshvar22/followers",
"following_url": "https://api.github.com/users/dakshvar22/following{/other_user}",
"gists_url": "https://api.github.com/users/dakshvar22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dakshvar22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dakshvar22/subscriptions",
"organizations_url": "https://api.github.com/users/dakshvar22/orgs",
"repos_url": "https://api.github.com/users/dakshvar22/repos",
"events_url": "https://api.github.com/users/dakshvar22/events{/privacy}",
"received_events_url": "https://api.github.com/users/dakshvar22/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"That particular model doesn't have a TF version: https://huggingface.co/bert-base-german-dbmdz-cased#list-files\r\n\r\nHowever, you should be able to convert the PyTorch version to TF, using the `from_pt=True` flag.",
"Thanks for clarifying that @julien-c . Is it possible to add this information(regarding availability of TF and pytorch models) somewhere on this [page](https://huggingface.co/transformers/pretrained_models.html) or maybe a dedicated table for it. It's quite useful info for frameworks which depend on Transformers. ",
"I'm working on converting our DBMDZ models to TF 😅",
"The same issue is true for the `uncased` version. Is there a way to force to HuggingFace to download the Torch version instead? ",
"Yes, as I said: `from_pt=True`",
"@hotzenklotz You can now use the model under our DBMDZ namespace: `dbmdz/bert-base-german-cased`.\r\n\r\nI've uploaded the TF-compatible model and it can be used with:\r\n\r\n```bash\r\nfrom transformers import *\r\nmodel = TFBertModel.from_pretrained('dbmdz/bert-base-german-cased')\r\n```\r\n\r\nPlease let me know if it's working for you!",
"@stefan-it Thank you so much. Can we expect to see a TF version of the `uncased` model as well? (And what about Roberta?)",
"`dbmdz/bert-base-german-uncased` has also a TF-compatible model now :)\r\n\r\nGerman RoBERTa is currently not planned on our side (unless there's a TPU-supported pre-training script out there) 😅",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): German
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import *
model = TFBertModel.from_pretrained('bert-base-german-dbmdz-cased')
```
I get the following error trace -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/daksh/miniconda3/envs/1.8/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 351, in from_pretrained
assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
File "/Users/daksh/miniconda3/envs/1.8/lib/python3.6/genericpath.py", line 30, in isfile
st = os.stat(path)
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType
```
## Expected behavior
Model should be loaded
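As pointed out in the comments above, the PyTorch checkpoint can be converted on the fly until an official TF checkpoint is published (sketch of the suggested workaround):

```python
from transformers import TFBertModel

# Loads the PyTorch weights and converts them to TensorFlow in memory.
model = TFBertModel.from_pretrained("bert-base-german-dbmdz-cased", from_pt=True)
```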
## Environment info
- `transformers` version: 2.4.1
- Platform: Mac OS
- Python version: 3.6.5
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3718/comments | https://api.github.com/repos/huggingface/transformers/issues/3718/events | https://github.com/huggingface/transformers/issues/3718 | 597,188,002 | MDU6SXNzdWU1OTcxODgwMDI= | 3,718 | loading from tf_ckp and this showed up: AttributeError: 'BertCrf' object has no attribute 'bias' . | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586 | 1,586 | 1,586 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3718/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/3717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3717/comments | https://api.github.com/repos/huggingface/transformers/issues/3717/events | https://github.com/huggingface/transformers/issues/3717 | 597,182,829 | MDU6SXNzdWU1OTcxODI4Mjk= | 3,717 | how to use transformers with gpu | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is a question more suitable to Stack Overflow or another forum"
] | 1,586 | 1,586 | 1,586 | NONE | null | I want to load a model with gpu in transformers, but it seem like the model always load in cpu
my os deepin 15.11 python 3.7.5 pytorch-gpu 1.4 transformers 2.8 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3717/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3716/comments | https://api.github.com/repos/huggingface/transformers/issues/3716/events | https://github.com/huggingface/transformers/pull/3716 | 597,176,958 | MDExOlB1bGxSZXF1ZXN0NDAxMzM2Mjky | 3,716 | Shift labels internally within TransfoXLLMHeadModel when called with labels | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would also like to return a (1,)-sized tensor for the loss when called with labels as that's easier for the user, what the models do, and what the old documentation said TransfoXLLMHeadModel did."
] | 1,586 | 1,586 | 1,586 | CONTRIBUTOR | null | Fixes #3711. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3716",
"html_url": "https://github.com/huggingface/transformers/pull/3716",
"diff_url": "https://github.com/huggingface/transformers/pull/3716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3716.patch",
"merged_at": 1586794283000
} |
https://api.github.com/repos/huggingface/transformers/issues/3715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3715/comments | https://api.github.com/repos/huggingface/transformers/issues/3715/events | https://github.com/huggingface/transformers/issues/3715 | 597,152,006 | MDU6SXNzdWU1OTcxNTIwMDY= | 3,715 | How can i conditional fine-tuning with GPT2? | {
"login": "toriving",
"id": 42434734,
"node_id": "MDQ6VXNlcjQyNDM0NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/42434734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/toriving",
"html_url": "https://github.com/toriving",
"followers_url": "https://api.github.com/users/toriving/followers",
"following_url": "https://api.github.com/users/toriving/following{/other_user}",
"gists_url": "https://api.github.com/users/toriving/gists{/gist_id}",
"starred_url": "https://api.github.com/users/toriving/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/toriving/subscriptions",
"organizations_url": "https://api.github.com/users/toriving/orgs",
"repos_url": "https://api.github.com/users/toriving/repos",
"events_url": "https://api.github.com/users/toriving/events{/privacy}",
"received_events_url": "https://api.github.com/users/toriving/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"To me, this sound more like a case where encoder-decoder models like `T5` or `Bart` should be fine-tuned. The encoder would encode the \"context\" and the decoder would be teacher-forced on the sentence.",
 To me, this">
"> To me, this sounds more like a case where encoder-decoder models like `T5` or `Bart` should be fine-tuned. The encoder would encode the \"context\" and the decoder would be teacher-forced on the sentence.\r\n\r\nThanks very much :)",
"Perhaps, Is there such logic applied to training code now?",
"@toriving I've successfully done \"conditional\" fine-tuning by adding a new token that indicates which portion of the sequence refers to the \"context\", similar to the [SEP] token used in the multi sequence version of **BERT**.\r\n\r\nE.g. Here's an [example](https://github.com/enzoampil/tito-joker/blob/master/src/utils/process_jokes.py) of how I apply this to prepare a dataset for training GPT2 to generate answers to riddle jokes:\r\n\r\n```\r\n<soq> Why did the chicken cross the road? <eoq> To go to the other side <|endoftext|>\r\n```\r\n\r\nThe effect is the answer (after `<eoq>`), is conditional on the question that precedes it.",
"@enzoampil When learning with such data, is \"condition\" also used in the loss function?\r\nI mean, I am wondering if \"Condition\" is also learning with a language model.",
"Yes if you specify it like above it should",
"Okay. Thanks",
 To me, this">
"> To me, this sounds more like a case where encoder-decoder models like `T5` or `Bart` should be fine-tuned. The encoder would encode the \"context\" and the decoder would be teacher-forced on the sentence.\r\n\r\nI would like to ask whether you think that using the encoder-decoder structure with GPT-2 wrapped as both encoder and decoder would give reasonable results, or whether wrapping GPT-2 as the encoder is not a good idea (maybe use BERT as the encoder instead?).",
"currently only bert2bert is supported with the EncoderDecoder structure.",
 @toriving I've">
"> @toriving I've successfully done \"conditional\" fine-tuning by adding a new token that indicates which portion of the sequence refers to the \"context\", similar to the [SEP] token used in the multi-sequence version of **BERT**.\r\n> \r\n> E.g. here's an [example](https://github.com/enzoampil/tito-joker/blob/master/src/utils/process_jokes.py) of how I apply this to prepare a dataset for training GPT2 to generate answers to riddle jokes:\r\n> \r\n> ```\r\n> <soq> Why did the chicken cross the road? <eoq> To go to the other side <|endoftext|>\r\n> ```\r\n> \r\n> The effect is that the answer (after `<eoq>`) is conditional on the question that precedes it.\r\n\r\nI would like to ask whether you masked the input part of the labels in the forward function. What I mean is: I assume you pass labels=input_ids to the forward function. Did you set only the padding tokens as masked (value -100), or did you mask the input tokens too? Since we are trying to perform conditional generation, I think only the reply should count towards the loss.\r\n"
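 @toriving I've">
The masking being discussed usually looks like this (editor's sketch; `sep_id` is the id of the separator token, and `-100` is the index the loss ignores):

```python
labels = input_ids.clone()
# Position of the first separator token in each row.
sep_positions = (input_ids == sep_id).int().argmax(dim=-1)
for i, sep_pos in enumerate(sep_positions):
    labels[i, : sep_pos + 1] = -100  # context tokens do not contribute to the loss

outputs = model(input_ids=input_ids, labels=labels)
loss = outputs[0]  # only the reply tokens are scored
```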
] | 1,586 | 1,601 | 1,591 | NONE | null | I can use run_generation.py to create a statement by adding context.
But is there a way to do fine-tuning based on condition (context)?
For example, when data of the form "context [SEP] sentence" is input, the "context" would only be used to obtain the hidden states, without being learned.
In addition, the "sentence" would be learned with the language modeling objective. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3715/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3714/comments | https://api.github.com/repos/huggingface/transformers/issues/3714/events | https://github.com/huggingface/transformers/issues/3714 | 597,150,867 | MDU6SXNzdWU1OTcxNTA4Njc= | 3,714 | Zero shot multilingual BERT | {
"login": "paulthemagno",
"id": 38130299,
"node_id": "MDQ6VXNlcjM4MTMwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38130299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulthemagno",
"html_url": "https://github.com/paulthemagno",
"followers_url": "https://api.github.com/users/paulthemagno/followers",
"following_url": "https://api.github.com/users/paulthemagno/following{/other_user}",
"gists_url": "https://api.github.com/users/paulthemagno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulthemagno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulthemagno/subscriptions",
"organizations_url": "https://api.github.com/users/paulthemagno/orgs",
"repos_url": "https://api.github.com/users/paulthemagno/repos",
"events_url": "https://api.github.com/users/paulthemagno/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulthemagno/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I think what you did there is probably best described as transfer learning.\r\n\r\n> Or maybe the 0 shot would be in the case in which I finetuned with an Italian dataset and then I evaluated on a corpus of another language?\r\n\r\nYeah, this is closer to the way the term \"zero-shot\" is being used in the field right now. Here's a [recent example](https://arxiv.org/abs/1812.10464). \r\n\r\nI'll also note that the way I've seen \"zero shot learning\" used traditionally was pretty narrow: it meant training a classifier on one set of labels and then evaluating on a different set of labels on in-domain data. Recently, especially in NLP, it's often been used more broadly to mean \"do a task that the model wasn't explicitly trained on without additional fine tuning\", e.g. in the GPT-2 paper.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,586 | 1,592 | 1,592 | NONE | null | # ❓ Questions & Help
I have a doubt about the usage of multilingual BERT.
I did a domain adaptation on the language model **[`BERT-Base, Multilingual Cased`](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)** with a dataset of a slightly different kind of text. This dataset is unbalanced on English language but contains also other languages as Italian.
Then I did fine-tuning on an Italian dataset for NER.
Does this kind of training have a name, like **zero-shot** classification (because I adapted on a multilingual dataset unbalanced towards English and then fine-tuned on Italian)?
Or would zero-shot rather describe the case in which I fine-tuned on an Italian dataset and then evaluated on a corpus in another language?
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3714/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3714/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3713/comments | https://api.github.com/repos/huggingface/transformers/issues/3713/events | https://github.com/huggingface/transformers/issues/3713 | 597,120,321 | MDU6SXNzdWU1OTcxMjAzMjE= | 3,713 | cannot determine what will be the cardinality of the output after applying glue_convert_examples_to_features [TF 2.2.0rcx] | {
"login": "tarrade",
"id": 12021701,
"node_id": "MDQ6VXNlcjEyMDIxNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/12021701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tarrade",
"html_url": "https://github.com/tarrade",
"followers_url": "https://api.github.com/users/tarrade/followers",
"following_url": "https://api.github.com/users/tarrade/following{/other_user}",
"gists_url": "https://api.github.com/users/tarrade/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tarrade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tarrade/subscriptions",
"organizations_url": "https://api.github.com/users/tarrade/orgs",
"repos_url": "https://api.github.com/users/tarrade/repos",
"events_url": "https://api.github.com/users/tarrade/events{/privacy}",
"received_events_url": "https://api.github.com/users/tarrade/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue might be of interest to @jplu ",
"Hey @tarrade \r\n\r\nUsually it is not advised to use the cardinality function for several reasons, the biggest two are: 1) it is still experimental, 2) cardinality works only with TF datasets created with `from_tensors` or `from_tensor_slices` which is not the case in the `glue_convert_examples_to_features` function.\r\n\r\nIf you need to know the size of your dataset from a TF dataset, there are two simple solutions:\r\n```\r\n# \"dataset\" is the variable that represents your tf.data.dataset\r\n# works only from TF 2.1 because of the as_numpy_iterator() method\r\nlen(list(dataset.as_numpy_iterator())\r\n```\r\nOr\r\n```\r\n# works for all the TF versions\r\ndataset.reduce(0, lambda x, _: x + 1)\r\n```",
"Hi @jplu,\r\n\r\nyes, \r\n\r\nthis is experimental but so much better that looping over the full dataset just to get the size while such info is almost here for free.\r\n\r\nand it doesn't work out of the box with tf.data.Dataset.from_generator\r\n\r\nBut, tit is already use in the transformers' code: \r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py (line 87)\r\n`\r\nlen_examples = tf.data.experimental.cardinality(examples)`\r\n\r\nsomething like that in the right place is the code should work:\r\n`tf.data.experimental.assert_cardinality(len_examples)`\r\n\r\n",
"It works in glue.py because `examples` is the direct output of the `tfds.load()` function which returns compatible TF Datasets for `cardinality`.\r\n\r\nGetting the size of a TF Dataset if very complicated because of the way it is structured. It is a known problem, many issues are opened in the TF repo because of that. And that's also why the `cardinality| method is still in the experimental module since a very long time now, even much before TF 2.0 was released.\r\n\r\nIf you are not ok to use one of the two solution I proposed, explain me your use case? Are you using Glue from the `tensorflow_datasets` package? If yes, you have several facilities proposed by this package (https://www.tensorflow.org/datasets/overview)",
"Hi @jplu,\r\n\r\nI understand that using experimental feature of Tensorflow may introduce some instability. This is a faire point and `tf.data.experimental.assert_cardinality` is only available with TF 2.2.0.\r\n\r\nMy main points and usecases are:\r\n1- Computing the number of element in a sample is very time consuming since you need to loop over all elements.\r\n2- Normally even if data are coming from `tfds.load()` you need some cleaning, preprocessing steps or maybe you want to resample you train/test/valid sample. In such case the total number from the metadata (info) will not help since it was changed. This is a normal process is any ML project.\r\n3- In the version of the code was looking at, the length was computed anyway (this doesn't seems to be the case any more with the big clean up from 3 days ago). This was my main argumentation: you compute for any case the total number of even so why not simply assert the cardinality so any sample produced by `glue_convert_examples_to_features` will have the total number of event it contain and for free (no computation required).\r\n4- Now `tf.data.experimental.assert_cardinality(len_examples)` is experimental, require TF 2.2.0 and it the head of the code, the length doesn't seems to be computed any more.\r\n5- One issue is that I soon as I will store the data preprocessed with `glue_convert_examples_to_features` as TFRecord files, then the cardinality will be lost.\r\n\r\nConclusion: I will take care of doing the assert of the cardinality in my code and I hope that when TF 2.2.0 will be the main release and cardinality more stable we could rediscuss this topic.",
"Good points :)\r\n\r\n1. I fully agree with you, it is really painful.\r\n2. If you need to change the content of each dataset (doesn't matter is preprocessing or not) such as when doing cross-validation, indeed you have to recompute the size.\r\n3. There is currently a project to fully review and rework the data processing part of the lib, so it should be way more convenient to use once done. Until there, indeed, it is a bit of a mess.\r\n4. I was not aware of this new `tf.data.experimental.assert_cardinality(len_examples)` as I did not fully dive in TF 2.2 yet, but looks very interesting, thanks for the hint :)\r\n5. Indeed, the size cannot be computed from TFRecords, which is a real problem IMHO. I hope in future releases it will be much easier to get the size of a dataset ^^\r\n\r\nI will be happy to rediscuss about that, very sorry to do no have, sorry that I could not find a suitable solution to your issue."
] | 1,586 | 1,586 | 1,586 | NONE | null | # 🐛 Bug
## Information
With Tensorflow 2.2.0 (2.2.0rc2) we should be able to see the number of entries in the data without looking over them and using tf.data.experimental.cardinality.
One issue that I found is that after applying `glue_convert_examples_to_features` tf.data.experimental.cardinality is not able to find the total number of entry. I thought first that it was bug in this TF 2.2.0 release candidate.https://github.com/tensorflow/tensorflow/issues/37998.
When using data from tensorflow dataset tf.data.experimental.cardinality is returning the number of event
```
print(data['train'])
print(tf.data.experimental.cardinality(data['train']))
```
```
<DatasetV1Adapter shapes: {idx: (), label: (), sentence: ()}, types: {idx: tf.int32, label: tf.int64, sentence: tf.string}>
tf.Tensor(67349, shape=(), dtype=int64)
```
Now when I am using Huggingface transformer that modify the structure of the data:
```
train_dataset = glue_convert_examples_to_features(data['train'],
tokenizer,
max_length=128,
task='sst-2')
print(tf.data.experimental.cardinality(train_dataset))
```
```
<FlatMapDataset shapes: ({input_ids: (None,), attention_mask: (None,), token_type_ids: (None,)}, ()), types: ({input_ids: tf.int32, attention_mask: tf.int32, token_type_ids: tf.int32}, tf.int64)>
tf.Tensor(-2, shape=(), dtype=int64)
```
When the input pipeline contains a flat_map, it is generally not possible to statically determine the cardinality of the output from the cardinality of the input. However, I don't see any flat_map in this function, and I am trying to identify which part of the code is responsible. I am not 100% sure this is a transformers issue.
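A possible workaround, per the discussion above (sketch; `assert_cardinality` is experimental and the variable names follow the reproduction snippet below):

```python
import tensorflow as tf

n = int(tf.data.experimental.cardinality(data["train"]))  # known before conversion
train_dataset = glue_convert_examples_to_features(
    data["train"], tokenizer, max_length=128, task="sst-2"
)
train_dataset = train_dataset.apply(tf.data.experimental.assert_cardinality(n))
print(tf.data.experimental.cardinality(train_dataset))  # now reports the real size
```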
## To reproduce
Steps to reproduce the behavior:
```
import tensorflow as tf
import tensorflow_datasets
from transformers import (
BertConfig,
BertTokenizer,
TFBertModel,
TFBertForSequenceClassification,
glue_convert_examples_to_features,
glue_processors
)
data, info = tensorflow_datasets.load(name='glue/sst2',
data_dir='/tmp/',
with_info=True)
pretrained_weights = 'bert-base-multilingual-uncased'
# Load tokenizer
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
# recap of input dataset
print(data['train'])
print(tf.data.experimental.cardinality(data['train']))
# Prepare data for BERT
train_dataset = glue_convert_examples_to_features(data['train'],
tokenizer,
max_length=128,
task='sst-2')
# recap of pre processing dataset
print(train_dataset)
print(tf.data.experimental.cardinality(train_dataset))
```
## Expected behavior
I am expecting tf.data.experimental.cardinality to still be able to report the total number of entries after transforming the data with `glue_convert_examples_to_features`
## Environment info
- `transformers` version: 2.8.0
- Platform: MacOS 0.14.6
- Python version: 3.7.5
- Tensorflow version (CPU): 2.2.0rc2 (v2.2.0-rc1-34-ge6e5d6df2a 2.2.0-rc2)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3713/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3713/timeline | completed | null | null |