column              dtype           values
url                 stringlengths   62–66
repository_url      stringclasses   1 value
labels_url          stringlengths   76–80
comments_url        stringlengths   71–75
events_url          stringlengths   69–73
html_url            stringlengths   50–56
id                  int64           377M–2.15B
node_id             stringlengths   18–32
number              int64           1–29.2k
title               stringlengths   1–487
user                dict
labels              list
state               stringclasses   2 values
locked              bool            2 classes
assignee            dict
assignees           list
comments            sequence
created_at          int64           1.54k–1.71k
updated_at          int64           1.54k–1.71k
closed_at           int64           1.54k–1.71k
author_association  stringclasses   4 values
active_lock_reason  stringclasses   2 values
body                stringlengths   0–234k
reactions           dict
timeline_url        stringlengths   71–75
state_reason        stringclasses   3 values
draft               bool            2 classes
pull_request        dict
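Each row that follows is one GitHub issue or pull-request record from huggingface/transformers, with its fields laid out in the column order above. Below is a minimal sketch of how such records could be inspected, assuming they are exported as a JSON Lines file at a hypothetical path `issues.jsonl`; the path and the printed fields are illustrative, and only the field names come from the schema above.

```python
import json

# Hypothetical export path: one JSON object per line, with the fields listed above.
path = "issues.jsonl"

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "pull_request" is a dict for pull requests and null (None) for plain issues.
        kind = "PR" if record.get("pull_request") else "issue"
        # "state" is one of the two classes noted in the schema (e.g. open / closed).
        print(f'#{record["number"]} [{kind}, {record["state"]}] {record["title"]}')
```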
https://api.github.com/repos/huggingface/transformers/issues/8421
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8421/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8421/comments
https://api.github.com/repos/huggingface/transformers/issues/8421/events
https://github.com/huggingface/transformers/pull/8421
739,209,062
MDExOlB1bGxSZXF1ZXN0NTE3OTEyMjM3
8,421
[docs] improve bart/marian/mBART/pegasus docs
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
+ Give example of bart mask filling
+ Link to training scripts where applicable
+ Clarify Marian naming scheme a bit.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8421/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8421", "html_url": "https://github.com/huggingface/transformers/pull/8421", "diff_url": "https://github.com/huggingface/transformers/pull/8421.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8421.patch", "merged_at": 1605021515000 }
https://api.github.com/repos/huggingface/transformers/issues/8420
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8420/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8420/comments
https://api.github.com/repos/huggingface/transformers/issues/8420/events
https://github.com/huggingface/transformers/pull/8420
739,170,392
MDExOlB1bGxSZXF1ZXN0NTE3ODgwNDY5
8,420
Deprecate old data/metrics functions
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,605
1,604
COLLABORATOR
null
# What does this PR do?

This PR deprecates the old data/metrics utils we used now that we have some examples of scripts leveraging the Datasets library to point at. The idea is to eventually remove those from the library but keep them somewhere in the examples folder as utilities, so the old scripts can still be run.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8420/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8420/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8420", "html_url": "https://github.com/huggingface/transformers/pull/8420", "diff_url": "https://github.com/huggingface/transformers/pull/8420.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8420.patch", "merged_at": 1604941810000 }
https://api.github.com/repos/huggingface/transformers/issues/8419
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8419/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8419/comments
https://api.github.com/repos/huggingface/transformers/issues/8419/events
https://github.com/huggingface/transformers/pull/8419
739,148,961
MDExOlB1bGxSZXF1ZXN0NTE3ODYzMDA2
8,419
Bump tokenizers
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do?

Bump the version of tokenizers to the last release. This fixes some bugs in `XLNetTokenizerFast`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8419/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8419", "html_url": "https://github.com/huggingface/transformers/pull/8419", "diff_url": "https://github.com/huggingface/transformers/pull/8419.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8419.patch", "merged_at": 1604939530000 }
https://api.github.com/repos/huggingface/transformers/issues/8418
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8418/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8418/comments
https://api.github.com/repos/huggingface/transformers/issues/8418/events
https://github.com/huggingface/transformers/pull/8418
739,070,865
MDExOlB1bGxSZXF1ZXN0NTE3Nzk5MjE5
8,418
[docs] remove sshleifer from issue-template :(
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you for the invitation, @sshleifer. \r\n\r\nMy feeling is that a person in charge of any domain should have commit rights to that domain. Since I don't have those I'd be happy to be delegated to. (with the exception of fsmt since I wrote it)" ]
1,604
1,604
1,604
CONTRIBUTOR
null
+ Removes sshleifer from issue-templates :(
+ For previous @sshleifer stuff, if it's in `src/` I put @patrickvonplaten, if it's `examples/seq2seq` I put @patil-suraj.
+ @stas00 if you want to take over any thing or feel comfortable being the point person for certain things, feel free to suggest.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8418/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8418", "html_url": "https://github.com/huggingface/transformers/pull/8418", "diff_url": "https://github.com/huggingface/transformers/pull/8418.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8418.patch", "merged_at": 1604944299000 }
https://api.github.com/repos/huggingface/transformers/issues/8417
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8417/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8417/comments
https://api.github.com/repos/huggingface/transformers/issues/8417/events
https://github.com/huggingface/transformers/pull/8417
739,016,819
MDExOlB1bGxSZXF1ZXN0NTE3NzU0MjI1
8,417
Changing XLNet default from not using memories to 512 context size following paper
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ha yeah I saw your message on Slack just afterwards, I'll just change the default :)", "It's changed !", "Merging now so that it's in v3.5.0!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
In #8317, we found out that calling `XLNetLMHeadModel.from_pretrained(mem_len=384)` still produced the FutureWarning that announces that the default configuration will change in 3.5.0, as the config first gets initialized with `mem_len=0` then `mem_len` gets changed. This PR moves the warning to the `forward` pass in the model to avoid this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8417/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8417/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8417", "html_url": "https://github.com/huggingface/transformers/pull/8417", "diff_url": "https://github.com/huggingface/transformers/pull/8417.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8417.patch", "merged_at": 1604972992000 }
https://api.github.com/repos/huggingface/transformers/issues/8416
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8416/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8416/comments
https://api.github.com/repos/huggingface/transformers/issues/8416/events
https://github.com/huggingface/transformers/issues/8416
739,001,437
MDU6SXNzdWU3MzkwMDE0Mzc=
8,416
Does MBartTokenizer remove the parameter decoder_input_ids?
{ "login": "wmathor", "id": 32392878, "node_id": "MDQ6VXNlcjMyMzkyODc4", "avatar_url": "https://avatars.githubusercontent.com/u/32392878?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wmathor", "html_url": "https://github.com/wmathor", "followers_url": "https://api.github.com/users/wmathor/followers", "following_url": "https://api.github.com/users/wmathor/following{/other_user}", "gists_url": "https://api.github.com/users/wmathor/gists{/gist_id}", "starred_url": "https://api.github.com/users/wmathor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wmathor/subscriptions", "organizations_url": "https://api.github.com/users/wmathor/orgs", "repos_url": "https://api.github.com/users/wmathor/repos", "events_url": "https://api.github.com/users/wmathor/events{/privacy}", "received_events_url": "https://api.github.com/users/wmathor/received_events", "type": "User", "site_admin": false }
[ { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "The docs are incorrect, sorry about that.\r\n\r\nTry\r\n```python\r\n from transformers import MBartForConditionalGeneration, MBartTokenizer\r\n model = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-en-ro\")\r\n tokenizer = MBartTokenizer.from_pretrained(\"facebook/mbart-large-en-ro\")\r\n article = \"UN Chief Says There Is No Military Solution in Syria\"\r\n batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], src_lang=\"en_XX\")\r\n translated_tokens = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id[\"ro_RO\"])\r\n translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]\r\n assert translation == \"Şeful ONU declară că nu există o soluţie militară în Siria\"\r\n```", "> The docs are incorrect, sorry about that.\r\n> \r\n> Try\r\n> \r\n> ```python\r\n> from transformers import MBartForConditionalGeneration, MBartTokenizer\r\n> model = MBartForConditionalGeneration.from_pretrained(\"facebook/mbart-large-en-ro\")\r\n> tokenizer = MBartTokenizer.from_pretrained(\"facebook/mbart-large-en-ro\")\r\n> article = \"UN Chief Says There Is No Military Solution in Syria\"\r\n> batch = tokenizer.prepare_seq2seq_batch(src_texts=[article], src_lang=\"en_XX\")\r\n> translated_tokens = model.generate(**batch, decoder_start_token_id=tokenizer.lang_code_to_id[\"ro_RO\"])\r\n> translation = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]\r\n> assert translation == \"Şeful ONU declară că nu există o soluţie militară în Siria\"\r\n> ```\r\n\r\nthank you for your reply, If I don't want to generate, I just want to train. How should I change it?\r\n\r\n```python\r\nexample_english_phrase = \"UN Chief Says There Is No Military Solution in Syria\"\r\nexpected_translation_romanian = \"Şeful ONU declară că nu există o soluţie militară în Siria\"\r\nbatch = tokenizer.prepare_seq2seq_batch(example_english_phrase, src_lang=\"en_XX\", tgt_lang=\"ro_RO\", tgt_texts=expected_translation_romanian)\r\ninput_ids = batch[\"input_ids\"]\r\ntarget_ids = batch[\"decoder_input_ids\"] # Error\r\ndecoder_input_ids = target_ids[:, :-1].contiguous()\r\nlabels = target_ids[:, 1:].clone()\r\nmodel(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels) #forward\r\n```", "See this https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L138\r\n\r\nthe `batch` argument to that fn is the same as your `batch` (the output of `prepare_seq2seq_batch`)" ]
1,604
1,605
1,605
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:3.4.0 - Platform:Google Colab - Python version:3.7 - PyTorch version (GPU?):1.7.0+cu101 - Tensorflow version (GPU?):2.x - Using GPU in script?: no - Using distributed or parallel set-up in script?:no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): mbart The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python example_english_phrase = "UN Chief Says There Is No Military Solution in Syria" expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" batch = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25').prepare_seq2seq_batch(example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian) input_ids = batch["input_ids"] target_ids = batch["decoder_input_ids"] ``` Steps to reproduce the behavior: ```python KeyError Traceback (most recent call last) <ipython-input-11-b3eedaf10c3e> in <module>() 3 batch = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro').prepare_seq2seq_batch(example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian) 4 input_ids = batch["input_ids"] ----> 5 target_ids = batch["decoder_input_ids"] /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in __getitem__(self, item) 232 """ 233 if isinstance(item, str): --> 234 return self.data[item] 235 elif self._encodings is not None: 236 return self._encodings[item] KeyError: 'decoder_input_ids' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8416/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8416/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8415
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8415/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8415/comments
https://api.github.com/repos/huggingface/transformers/issues/8415/events
https://github.com/huggingface/transformers/pull/8415
738,853,124
MDExOlB1bGxSZXF1ZXN0NTE3NjE3Mjk3
8,415
[Tests] Add Common Test for Training + Fix a couple of bugs
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds an aggressive test to check that all models that should be trainable can perform a backward pass of their loss output. In addition, a test for training with gradient checkpointing is added as well. The motivation comes from this error: https://github.com/huggingface/transformers/pull/7562#issuecomment-723887221 - the PR introduced broke gradient checkpointing without any test noticing it. To make the test applicable for all models, some `ModelTests` have to overwrite the `_prepare_for_class` function. In addition, some cleaning was done `ForPretraining` was renamed to `ForPreTraining`; an `AutoModelForNextSentencePredicition` was added, ... ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8415/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8415", "html_url": "https://github.com/huggingface/transformers/pull/8415", "diff_url": "https://github.com/huggingface/transformers/pull/8415.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8415.patch", "merged_at": 1604942682000 }
https://api.github.com/repos/huggingface/transformers/issues/8414
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8414/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8414/comments
https://api.github.com/repos/huggingface/transformers/issues/8414/events
https://github.com/huggingface/transformers/issues/8414
738,704,164
MDU6SXNzdWU3Mzg3MDQxNjQ=
8,414
[seq2seq] translation tpu example doesnt work
{ "login": "Stupack", "id": 68128236, "node_id": "MDQ6VXNlcjY4MTI4MjM2", "avatar_url": "https://avatars.githubusercontent.com/u/68128236?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Stupack", "html_url": "https://github.com/Stupack", "followers_url": "https://api.github.com/users/Stupack/followers", "following_url": "https://api.github.com/users/Stupack/following{/other_user}", "gists_url": "https://api.github.com/users/Stupack/gists{/gist_id}", "starred_url": "https://api.github.com/users/Stupack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Stupack/subscriptions", "organizations_url": "https://api.github.com/users/Stupack/orgs", "repos_url": "https://api.github.com/users/Stupack/repos", "events_url": "https://api.github.com/users/Stupack/events{/privacy}", "received_events_url": "https://api.github.com/users/Stupack/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hi,\r\nI'm experiencing the same issue while fine-tuning on TPU for marianMT. Code works well on GPU. On TPU, it throws similar exception: \r\n```python\r\nRuntimeError: Cannot access data pointer of Tensor that doesn't have storage\r\n```\r\nThanks!", "Hi @chris-tng ,\r\n\r\ncould you post your env info, and the script/command that you are running ?", "Sure, I'm using \r\n```shell\r\ntransformers 4.0.1 \r\ntorch 1.7.0+cu101 \r\ntorch-xla 1.7 \r\ntorchsummary 1.5.1 \r\ntorchtext 0.3.1 \r\ntorchvision 0.8.1+cu101\r\n```\r\n- command\r\n\r\n```shell\r\n!python xla_spawn.py --num_cores 8 finetune_trainer.py \\\r\n --tokenizer_name \"Helsinki-NLP/opus-mt-es-en\" \\\r\n --model_name_or_path \"Helsinki-NLP/opus-mt-es-en\" \\\r\n --data_dir \"/content/data\" \\\r\n --output_dir \"/content/marian_es_en\" --overwrite_output_dir \\\r\n --learning_rate=3e-4 \\\r\n --warmup_steps 500 \\\r\n --per_device_train_batch_size=256 --per_device_eval_batch_size=256 \\\r\n --freeze_encoder --freeze_embeds \\\r\n --num_train_epochs=6 \\\r\n --save_steps 3000 --eval_steps 3000 \\\r\n --logging_first_step --logging_steps 200 \\\r\n --max_source_length 128 \\\r\n --max_target_length 128 --val_max_target_length 128 --test_max_target_length 128 \\\r\n --do_train --do_eval --do_predict \\\r\n --n_val 5000 --n_test 10000 --evaluation_strategy steps \\\r\n --prediction_loss_only \\\r\n --task translation --label_smoothing 0.1 \\\r\n \"$@\"\r\n```\r\n\r\nHere is the error\r\n```\r\n[INFO|trainer.py:666] 2020-12-11 05:56:11,550 >> Total train batch size (w. parallel, distributed & accumulation) = 2048\r\n[INFO|trainer.py:667] 2020-12-11 05:56:11,550 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:668] 2020-12-11 05:56:11,550 >> Total optimization steps = 1830\r\n 0% 0/1830 [00:00<?, ?it/s]Exception in device=TPU:0: Cannot access data pointer of Tensor that doesn't have storage\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 330, in _mp_start_fn\r\n _start_fn(index, pf_cfg, fn, args)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 324, in _start_fn\r\n fn(gindex, *args)\r\n File \"/content/transformers/examples/seq2seq/finetune_trainer.py\", line 309, in _mp_fn\r\n main()\r\n File \"/content/transformers/examples/seq2seq/finetune_trainer.py\", line 258, in main\r\n model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 747, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/trainer.py\", line 1089, in training_step\r\n loss.backward()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/tensor.py\", line 221, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py\", line 132, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\nRuntimeError: Cannot access data pointer of Tensor that doesn't have storage\r\n 0% 0/1830 [03:34<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"xla_spawn.py\", line 72, in <module>\r\n main()\r\n File \"xla_spawn.py\", line 68, in main\r\n xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py\", line 395, in spawn\r\n 
start_method=start_method)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py\", line 157, in start_processes\r\n while not context.join():\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py\", line 112, in join\r\n (error_index, exitcode)\r\nException: process 0 terminated with exit code 17\r\n```\r\n", "Hi @patil-suraj , any idea why it happens? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
Hi im trying to run the `train_distil_marian_enro_tpu.sh` example in collab/kaggle tpus and for some reason it gives me the following output: @sshleifer ``` Exception in device=TPU:0: Cannot access data pointer of Tensor that doesn't have storage Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/content/transformers/examples/seq2seq/finetune_trainer.py", line 300, in _mp_fn main() File "/content/transformers/examples/seq2seq/finetune_trainer.py", line 249, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 776, in train tr_loss += self.training_step(model, inputs) File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1128, in training_step loss.backward() File "/usr/local/lib/python3.6/dist-packages/torch/tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 132, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: Cannot access data pointer of Tensor that doesn't have storage 0% 0/7158 [00:44<?, ?it/s] Traceback (most recent call last): File "xla_spawn.py", line 72, in <module> main() File "xla_spawn.py", line 68, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn start_method=start_method) File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 157, in start_processes while not context.join(): File "/usr/local/lib/python3.6/dist-packages/torch/multiprocessing/spawn.py", line 112, in join (error_index, exitcode) Exception: process 0 terminated with exit code ``` Related to this issue #https://github.com/pytorch/xla/issues/929 Not sure how to solve it. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8414/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8413
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8413/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8413/comments
https://api.github.com/repos/huggingface/transformers/issues/8413/events
https://github.com/huggingface/transformers/issues/8413
738,565,293
MDU6SXNzdWU3Mzg1NjUyOTM=
8,413
continuing fine-tuning from the last checkpoint
{ "login": "naturecreator", "id": 39854185, "node_id": "MDQ6VXNlcjM5ODU0MTg1", "avatar_url": "https://avatars.githubusercontent.com/u/39854185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/naturecreator", "html_url": "https://github.com/naturecreator", "followers_url": "https://api.github.com/users/naturecreator/followers", "following_url": "https://api.github.com/users/naturecreator/following{/other_user}", "gists_url": "https://api.github.com/users/naturecreator/gists{/gist_id}", "starred_url": "https://api.github.com/users/naturecreator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/naturecreator/subscriptions", "organizations_url": "https://api.github.com/users/naturecreator/orgs", "repos_url": "https://api.github.com/users/naturecreator/repos", "events_url": "https://api.github.com/users/naturecreator/events{/privacy}", "received_events_url": "https://api.github.com/users/naturecreator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It looks like the config was not saved in the checkpoint folder you are passing. Double-check its contents, but apparently, the model was not properly saved inside it.", "> It looks like the config was not saved in the checkpoint folder you are passing. Double-check its contents, but apparently, the model was not properly saved inside it.\r\n\r\nThanks for replying @sgugger \r\n\r\nI have the following files in my checkpoint folder:\r\n\r\n```\r\nconfig.json optimizer.pt pytorch_model.bin scheduler.pt trainer_state.json training_args.bin\r\n\r\n```\r\nand inside the \"config.json\" it looks like this:\r\n\r\n```\r\n{\r\n \"_name_or_path\": \"bert-base-cased\",\r\n \"architectures\": [\r\n \"BertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"gradient_checkpointing\": false,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"bert\",\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"pad_token_id\": 0,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 28996\r\n}\r\n\r\n```\r\n\r\nDo you have any idea where exactly I am going wrong?", "Oh, I think you just have a typo in your path:\r\n```\r\n/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results/result_dir/checkpoint_37000/\r\n```\r\nshould be\r\n```\r\n/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results/result_dir/checkpoint-37000/\r\n```\r\n(a dash instead of the underscore).", "Ahhhh my bad!\r\nThanks a lot @sgugger \r\nI had encountered one more problem (vocab.txt is missing):\r\n```\r\nTraceback (most recent call last):\r\n File \"run_language_modeling.py\", line 355, in <module>\r\n main()\r\n File \"run_language_modeling.py\", line 244, in main\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir)\r\n File \"/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/transformers/tokenization_auto.py\", line 336, in from_pretrained\r\n return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1649, in from_pretrained\r\n list(cls.vocab_files_names.values()),\r\nOSError: Model name '/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/result_dir/checkpoint-37000' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). 
We assumed '/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/result_dir/checkpoint-37000' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\r\n\r\n```\r\n\r\nAfter adding the parameter --tokenizer_name I could resolve the issue and now the fine-tuning resumes as expected\r\n\r\nBelow is the command:\r\n\r\n```\r\npython run_language_modeling.py --output_dir=/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results --model_type=bert --model_name_or_path=/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/result_dir/checkpoint-37000 --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm --per_gpu_train_batch_size=4 --tokenizer_name=bert-base-cased\r\n```" ]
1,604
1,604
1,604
NONE
null
Hello, While fine-tuning BERT on the custom data using "run_language_modeling.py" script, due to memory issue the fine-tuning stopped in the middle. However, I tried to resume the fine-tuning from the last checkpoint. But, I came across with the following error: ``` python run_language_modeling.py --output_dir=/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results --model_type=bert --model_name_or_path=/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results/result_dir/checkpoint_37000/ --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm --per_gpu_train_batch_size=4 /home/ai-students/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /home/ai-students/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /home/ai-students/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)]) /home/ai-students/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /home/ai-students/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /home/ai-students/.local/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) /home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)]) /home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
_np_qint16 = np.dtype([("qint16", np.int16, 1)]) /home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)]) /home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)]) /home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)]) 11/08/2020 22:40:18 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 2, distributed training: False, 16-bits training: False 11/08/2020 22:40:18 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, per_gpu_train_batch_size=4, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Nov08_22-40-18_aistudents-msi', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None) Traceback (most recent call last): File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/transformers/configuration_utils.py", line 387, in get_config_dict raise EnvironmentError OSError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "run_language_modeling.py", line 355, in <module> main() File "run_language_modeling.py", line 236, in main config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir) File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/transformers/configuration_auto.py", line 329, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "/home/ai-students/anaconda3/envs/env_nesara/lib/python3.6/site-packages/transformers/configuration_utils.py", line 396, in get_config_dict raise EnvironmentError(msg) OSError: Can't load config for '/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results/result_dir/checkpoint_37000/'. 
Make sure that: - '/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results/result_dir/checkpoint_37000/' is a correct model identifier listed on 'https://huggingface.co/models' - or '/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results/result_dir/checkpoint_37000/' is the correct path to a directory containing a config.json file ``` Command used to fine-tune from the last checkpoint is as follows: ``` python run_language_modeling.py --output_dir=/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results --model_type=bert --model_name_or_path=/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/new_results/result_dir/checkpoint_37000/ --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm --per_gpu_train_batch_size=4 ``` Here is the command used to fine-tune BERT earlier: ``` python run_language_modeling.py --output_dir=/media/ai-students/Data/Nesara/Bert_MLM_fine_tune/result_dir --model_type=bert --model_name_or_path=bert-base-cased --do_train --train_data_file=$TRAIN_FILE --do_eval --eval_data_file=$TEST_FILE --mlm --per_gpu_train_batch_size=4 ``` @sgugger Could anyone please let me know on how to resume fine-tuning from the last checkpoint? Thanks in advance :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8413/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8412
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8412/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8412/comments
https://api.github.com/repos/huggingface/transformers/issues/8412/events
https://github.com/huggingface/transformers/pull/8412
738,562,852
MDExOlB1bGxSZXF1ZXN0NTE3Mzc2OTI4
8,412
[s2s/distill] remove run_distiller.sh, fix xsum script
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8412/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8412", "html_url": "https://github.com/huggingface/transformers/pull/8412", "diff_url": "https://github.com/huggingface/transformers/pull/8412.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8412.patch", "merged_at": 1604872664000 }
https://api.github.com/repos/huggingface/transformers/issues/8411
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8411/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8411/comments
https://api.github.com/repos/huggingface/transformers/issues/8411/events
https://github.com/huggingface/transformers/issues/8411
738,545,183
MDU6SXNzdWU3Mzg1NDUxODM=
8,411
Tokenizer return nothing instead of unk for certain token?
{ "login": "zhouhanxie", "id": 47683426, "node_id": "MDQ6VXNlcjQ3NjgzNDI2", "avatar_url": "https://avatars.githubusercontent.com/u/47683426?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhouhanxie", "html_url": "https://github.com/zhouhanxie", "followers_url": "https://api.github.com/users/zhouhanxie/followers", "following_url": "https://api.github.com/users/zhouhanxie/following{/other_user}", "gists_url": "https://api.github.com/users/zhouhanxie/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhouhanxie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhouhanxie/subscriptions", "organizations_url": "https://api.github.com/users/zhouhanxie/orgs", "repos_url": "https://api.github.com/users/zhouhanxie/repos", "events_url": "https://api.github.com/users/zhouhanxie/events{/privacy}", "received_events_url": "https://api.github.com/users/zhouhanxie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Similar error at [1662]https://github.com/huggingface/transformers/issues/1662\r\n\r\nThe problem could be solved by switching to multilingual tokenizer(if present), else it would require some hot fix. " ]
1,604
1,604
1,604
NONE
null
## Environment info - `transformers` version: 3.1.0 - Platform: pytorch - Python version: python 3.6.9 - PyTorch version (GPU?): 1.6.0 - Tensorflow version (GPU?): na - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information Model I am using (tokenizer && model == google/electra-large-generator): The problem arises when using: * [x ] my own modified scripts: (give details below) self.fill_mask = pipeline("fill-mask", model="google/electra-large-generator",\ tokenizer="google/electra-large-generator") The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) a simple fill-mask type of task with the transformers pipeline ## To reproduce Steps to reproduce the behavior: 1. fill_mask = pipeline("fill-mask", model="google/electra-large-generator",\ tokenizer="google/electra-large-generator") 2. fill_mask.tokenizer.tokenize(""" ̈ """) 3. output is [] ## Expected behavior In this case the absence of this punctuation would cause some downstream usage to fail due to IndexError. The problem might be intrinsic to this particular tokenizer, maybe it is worthwhile to raise a warning/error or return an unk token? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8411/timeline
completed
null
null
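A minimal sketch of the empty-tokenization behaviour reported in the record above, with a hand-rolled fallback to the unknown token. The model id is taken from the report; the fallback is illustrative and not something the tokenizer or pipeline does on its own.

```python
from transformers import AutoTokenizer

# model id taken from the report above
tokenizer = AutoTokenizer.from_pretrained("google/electra-large-generator")

# a lone combining diaeresis, like the character in the report, tokenizes to []
tokens = tokenizer.tokenize(" \u0308 ")
if not tokens:
    # illustrative guard: substitute the unk token to avoid a downstream IndexError
    tokens = [tokenizer.unk_token]
print(tokens)  # [] without the guard, the unk token with it
```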
https://api.github.com/repos/huggingface/transformers/issues/8410
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8410/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8410/comments
https://api.github.com/repos/huggingface/transformers/issues/8410/events
https://github.com/huggingface/transformers/pull/8410
738,541,049
MDExOlB1bGxSZXF1ZXN0NTE3MzYwNzA5
8,410
comet_ml init weirdness
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the report... I'll take a look at this.", "Again, if you find out how comet_ml got installed, that would be helpful.", "> Again, if you find out how comet_ml got installed, that would be helpful.\r\n\r\nYay, I found at least one of them `pytorch-lightning`. https://github.com/PyTorchLightning/pytorch-lightning/blob/master/environment.yml#L49\r\n\r\nI have a feeling `pipdeptree` misses packages not installed directly via pypi deps, but via a local `pip install -e .[dev]`\r\n\r\nI probably should just go ahead and create the key - though I have no use for it - please let me know whether it'd serve better if I didn't and continued reporting any related problems.", "Thanks for tracking down pytorch's dependencies. I see that that is for conda. What is weird though is that I can't figure out how `comet_ml.config` wouldn't be defined. In any event, this PR seems fine. (I'd like to get to the bottom of this at some point).\r\n\r\nRemember that you don't have to set a `COMET_API_KEY` (unless you really want to log stuff). You can also set `COMET_MODE=\"DISABLED\"`. ", "I think it could be some fragile error handling. As I mentioned the crash happened when I mistakenly provided a bogus value to one of the pytorch-lightening clargs - so it was supposed to fail telling me that the argument was wrong, but instead failed with the error posted in OP. Once the band-aid was added it reported the error properly w/o crashing. ", "I still have it re-producable w/o rebasing to this fix:\r\n\r\n```\r\ncd examples/seq2seq\r\nBS=2; PYTHONPATH=\"../../src\" python finetune.py --data_dir cnn_dm --do_predict --do_train --eval_batch_size $BS --fp16 --fp16_opt_level O1 --freeze_embeds --freeze_encoder --gpus 1 --gradient_accumulation_steps 1 --learning_rate 3e-5 --max_target_length 142 --model_name_or_path sshleifer/student_cnn_12_6 --n_val 500 --num_train_epochs 2 --output_dir distilbart-cnn-12-6 --tokenizer_name facebook/bart-large --train_batch_size $BS --val_check_interval 0.25 --val_max_target_length 142 --warmup_steps 500\r\n```\r\ngiving:\r\n```\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:36: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import namedtuple, Mapping, Sequence\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 18, in <module>\r\n from callbacks import Seq2SeqLoggingCallback, get_checkpoint_callback, get_early_stopping_callback\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/examples/seq2seq/callbacks.py\", line 11, in <module>\r\n from utils import save_json\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/examples/seq2seq/utils.py\", line 22, in <module>\r\n from transformers import BartTokenizer, EvalPrediction, PreTrainedTokenizer, 
T5Tokenizer\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/src/transformers/__init__.py\", line 22, in <module>\r\n from .integrations import ( # isort:skip\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/src/transformers/integrations.py\", line 17, in <module>\r\n if comet_ml.config.get_config(\"comet.api_key\"):\r\nAttributeError: module 'comet_ml' has no attribute 'config'\r\n```\r\nSee if you get it yourself (pre-this PR)? or if you want me to try something let me know \r\n", "Hmm, actually the crash described in the OP happens all the time pre this PR. I even uninstalled and reinstalled `comet_ml` via `pip`.", "I figured it out:\r\n\r\n```\r\npython -c \"import comet_ml; print(comet_ml.config)\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/__init__.py\", line 34, in <module>\r\n from .api import API, APIExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/api.py\", line 28, in <module>\r\n from .experiment import CommonExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/experiment.py\", line 97, in <module>\r\n from .gpu_logging import (\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/gpu_logging.py\", line 27, in <module>\r\n import pynvml\r\nModuleNotFoundError: No module named 'pynvml'\r\n```\r\nIf I install `pynvml` the error disappers.\r\n\r\nSo basically `comet_ml` fails to load and the try: block ignores the exception.", "The bug is somewhere in seq2seq utils (**edit**: doesn't seem to be the case)\r\n\r\nFollowing the trace in https://github.com/huggingface/transformers/pull/8410#issuecomment-724281982\r\n\r\nThis does the right thing:\r\n```\r\ncd examples/seq2seq\r\nPYTHONPATH=\"../../src\" python -c \"import utils\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/examples/seq2seq/utils.py\", line 22, in <module>\r\n from transformers import BartTokenizer, EvalPrediction, PreTrainedTokenizer, T5Tokenizer\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/src/transformers/__init__.py\", line 22, in <module>\r\n from .integrations import ( # isort:skip\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/src/transformers/integrations.py\", line 16, in <module>\r\n import comet_ml # noqa: F401\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/__init__.py\", line 34, in <module>\r\n from .api import API, APIExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/api.py\", line 28, in <module>\r\n from .experiment import CommonExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/experiment.py\", line 97, in <module>\r\n from .gpu_logging import (\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/gpu_logging.py\", line 27, in <module>\r\n import pynvml\r\nModuleNotFoundError: No module named 'pynvml'\r\n```\r\nbut one above that imports `utils` eats the exception:\r\n```\r\nPYTHONPATH=\"../../src\" python -c \"import callbacks\"\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:36: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 
it will stop working\r\n from collections import namedtuple, Mapping, Sequence\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n2020-11-09 14:15:06.573247: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n/mnt/nvme1/code/github/00nlp/fairseq/fairseq/optim/adam.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import Collection\r\n(main-38) /mnt/nvme1/code/huggingface/transformers-master-stas00/examples/seq2seq [stas00/transformers|patch-3|+1?8]> PYTHONPATH=\"../../src\" python -c \"import callbacks\"\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:36: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import namedtuple, Mapping, Sequence\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n2020-11-09 14:15:31.366275: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n/mnt/nvme1/code/github/00nlp/fairseq/fairseq/optim/adam.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import Collection\r\n```\r\n", "Oh, great find! Thanks for this... we'll take a further look on our end and get something out soon.", "This is very odd, since I don't see any `try` blocks around this sequence of imports - except at the end inside `integrations.py` and even if I remove it there, the exception is still suppressed. `import utils` catches the error, but `import callbacks` which imports `utils` suppresses it (see https://github.com/huggingface/transformers/pull/8410#issuecomment-724312036).\r\n\r\nCould this somehow be related to this:\r\n\r\n```\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n```\r\n\r\nDo you by chance mess something up with the importer there? 
it feels like something overrides `import` because suddenly it ignores import errors. (**edit**: ruled that out too - see the next comment)", "It's possibly the doings of `PL`, observe this:\r\n```\r\nPYTHONPATH=\"../../src\" python -c \"import utils\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/examples/seq2seq/utils.py\", line 22, in <module>\r\n from transformers import BartTokenizer, EvalPrediction, PreTrainedTokenizer, T5Tokenizer\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/src/transformers/__init__.py\", line 22, in <module>\r\n from .integrations import ( # isort:skip\r\n File \"/mnt/nvme1/code/huggingface/transformers-master-stas00/src/transformers/integrations.py\", line 16, in <module>\r\n import comet_ml # noqa: F401\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/__init__.py\", line 34, in <module>\r\n from .api import API, APIExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/api.py\", line 28, in <module>\r\n from .experiment import CommonExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/experiment.py\", line 97, in <module>\r\n from .gpu_logging import (\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/gpu_logging.py\", line 27, in <module>\r\n import pynvml\r\nModuleNotFoundError: No module named 'pynvml'\r\n```\r\nand now let's add `import pytorch_lightning` first:\r\n```\r\nPYTHONPATH=\"../../src\" python -c \"import pytorch_lightning; import utils\"\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/util.py:36: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import namedtuple, Mapping, Sequence\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/wandb/vendor/graphql-core-1.1/graphql/type/directives.py:55: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n assert isinstance(locations, collections.Iterable), 'Must provide locations for directive.'\r\n/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/monkey_patching.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses\r\n import imp\r\n2020-11-09 14:35:24.485463: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\n/mnt/nvme1/code/github/00nlp/fairseq/fairseq/optim/adam.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working\r\n from collections import Collection\r\n```\r\n\r\nThe exception was suppressed - so most likely it's PL's doing - most likely `import` functionality gets modified incorrectly.\r\n\r\nDo you have enough @dsblank to proceed from here? this is no longer `transformers`-related.", "Yes, I'll take it from here. We have a fix, and I'll test some more tomorrow and probably have a new comet_ml release out shortly. 
Thanks again!", "Oh, and I forgot to give the fully isolated command that reproduces the problem:\r\n```\r\npython -c \"import pytorch_lightning; import comet_ml\"\r\n```\r\ndoesn't fail and it should\r\n\r\nwhereas:\r\n```\r\npython -c \"import comet_ml\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/__init__.py\", line 34, in <module>\r\n from .api import API, APIExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/api.py\", line 28, in <module>\r\n from .experiment import CommonExperiment\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/experiment.py\", line 97, in <module>\r\n from .gpu_logging import (\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/comet_ml/gpu_logging.py\", line 27, in <module>\r\n import pynvml\r\nModuleNotFoundError: No module named 'pynvml'\r\n```\r\nas it should." ]
1,604
1,604
1,604
CONTRIBUTOR
null
Using a slightly bogus invocation of the following ``` cd examples/seq2seq PYTHONPATH="../../src" BS=2 python finetune.py --data_dir cnn_dm --do_predict --do_train --eval_batch_size $BS --fp16 --fp16_opt_level O1--freeze_embeds --freeze_encoder --gpus 1 --gradient_accumulation_steps 1 --learning_rate 3e-5 --max_target_length 142 --model_name_or_path sshleifer/student_cnn_12_6 --n_val 500 --num_train_epochs 2 --output_dir distilbart-cnn-12-6 --tokenizer_name facebook/bart-large --train_batch_size $BS --val_check_interval 0.25 --val_max_target_length 142 --warmup_steps 500 ``` I get: ``` Traceback (most recent call last): File "finetune.py", line 18, in <module> from callbacks import Seq2SeqLoggingCallback, get_checkpoint_callback, get_early_stopping_callback File "/mnt/nvme1/code/huggingface/transformers-comet_ml/examples/seq2seq/callbacks.py", line 11, in <module> from utils import save_json File "/mnt/nvme1/code/huggingface/transformers-comet_ml/examples/seq2seq/utils.py", line 22, in <module> from transformers import BartTokenizer, EvalPrediction, PreTrainedTokenizer, T5Tokenizer File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/integrations.py", line 17, in <module> if comet_ml.config.get_config("comet.api_key"): AttributeError: module 'comet_ml' has no attribute 'config' ``` no idea why this happens. this PR adds a band-aid - and then once I apply it - the real error shows up `$BS` wasn't defined - i.e. my args had an issue. But I need to see the real error and not some totally unrelated `comet_ml` error. perhaps something else needs to be fixed. While we are at it, once again I ended up with `comet_ml` w/o explicitly installing it. So I again get to enjoy the incessant: ``` comet_ml is installed but `COMET_API_KEY` is not set. ``` :( ``` pipdeptree --reverse --packages comet_ml ``` doesn't give me the parent who pulled it in which is very odd, since it works for other packages. Is there a different way to trace what pulled in a certain package? If I install/update it explicitly it appears I already have the latest version: ``` pip install comet_ml -U Requirement already up-to-date: comet_ml in /mnt/nvme1/anaconda3/envs/main-38/lib/python3.8/site-packages (3.2.5) ``` @sgugger, @dsblank
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8410/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8410", "html_url": "https://github.com/huggingface/transformers/pull/8410", "diff_url": "https://github.com/huggingface/transformers/pull/8410.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8410.patch", "merged_at": 1604910967000 }
https://api.github.com/repos/huggingface/transformers/issues/8409
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8409/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8409/comments
https://api.github.com/repos/huggingface/transformers/issues/8409/events
https://github.com/huggingface/transformers/pull/8409
738,535,548
MDExOlB1bGxSZXF1ZXN0NTE3MzU2ODE3
8,409
Bug fix for permutation language modelling
{ "login": "shngt", "id": 20009551, "node_id": "MDQ6VXNlcjIwMDA5NTUx", "avatar_url": "https://avatars.githubusercontent.com/u/20009551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shngt", "html_url": "https://github.com/shngt", "followers_url": "https://api.github.com/users/shngt/followers", "following_url": "https://api.github.com/users/shngt/following{/other_user}", "gists_url": "https://api.github.com/users/shngt/gists{/gist_id}", "starred_url": "https://api.github.com/users/shngt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shngt/subscriptions", "organizations_url": "https://api.github.com/users/shngt/orgs", "repos_url": "https://api.github.com/users/shngt/repos", "events_url": "https://api.github.com/users/shngt/events{/privacy}", "received_events_url": "https://api.github.com/users/shngt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
Addresses a bug in permutation language modelling data collator, where `&` is used instead of `|` to compute the non-functional token mask (tokens excluding [PAD], [SEP], [CLS]). For verification, may refer to original XLNet code (https://github.com/zihangdai/xlnet/blob/master/data_utils.py#L602). Addresses #6812 (further investigation needed, however) @patrickvonplaten @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8409/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8409", "html_url": "https://github.com/huggingface/transformers/pull/8409", "diff_url": "https://github.com/huggingface/transformers/pull/8409.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8409.patch", "merged_at": 1604935407000 }
https://api.github.com/repos/huggingface/transformers/issues/8408
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8408/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8408/comments
https://api.github.com/repos/huggingface/transformers/issues/8408/events
https://github.com/huggingface/transformers/pull/8408
738,491,640
MDExOlB1bGxSZXF1ZXN0NTE3MzI0NzI2
8,408
updating tag for exbert viz
{ "login": "smanjil", "id": 11598535, "node_id": "MDQ6VXNlcjExNTk4NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/11598535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smanjil", "html_url": "https://github.com/smanjil", "followers_url": "https://api.github.com/users/smanjil/followers", "following_url": "https://api.github.com/users/smanjil/following{/other_user}", "gists_url": "https://api.github.com/users/smanjil/gists{/gist_id}", "starred_url": "https://api.github.com/users/smanjil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smanjil/subscriptions", "organizations_url": "https://api.github.com/users/smanjil/orgs", "repos_url": "https://api.github.com/users/smanjil/repos", "events_url": "https://api.github.com/users/smanjil/events{/privacy}", "received_events_url": "https://api.github.com/users/smanjil/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Note that `exbert` integration is unfortunately not automatic. The authors from Exbert (@bhoov @HendrikStrobelt) need to re-deploy manually to add support for a new model. We've discussed with @Narsil @mfuntowicz hooking this into the hosted Inference API, so this might be automatic in the future.\r\n\r\ncc'ing @JetRunner \r\n\r\nTo help us understand, what's your use case for ExBERT @smanjil?", "> Note that `exbert` integration is unfortunately not automatic. The authors from Exbert (@bhoov @HendrikStrobelt) need to re-deploy manually to add support for a new model. We've discussed with @Narsil @mfuntowicz hooking this into the hosted Inference API, so this might be automatic in the future.\r\n> \r\n> cc'ing @JetRunner\r\n> \r\n> To help us understand, what's your use case for ExBERT @smanjil?\r\n\r\n\r\n@julien-c I did not know that. \r\n\r\nAs this model is a fine-tuned model for German medical domain texts, I wanted to see the attention distribution as done in German BERT. \r\n\r\nI believe this will be helpful for me as well as others to understand the effects of fine-tuning. Lastly, I tried with jessevig in colab, but, I have to fire it up in colab everytime. And, it was difficult to load the fine-tuned model there.\r\n\r\nSo, I am looking for a possibility here, and hope it will be done.", "@julien-c Oh I thought the model was already added to ExBERT. Good to know!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8408/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8408", "html_url": "https://github.com/huggingface/transformers/pull/8408", "diff_url": "https://github.com/huggingface/transformers/pull/8408.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8408.patch", "merged_at": 1604911435000 }
https://api.github.com/repos/huggingface/transformers/issues/8407
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8407/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8407/comments
https://api.github.com/repos/huggingface/transformers/issues/8407/events
https://github.com/huggingface/transformers/issues/8407
738,470,169
MDU6SXNzdWU3Mzg0NzAxNjk=
8,407
All the weights of the model checkpoint at roberta-base were not used when initializing
{ "login": "xujiaz2000", "id": 72122139, "node_id": "MDQ6VXNlcjcyMTIyMTM5", "avatar_url": "https://avatars.githubusercontent.com/u/72122139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xujiaz2000", "html_url": "https://github.com/xujiaz2000", "followers_url": "https://api.github.com/users/xujiaz2000/followers", "following_url": "https://api.github.com/users/xujiaz2000/following{/other_user}", "gists_url": "https://api.github.com/users/xujiaz2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/xujiaz2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xujiaz2000/subscriptions", "organizations_url": "https://api.github.com/users/xujiaz2000/orgs", "repos_url": "https://api.github.com/users/xujiaz2000/repos", "events_url": "https://api.github.com/users/xujiaz2000/events{/privacy}", "received_events_url": "https://api.github.com/users/xujiaz2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "but when i use the same way to initializing `MyBert` which is based on the `BertModel`, it works and have a good result.", "This is probably because you put `self.bert = RobertaModel(config)`. It's looking for the identifier `roberta`, which isn't in your model.\r\n\r\nTry replacing `self.roberta = RobertaModel(config)`?", "> This is probably because you put `self.bert = RobertaModel(config)`. It's looking for the identifier `roberta`, which isn't in your model.\n> \n> \n> \n> Try replacing `self.roberta = RobertaModel(config)`?\n\nThank you very much for replying, i did as you said and replaced `self.cls = RobertaLMHead(config)` with `self.lm_head = RobertaLMHead(config)`, it worked well, Thanks!", "Hi @LysandreJik, sorry for reviving this old thread, but could you point to me where can I find this info in the docs? I'm interested to know what is the identifier used for different models." ]
1,604
1,649
1,605
NONE
null
# How to use the model checkpoint to initializing my own roberta? **I wrote a class named `MyRoberta` at `modeling_roberta.py`, the main code is just like below:** ```python class MyRoberta(RobertaPreTrainedModel): def __init__(self, config): super(MyRoberta, self).__init__(config) # self.bert = RobertaModel.from_pretrained("model/roberta_base/", config=config) self.bert = RobertaModel(config) self.cls = RobertaLMHead(config) # ... ``` **and then I initialized it using the code below:** ```python config = RobertaConfig.from_pretrained("roberta-base") logging("Model config {}".format(config)) model = MyRoberta.from_pretrained("roberta-base", mirror="tuna", cache_dir="./model/", config=config) ``` **however, one warning message shows that i didn't using the pretrained parameters.** ``` Some weights of the model checkpoint at roberta-base were not used when initializing MyRoberta: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 
'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 
'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 'roberta.encoder.layer.7.output.LayerNorm.bias', 'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 
'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing MyRoberta from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model). - This IS NOT expected if you are initializing MyRoberta from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of MyRoberta were not initialized from the model checkpoint at roberta-base and are newly initialized: ['roberta.bert.embeddings.position_ids', 'roberta.bert.embeddings.word_embeddings.weight', 'roberta.bert.embeddings.position_embeddings.weight', 'roberta.bert.embeddings.token_type_embeddings.weight', 'roberta.bert.embeddings.LayerNorm.weight', 'roberta.bert.embeddings.LayerNorm.bias', 'roberta.bert.encoder.layer.0.attention.self.query.weight', 'roberta.bert.encoder.layer.0.attention.self.query.bias', 'roberta.bert.encoder.layer.0.attention.self.key.weight', 'roberta.bert.encoder.layer.0.attention.self.key.bias', 'roberta.bert.encoder.layer.0.attention.self.value.weight', 'roberta.bert.encoder.layer.0.attention.self.value.bias', 'roberta.bert.encoder.layer.0.attention.output.dense.weight', 'roberta.bert.encoder.layer.0.attention.output.dense.bias', 'roberta.bert.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.0.intermediate.dense.weight', 'roberta.bert.encoder.layer.0.intermediate.dense.bias', 'roberta.bert.encoder.layer.0.output.dense.weight', 'roberta.bert.encoder.layer.0.output.dense.bias', 'roberta.bert.encoder.layer.0.output.LayerNorm.weight', 'roberta.bert.encoder.layer.0.output.LayerNorm.bias', 'roberta.bert.encoder.layer.1.attention.self.query.weight', 'roberta.bert.encoder.layer.1.attention.self.query.bias', 'roberta.bert.encoder.layer.1.attention.self.key.weight', 'roberta.bert.encoder.layer.1.attention.self.key.bias', 'roberta.bert.encoder.layer.1.attention.self.value.weight', 'roberta.bert.encoder.layer.1.attention.self.value.bias', 'roberta.bert.encoder.layer.1.attention.output.dense.weight', 'roberta.bert.encoder.layer.1.attention.output.dense.bias', 'roberta.bert.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.1.intermediate.dense.weight', 'roberta.bert.encoder.layer.1.intermediate.dense.bias', 'roberta.bert.encoder.layer.1.output.dense.weight', 'roberta.bert.encoder.layer.1.output.dense.bias', 'roberta.bert.encoder.layer.1.output.LayerNorm.weight', 'roberta.bert.encoder.layer.1.output.LayerNorm.bias', 
'roberta.bert.encoder.layer.2.attention.self.query.weight', 'roberta.bert.encoder.layer.2.attention.self.query.bias', 'roberta.bert.encoder.layer.2.attention.self.key.weight', 'roberta.bert.encoder.layer.2.attention.self.key.bias', 'roberta.bert.encoder.layer.2.attention.self.value.weight', 'roberta.bert.encoder.layer.2.attention.self.value.bias', 'roberta.bert.encoder.layer.2.attention.output.dense.weight', 'roberta.bert.encoder.layer.2.attention.output.dense.bias', 'roberta.bert.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.2.intermediate.dense.weight', 'roberta.bert.encoder.layer.2.intermediate.dense.bias', 'roberta.bert.encoder.layer.2.output.dense.weight', 'roberta.bert.encoder.layer.2.output.dense.bias', 'roberta.bert.encoder.layer.2.output.LayerNorm.weight', 'roberta.bert.encoder.layer.2.output.LayerNorm.bias', 'roberta.bert.encoder.layer.3.attention.self.query.weight', 'roberta.bert.encoder.layer.3.attention.self.query.bias', 'roberta.bert.encoder.layer.3.attention.self.key.weight', 'roberta.bert.encoder.layer.3.attention.self.key.bias', 'roberta.bert.encoder.layer.3.attention.self.value.weight', 'roberta.bert.encoder.layer.3.attention.self.value.bias', 'roberta.bert.encoder.layer.3.attention.output.dense.weight', 'roberta.bert.encoder.layer.3.attention.output.dense.bias', 'roberta.bert.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.3.intermediate.dense.weight', 'roberta.bert.encoder.layer.3.intermediate.dense.bias', 'roberta.bert.encoder.layer.3.output.dense.weight', 'roberta.bert.encoder.layer.3.output.dense.bias', 'roberta.bert.encoder.layer.3.output.LayerNorm.weight', 'roberta.bert.encoder.layer.3.output.LayerNorm.bias', 'roberta.bert.encoder.layer.4.attention.self.query.weight', 'roberta.bert.encoder.layer.4.attention.self.query.bias', 'roberta.bert.encoder.layer.4.attention.self.key.weight', 'roberta.bert.encoder.layer.4.attention.self.key.bias', 'roberta.bert.encoder.layer.4.attention.self.value.weight', 'roberta.bert.encoder.layer.4.attention.self.value.bias', 'roberta.bert.encoder.layer.4.attention.output.dense.weight', 'roberta.bert.encoder.layer.4.attention.output.dense.bias', 'roberta.bert.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.4.intermediate.dense.weight', 'roberta.bert.encoder.layer.4.intermediate.dense.bias', 'roberta.bert.encoder.layer.4.output.dense.weight', 'roberta.bert.encoder.layer.4.output.dense.bias', 'roberta.bert.encoder.layer.4.output.LayerNorm.weight', 'roberta.bert.encoder.layer.4.output.LayerNorm.bias', 'roberta.bert.encoder.layer.5.attention.self.query.weight', 'roberta.bert.encoder.layer.5.attention.self.query.bias', 'roberta.bert.encoder.layer.5.attention.self.key.weight', 'roberta.bert.encoder.layer.5.attention.self.key.bias', 'roberta.bert.encoder.layer.5.attention.self.value.weight', 'roberta.bert.encoder.layer.5.attention.self.value.bias', 'roberta.bert.encoder.layer.5.attention.output.dense.weight', 'roberta.bert.encoder.layer.5.attention.output.dense.bias', 'roberta.bert.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.5.intermediate.dense.weight', 'roberta.bert.encoder.layer.5.intermediate.dense.bias', 'roberta.bert.encoder.layer.5.output.dense.weight', 
'roberta.bert.encoder.layer.5.output.dense.bias', 'roberta.bert.encoder.layer.5.output.LayerNorm.weight', 'roberta.bert.encoder.layer.5.output.LayerNorm.bias', 'roberta.bert.encoder.layer.6.attention.self.query.weight', 'roberta.bert.encoder.layer.6.attention.self.query.bias', 'roberta.bert.encoder.layer.6.attention.self.key.weight', 'roberta.bert.encoder.layer.6.attention.self.key.bias', 'roberta.bert.encoder.layer.6.attention.self.value.weight', 'roberta.bert.encoder.layer.6.attention.self.value.bias', 'roberta.bert.encoder.layer.6.attention.output.dense.weight', 'roberta.bert.encoder.layer.6.attention.output.dense.bias', 'roberta.bert.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.6.intermediate.dense.weight', 'roberta.bert.encoder.layer.6.intermediate.dense.bias', 'roberta.bert.encoder.layer.6.output.dense.weight', 'roberta.bert.encoder.layer.6.output.dense.bias', 'roberta.bert.encoder.layer.6.output.LayerNorm.weight', 'roberta.bert.encoder.layer.6.output.LayerNorm.bias', 'roberta.bert.encoder.layer.7.attention.self.query.weight', 'roberta.bert.encoder.layer.7.attention.self.query.bias', 'roberta.bert.encoder.layer.7.attention.self.key.weight', 'roberta.bert.encoder.layer.7.attention.self.key.bias', 'roberta.bert.encoder.layer.7.attention.self.value.weight', 'roberta.bert.encoder.layer.7.attention.self.value.bias', 'roberta.bert.encoder.layer.7.attention.output.dense.weight', 'roberta.bert.encoder.layer.7.attention.output.dense.bias', 'roberta.bert.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.7.intermediate.dense.weight', 'roberta.bert.encoder.layer.7.intermediate.dense.bias', 'roberta.bert.encoder.layer.7.output.dense.weight', 'roberta.bert.encoder.layer.7.output.dense.bias', 'roberta.bert.encoder.layer.7.output.LayerNorm.weight', 'roberta.bert.encoder.layer.7.output.LayerNorm.bias', 'roberta.bert.encoder.layer.8.attention.self.query.weight', 'roberta.bert.encoder.layer.8.attention.self.query.bias', 'roberta.bert.encoder.layer.8.attention.self.key.weight', 'roberta.bert.encoder.layer.8.attention.self.key.bias', 'roberta.bert.encoder.layer.8.attention.self.value.weight', 'roberta.bert.encoder.layer.8.attention.self.value.bias', 'roberta.bert.encoder.layer.8.attention.output.dense.weight', 'roberta.bert.encoder.layer.8.attention.output.dense.bias', 'roberta.bert.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.8.intermediate.dense.weight', 'roberta.bert.encoder.layer.8.intermediate.dense.bias', 'roberta.bert.encoder.layer.8.output.dense.weight', 'roberta.bert.encoder.layer.8.output.dense.bias', 'roberta.bert.encoder.layer.8.output.LayerNorm.weight', 'roberta.bert.encoder.layer.8.output.LayerNorm.bias', 'roberta.bert.encoder.layer.9.attention.self.query.weight', 'roberta.bert.encoder.layer.9.attention.self.query.bias', 'roberta.bert.encoder.layer.9.attention.self.key.weight', 'roberta.bert.encoder.layer.9.attention.self.key.bias', 'roberta.bert.encoder.layer.9.attention.self.value.weight', 'roberta.bert.encoder.layer.9.attention.self.value.bias', 'roberta.bert.encoder.layer.9.attention.output.dense.weight', 'roberta.bert.encoder.layer.9.attention.output.dense.bias', 'roberta.bert.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.9.attention.output.LayerNorm.bias', 
'roberta.bert.encoder.layer.9.intermediate.dense.weight', 'roberta.bert.encoder.layer.9.intermediate.dense.bias', 'roberta.bert.encoder.layer.9.output.dense.weight', 'roberta.bert.encoder.layer.9.output.dense.bias', 'roberta.bert.encoder.layer.9.output.LayerNorm.weight', 'roberta.bert.encoder.layer.9.output.LayerNorm.bias', 'roberta.bert.encoder.layer.10.attention.self.query.weight', 'roberta.bert.encoder.layer.10.attention.self.query.bias', 'roberta.bert.encoder.layer.10.attention.self.key.weight', 'roberta.bert.encoder.layer.10.attention.self.key.bias', 'roberta.bert.encoder.layer.10.attention.self.value.weight', 'roberta.bert.encoder.layer.10.attention.self.value.bias', 'roberta.bert.encoder.layer.10.attention.output.dense.weight', 'roberta.bert.encoder.layer.10.attention.output.dense.bias', 'roberta.bert.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.10.intermediate.dense.weight', 'roberta.bert.encoder.layer.10.intermediate.dense.bias', 'roberta.bert.encoder.layer.10.output.dense.weight', 'roberta.bert.encoder.layer.10.output.dense.bias', 'roberta.bert.encoder.layer.10.output.LayerNorm.weight', 'roberta.bert.encoder.layer.10.output.LayerNorm.bias', 'roberta.bert.encoder.layer.11.attention.self.query.weight', 'roberta.bert.encoder.layer.11.attention.self.query.bias', 'roberta.bert.encoder.layer.11.attention.self.key.weight', 'roberta.bert.encoder.layer.11.attention.self.key.bias', 'roberta.bert.encoder.layer.11.attention.self.value.weight', 'roberta.bert.encoder.layer.11.attention.self.value.bias', 'roberta.bert.encoder.layer.11.attention.output.dense.weight', 'roberta.bert.encoder.layer.11.attention.output.dense.bias', 'roberta.bert.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.bert.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.bert.encoder.layer.11.intermediate.dense.weight', 'roberta.bert.encoder.layer.11.intermediate.dense.bias', 'roberta.bert.encoder.layer.11.output.dense.weight', 'roberta.bert.encoder.layer.11.output.dense.bias', 'roberta.bert.encoder.layer.11.output.LayerNorm.weight', 'roberta.bert.encoder.layer.11.output.LayerNorm.bias', 'roberta.bert.pooler.dense.weight', 'roberta.bert.pooler.dense.bias', 'roberta.cls.bias', 'roberta.cls.dense.weight', 'roberta.cls.dense.bias', 'roberta.cls.layer_norm.weight', 'roberta.cls.layer_norm.bias', 'roberta.cls.decoder.weight', 'roberta.cls.decoder.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8407/timeline
completed
null
null
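A minimal sketch of the fix suggested in the discussion above: name the submodules `roberta` and `lm_head` so that `from_pretrained` can match the checkpoint's `roberta.*` parameter names. Class names and import paths follow the transformers version used in the report and are illustrative only.

```python
from transformers import RobertaConfig, RobertaModel
from transformers.modeling_roberta import RobertaLMHead, RobertaPreTrainedModel


class MyRoberta(RobertaPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        # the attribute must be called `roberta` (not `bert`) to match checkpoint keys
        self.roberta = RobertaModel(config)
        # the head must be called `lm_head` (not `cls`) for the same reason
        self.lm_head = RobertaLMHead(config)
        self.init_weights()


config = RobertaConfig.from_pretrained("roberta-base")
model = MyRoberta.from_pretrained("roberta-base", config=config)
```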
https://api.github.com/repos/huggingface/transformers/issues/8406
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8406/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8406/comments
https://api.github.com/repos/huggingface/transformers/issues/8406/events
https://github.com/huggingface/transformers/pull/8406
738,445,215
MDExOlB1bGxSZXF1ZXN0NTE3MjkwODIy
8,406
Update README.md
{ "login": "dartrevan", "id": 24587263, "node_id": "MDQ6VXNlcjI0NTg3MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/24587263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dartrevan", "html_url": "https://github.com/dartrevan", "followers_url": "https://api.github.com/users/dartrevan/followers", "following_url": "https://api.github.com/users/dartrevan/following{/other_user}", "gists_url": "https://api.github.com/users/dartrevan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dartrevan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dartrevan/subscriptions", "organizations_url": "https://api.github.com/users/dartrevan/orgs", "repos_url": "https://api.github.com/users/dartrevan/repos", "events_url": "https://api.github.com/users/dartrevan/events{/privacy}", "received_events_url": "https://api.github.com/users/dartrevan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8406/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8406", "html_url": "https://github.com/huggingface/transformers/pull/8406", "diff_url": "https://github.com/huggingface/transformers/pull/8406.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8406.patch", "merged_at": 1604911484000 }
https://api.github.com/repos/huggingface/transformers/issues/8405
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8405/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8405/comments
https://api.github.com/repos/huggingface/transformers/issues/8405/events
https://github.com/huggingface/transformers/pull/8405
738,444,776
MDExOlB1bGxSZXF1ZXN0NTE3MjkwNTE0
8,405
Update README.md
{ "login": "dartrevan", "id": 24587263, "node_id": "MDQ6VXNlcjI0NTg3MjYz", "avatar_url": "https://avatars.githubusercontent.com/u/24587263?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dartrevan", "html_url": "https://github.com/dartrevan", "followers_url": "https://api.github.com/users/dartrevan/followers", "following_url": "https://api.github.com/users/dartrevan/following{/other_user}", "gists_url": "https://api.github.com/users/dartrevan/gists{/gist_id}", "starred_url": "https://api.github.com/users/dartrevan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dartrevan/subscriptions", "organizations_url": "https://api.github.com/users/dartrevan/orgs", "repos_url": "https://api.github.com/users/dartrevan/repos", "events_url": "https://api.github.com/users/dartrevan/events{/privacy}", "received_events_url": "https://api.github.com/users/dartrevan/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8405/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8405", "html_url": "https://github.com/huggingface/transformers/pull/8405", "diff_url": "https://github.com/huggingface/transformers/pull/8405.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8405.patch", "merged_at": 1605723789000 }
https://api.github.com/repos/huggingface/transformers/issues/8404
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8404/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8404/comments
https://api.github.com/repos/huggingface/transformers/issues/8404/events
https://github.com/huggingface/transformers/issues/8404
738,419,402
MDU6SXNzdWU3Mzg0MTk0MDI=
8,404
Tokenizer problem for model 'patrickvonplaten/longformer-random-tiny'
{ "login": "lessenko", "id": 40150500, "node_id": "MDQ6VXNlcjQwMTUwNTAw", "avatar_url": "https://avatars.githubusercontent.com/u/40150500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lessenko", "html_url": "https://github.com/lessenko", "followers_url": "https://api.github.com/users/lessenko/followers", "following_url": "https://api.github.com/users/lessenko/following{/other_user}", "gists_url": "https://api.github.com/users/lessenko/gists{/gist_id}", "starred_url": "https://api.github.com/users/lessenko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lessenko/subscriptions", "organizations_url": "https://api.github.com/users/lessenko/orgs", "repos_url": "https://api.github.com/users/lessenko/repos", "events_url": "https://api.github.com/users/lessenko/events{/privacy}", "received_events_url": "https://api.github.com/users/lessenko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @lessenko - the model is useless for any real application as it's just randomly initialized. It's only used for testing purposes.", "Hi @patrickvonplaten,\r\nThanks. " ]
1,604
1,604
1,604
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Ubuntu 18.04.5 LTS - Python version: Python 3.6.9 - PyTorch version (GPU?):1.6.0 - Tensorflow version (GPU?):2.3.1 - Using GPU in script?:No - Using distributed or parallel set-up in script?:No ### Who can help T5: @patrickvonplaten <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (patrickvonplaten/longformer-random-tiny): The problem arises when using: * [ ] the official example scripts: (give details below) https://huggingface.co/patrickvonplaten/longformer-random-tiny ## To reproduce Steps to reproduce the behavior: 1. Running the script. 2. Result: _2020-11-08 11:09:14.631646: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory 2020-11-08 11:09:14.631672: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "test_tiny.py", line 3, in <module> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/longformer-random-tiny") File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py", line 333, in from_pretrained return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1591, in from_pretrained list(cls.vocab_files_names.values()), OSError: Model name 'patrickvonplaten/longformer-random-tiny' was not found in tokenizers model name list (allenai/longformer-base-4096, allenai/longformer-large-4096, allenai/longformer-large-4096-finetuned-triviaqa, allenai/longformer-base-4096-extra.pos.embd.only, allenai/longformer-large-4096-extra.pos.embd.only). We assumed 'patrickvonplaten/longformer-random-tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. _ <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I suppose two files are needed - 'vocab.json', 'merges.txt'. Thanks. <!-- A clear and concise description of what you would expect to happen. -->
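A minimal sketch of a workaround consistent with the maintainer's reply above (the checkpoint is randomly initialized and meant only for testing, so it ships no tokenizer files): borrow the tokenizer of a real Longformer checkpoint for smoke tests. The pairing with `allenai/longformer-base-4096` is an assumption for illustration, not something the model card prescribes.

```python
from transformers import AutoModel, AutoTokenizer

# The random-tiny repo has no vocab.json/merges.txt, so reuse the tokenizer
# of a real Longformer checkpoint purely for smoke-testing purposes.
tokenizer = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
model = AutoModel.from_pretrained("patrickvonplaten/longformer-random-tiny")

inputs = tokenizer("Hello world", return_tensors="pt")
print(inputs["input_ids"].shape)
```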
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8404/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8404/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8403
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8403/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8403/comments
https://api.github.com/repos/huggingface/transformers/issues/8403/events
https://github.com/huggingface/transformers/issues/8403
738,409,232
MDU6SXNzdWU3Mzg0MDkyMzI=
8,403
[s2s finetune] huge increase in memory demands with --fp16 native amp
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I managed to install `nvidia-apex` binary via conda:\r\n```\r\nconda install nvidia-apex -c conda-forge\r\n``` \r\nSo now I was able to validate that with pytorch-1.5 + nvdia-apex `--fp16` consumes less memory, than w/o `--fp16`. \r\n\r\nI was able to squeeze bs=20 (!) onto a 8gb card.\r\n\r\nSo the problem has to do with pytorch's 1.6 fp16\r\n", "Found another report of memory increase with fp16:\r\nhttps://discuss.pytorch.org/t/fp16-training-with-feedforward-network-slower-time-and-no-memory-reduction/95560/\r\n", "Michael Carilli suggested I add this to the top of the script to find the problem:\r\n```\r\nimport torch\r\ntorch.cuda.amp.autocast = \"hfdj\"\r\n```\r\nrerunning gives:\r\n```\r\nValidation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):\r\n File \"finetune.py\", line 449, in <module>\r\n main(args)\r\n File \"finetune.py\", line 416, in main\r\n trainer: pl.Trainer = generic_train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-watchdog/examples/lightning_base.py\", line 395, in generic_train\r\n trainer.fit(model)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 445, in fit\r\n results = self.accelerator_backend.train()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py\", line 54, in train\r\n results = self.train_or_test()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/accelerator.py\", line 74, in train_or_test\r\n results = self.trainer.train()\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 467, in train\r\n self.run_sanity_check(self.get_model())\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 671, in run_sanity_check\r\n _, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/trainer.py\", line 591, in run_evaluation\r\n output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py\", line 176, in evaluation_step\r\n output = self.trainer.accelerator_backend.validation_step(args)\r\n File \"/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/accelerators/gpu_accelerator.py\", line 75, in validation_step\r\n with torch.cuda.amp.autocast():\r\nTypeError: 'str' object is not callable\r\n```\r\n\r\nI am on the master of PL (pytorch-lightning), but tried bisecting on earlier versions (3 months back) with no change in behavior.", "The same issue, when I run the train_distilbart_xsum.sh, in sanity check step, meet OOM in 12G card", "Based on some initial debugging Lightning is calling into `trainer,model.validation_step`, which is calling `BartForConditionalGeneration` with fixed tensor sizes of:\r\n```python\r\ntorch.Size([48, 1, 1024]) torch.Size([50264, 1024]) torch.Size([1, 50264])\r\n```\r\nIn each iteration more tensors are allocated and never freed, which yields the OOM in the `sanity_check`.\r\n\r\nAMP run before OOM:\r\n```python\r\n|===========================================================================|\r\n| PyTorch CUDA memory summary, device ID 0 |\r\n|---------------------------------------------------------------------------|\r\n| CUDA OOMs: 0 | cudaMalloc retries: 0 
|\r\n|===========================================================================|\r\n| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |\r\n|---------------------------------------------------------------------------|\r\n| Allocated memory | 13553 MB | 13555 MB | 139840 MB | 126286 MB |\r\n| from large pool | 13551 MB | 13551 MB | 137780 MB | 124228 MB |\r\n| from small pool | 2 MB | 25 MB | 2059 MB | 2057 MB |\r\n|---------------------------------------------------------------------------|\r\n| Active memory | 13553 MB | 13555 MB | 139840 MB | 126286 MB |\r\n| from large pool | 13551 MB | 13551 MB | 137780 MB | 124228 MB |\r\n| from small pool | 2 MB | 25 MB | 2059 MB | 2057 MB |\r\n|---------------------------------------------------------------------------|\r\n| GPU reserved memory | 13888 MB | 13888 MB | 13888 MB | 0 B |\r\n| from large pool | 13858 MB | 13858 MB | 13858 MB | 0 B |\r\n| from small pool | 30 MB | 30 MB | 30 MB | 0 B |\r\n|---------------------------------------------------------------------------|\r\n| Non-releasable memory | 315554 KB | 913 MB | 109380 MB | 109072 MB |\r\n| from large pool | 313557 KB | 912 MB | 107043 MB | 106737 MB |\r\n| from small pool | 1997 KB | 8 MB | 2336 MB | 2334 MB |\r\n|---------------------------------------------------------------------------|\r\n| Allocations | 3403 | 3410 | 29885 | 26482 |\r\n| from large pool | 3161 | 3161 | 7265 | 4104 |\r\n| from small pool | 242 | 267 | 22620 | 22378 |\r\n|---------------------------------------------------------------------------|\r\n| Active allocs | 3403 | 3410 | 29885 | 26482 |\r\n| from large pool | 3161 | 3161 | 7265 | 4104 |\r\n| from small pool | 242 | 267 | 22620 | 22378 |\r\n|---------------------------------------------------------------------------|\r\n| GPU reserved segments | 368 | 368 | 368 | 0 |\r\n| from large pool | 353 | 353 | 353 | 0 |\r\n| from small pool | 15 | 15 | 15 | 0 |\r\n|---------------------------------------------------------------------------|\r\n| Non-releasable allocs | 204 | 210 | 14948 | 14744 |\r\n| from large pool | 200 | 205 | 3772 | 3572 |\r\n| from small pool | 4 | 20 | 11176 | 11172 |\r\n|===========================================================================|\r\n```\r\n\r\nFP32 run for the same step:\r\n```python\r\n|===========================================================================|\r\n| PyTorch CUDA memory summary, device ID 0 |\r\n|---------------------------------------------------------------------------|\r\n| CUDA OOMs: 0 | cudaMalloc retries: 0 |\r\n|===========================================================================|\r\n| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |\r\n|---------------------------------------------------------------------------|\r\n| Allocated memory | 3948 MB | 4000 MB | 203229 MB | 199281 MB |\r\n| from large pool | 3946 MB | 3998 MB | 201302 MB | 197356 MB |\r\n| from small pool | 2 MB | 24 MB | 1927 MB | 1925 MB |\r\n|---------------------------------------------------------------------------|\r\n| Active memory | 3948 MB | 4000 MB | 203229 MB | 199281 MB |\r\n| from large pool | 3946 MB | 3998 MB | 201302 MB | 197356 MB |\r\n| from small pool | 2 MB | 24 MB | 1927 MB | 1925 MB |\r\n|---------------------------------------------------------------------------|\r\n| GPU reserved memory | 4226 MB | 4226 MB | 4226 MB | 0 B |\r\n| from large pool | 4198 MB | 4198 MB | 4198 MB | 0 B |\r\n| from small pool | 28 MB | 28 MB | 28 MB | 0 B 
|\r\n|---------------------------------------------------------------------------|\r\n| Non-releasable memory | 259725 KB | 778 MB | 118090 MB | 117836 MB |\r\n| from large pool | 257727 KB | 777 MB | 115922 MB | 115670 MB |\r\n| from small pool | 1997 KB | 8 MB | 2168 MB | 2166 MB |\r\n|---------------------------------------------------------------------------|\r\n| Allocations | 415 | 422 | 23263 | 22848 |\r\n| from large pool | 173 | 175 | 3796 | 3623 |\r\n| from small pool | 242 | 267 | 19467 | 19225 |\r\n|---------------------------------------------------------------------------|\r\n| Active allocs | 415 | 422 | 23263 | 22848 |\r\n| from large pool | 173 | 175 | 3796 | 3623 |\r\n| from small pool | 242 | 267 | 19467 | 19225 |\r\n|---------------------------------------------------------------------------|\r\n| GPU reserved segments | 85 | 85 | 85 | 0 |\r\n| from large pool | 71 | 71 | 71 | 0 |\r\n| from small pool | 14 | 14 | 14 | 0 |\r\n|---------------------------------------------------------------------------|\r\n| Non-releasable allocs | 17 | 26 | 13706 | 13689 |\r\n| from large pool | 13 | 14 | 2264 | 2251 |\r\n| from small pool | 4 | 21 | 11442 | 11438 |\r\n|===========================================================================|\r\n```\r\nThe allocations increase using `AMP` by ~50 in each iteration, while FP32 increases them sometimes by ~1.\r\nSo far I wasn't able to isolate the memory increase. ", "@ptrblck, and if the same is done with pt15/nvidia-apex - why doesn't the same happen there? How are the two different (native vs apex)", "I don't know, why an older PyTorch version with apex/amp works, as I wasn't able to isolate the issue yet.\r\nThe native amp implementation differs in various ways from `apex/amp`.\r\nBtw. do you only see the OOM, if you are using Lightning or also using the standalone model + amp?", "@ptrblck, I have tried the hugging face trainer and the problem doesn't seem to happen there. I don't think `--fp16` makes any difference in that trainer but it doesn't increase memory requirements. So is it possible this is specifically a PL issue?\r\n\r\nFor testing I used:\r\n\r\nsetup:\r\n```\r\ncd examples/seq2seq\r\nwget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz\r\ntar -xzvf wmt_en_ro.tar.gz\r\n```\r\nand then run:\r\n```\r\n\r\nbs=11; rm -rf tmpdir; PYTHONPATH=\"../../src\" python ./finetune_trainer.py \\\r\n--model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --data_dir wmt_en_ro --output_dir tmpdir \\\r\n--overwrite_output_dir --max_source_length 128 --max_target_length 128 --val_max_target_length 128 \\\r\n--do_train --do_eval --do_predict --num_train_epochs 10 --per_device_train_batch_size $bs \\\r\n--per_device_eval_batch_size $bs --learning_rate 3e-4 --warmup_steps 2 --evaluate_during_training \\\r\n--predict_with_generate --logging_steps 0 --save_steps 2 --eval_steps 2 --sortish_sampler \\\r\n--label_smoothing 0.1 --adafactor --task translation --tgt_lang ro_RO --src_lang en_XX --n_train 100 \\\r\n--n_val 50 --fp16\r\n```\r\n(same memory consumption w/o `--fp16`)\r\n\r\nThis command also uses `BartForConditionalGeneration`.\r\n\r\n`bs=11` is the biggest batch size I could fit onto the 8GB card, `bs=12` OOMs\r\n", "@ptrblck How did you make such a nice table?\r\n\r\n@stas00 I will check your #s on my card. cc @patil-suraj \r\n\r\n", "I replicated the OOM and fixed by passing `--amp_backend='apex'` in my torch 1.6 environment on a 24GB card. 
Would still be good to see if there is any easy way to get native amp working well.", "Thank you for validating that, @sshleifer. So it really has something to do with the native amp in pytorch.\r\n\r\nHere is the summary of my experiments:\r\n\r\n- pt15 + confa-force apex w/ `--fp16` works at the start - reduces memory consumption\r\n- pt16 + conda-forge apex w/ `--fp16 --amp_backend='apex'` works at the start too!\r\n\r\nbut both fail at the end with:\r\n```\r\n File \"python3.8/site-packages/apex/amp/_amp_state.py\", line 32, in warn_or_err\r\n raise RuntimeError(msg)\r\nRuntimeError: Found param model.model.shared.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.\r\nWhen using amp.initialize, you do not need to call .half() on your model\r\n```\r\nIf I use `--fp16_opt_level O1`, the failure changes to:\r\n```\r\n File \"python3.8/site-packages/torch/optim/lr_scheduler.py\", line 56, in with_counter\r\n instance_ref = weakref.ref(method.__self__)\r\nAttributeError: 'function' object has no attribute '__self__'\r\n```\r\nboth envs use the binary apex from `conda-forge`\r\n\r\n- apex doesn't support cuda11 at the moment\r\n- rtx-30* cards don't support cuda<11 (and really cuda<11.1).\r\n- ubuntu-20.4 doesn't support cuda<11, since it dropped gcc7, so can't build apex from source even for cuda-10\r\n\r\nBottom line, apex is not a great option at the moment, but may work for some short term - need to sort out native amp.\r\n\r\nI will poke at it with debugger today.", "I'm running in parallel `pt15+apex` and `pt16+native` with debugger:\r\n\r\nFound the first issue. \r\n\r\nAt this point in stack (we are in PL domain) - both have about 1.7GB allocated on GPU:\r\n```\r\nrestore_weights, checkpoint_connector.py:64\r\nsetup_training, training_loop.py:174\r\ntrain, gpu_accelerator.py:51\r\nfit, trainer.py:444\r\ngeneric_train, lightning_base.py:398\r\nmain, finetune.py:413\r\n<module>, finetune.py:446\r\n```\r\n\r\nnext step: `torch.cuda.empty_cache()` frees about 0.6GB on pt15, but 0 on pt16.\r\n\r\nCould `GradScaler` be holding onto objects and not letting them go? I commented out its init and there is no change. 
\r\n\r\nPerhaps there are some circular references preventing the objects from being cleared out.\r\n", "@sshleifer I used [`torch.cuda.memory_summary()`](https://pytorch.org/docs/stable/cuda.html#torch.cuda.memory_summary) and added it for each iteration in the `BartForConditionalGeneration` model.\r\n\r\nAfter some more debugging it seems that the `autocast` cache is blowing up.\r\nAs a workaround you can add `torch.clear_autocast_cache()` in [BartForConditionalGeneration.forward](https://github.com/huggingface/transformers/blob/eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d/src/transformers/modeling_bart.py#L1036), which might slow down your code but should at least work.\r\n\r\nBased on @stas00 's debugging it seems that PL is interacting with native AMP in a way that the cache is increasing.\r\nCC @mcarilli", "The autocast cache is [cleared automatically](https://github.com/pytorch/pytorch/blob/88ec72e1c2a4b1e2a15cbe4703b9567bf9369a09/torch/cuda/amp/autocast_mode.py#L127) every time you exit an autocast context, which is one reason autocast should wrap the forward pass then exit.\r\n\r\n@ptrblck where in `BartConditionalGeneration.forward` did you call `clear_autocast_cache()` to resolve the memory blowup?\r\n\r\nIt's also helpful to write in the following sequence:\r\n```\r\ntorch.autocast_increment_nesting()\r\nprint(torch.autocast_decrement_nesting())\r\n```\r\nto see how deeply we're nested in autocast contexts at that point in forward. It should print `1`.", "@ptrblck, if I use your suggestion to add `torch.clear_autocast_cache()` torch blows up:\r\n\r\n```\r\nEpoch 0: 50%|████████████████Traceback (most recent call last): | 1/2 [00:02<00:02, 2.37s/it, loss=2.934, v_num=156]\r\n File \"finetune.py\", line 447, in <module>\r\n main(args)\r\n File \"finetune.py\", line 414, in main\r\n trainer: pl.Trainer = generic_train(\r\n File \"/mnt/nvme1/code/huggingface/transformers-watchdog/examples/lightning_base.py\", line 403, in generic_train\r\n trainer.fit(model)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 444, in fit\r\n results = self.accelerator_backend.train()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 63, in train\r\n results = self.train_or_test()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 74, in train_or_test\r\n results = self.trainer.train()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 493, in train\r\n self.train_loop.run_training_epoch()\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/training_loop.py\", line 589, in run_training_epoch\r\n self.trainer.run_evaluation(test_mode=False)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py\", line 578, in run_evaluation\r\n output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 171, in evaluation_step\r\n output = self.trainer.accelerator_backend.validation_step(args)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 85, in validation_step\r\n output = 
self.__validation_step(args)\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 95, in __validation_step\r\n output = self.trainer.model.validation_step(*args)\r\n File \"finetune.py\", line 183, in validation_step\r\n return self._generative_step(batch)\r\n File \"finetune.py\", line 216, in _generative_step\r\n generated_ids = self.model.generate(\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 26, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/mnt/nvme1/code/huggingface/transformers-watchdog/src/transformers/generation_utils.py\", line 553, in generate\r\n return self.beam_search(\r\n File \"/mnt/nvme1/code/huggingface/transformers-watchdog/src/transformers/generation_utils.py\", line 950, in beam_search\r\n beam_scores = torch.zeros((batch_size, num_beams), dtype=torch.float, device=input_ids.device)\r\nUnboundLocalError: local variable 'torch' referenced before assignment\r\nException ignored in: <function tqdm.__del__ at 0x7f1ec3812ee0>\r\nTraceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1122, in __del__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1335, in close\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1514, in display\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1125, in __repr__\r\n File \"/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/tqdm/std.py\", line 1475, in format_dict\r\nTypeError: cannot unpack non-iterable NoneType object\r\n```\r\n\r\nthis is on pt-nightly, the only pt version I can use rtx-3090 24G card.\r\n\r\n**correction**: on pt-nightly it blows up with this error w/ or w/o cache clearing - it just doesn't work. I checked `torch` is imported at the point it says it's not defined.\r\n\r\n**edit2**: some problem related to `--warmup 500` - I set it to `1` and the above failure is gone - something to deal with separately.\r\n\r\nLet me see if I can figure out how to test this on pt16 - the problem is the 8GB card still can't fit even a single sample.", "> It's also helpful to write in the following sequence:\r\n> \r\n> ```\r\n> torch.autocast_increment_nesting()\r\n> print(torch.autocast_decrement_nesting())\r\n> ```\r\n> \r\n> to see how deeply we're nested in autocast contexts at that point in forward. It should print `1`.\r\n\r\nI validated - it prints `1`.", "for calling `torch.clear_autocast_cache()` how does a forward know it's in the autocast context?\r\n```\r\nif torch.autocast_enabled:\r\n torch.clear_autocast_cache()\r\n```\r\nI don't see a public API to check that [here](https://pytorch.org/docs/stable/_modules/torch/cuda/amp/autocast_mode.html#autocast)\r\n", "`torch.is_autocast_enabled()`", "Thank you, I see it now in the source code - should it be in the docs too? https://pytorch.org/docs/stable/amp.html", "I'm not sure, I want people to use autocast through the context manager interface. I guess it's useful for debugging.", "Oh, for sure. I don't think any of the `transformers` models should be made autocast-aware - this is the job of the trainer.", "Hmm, now I'm able to use the 24GB card so it's much easier to debug as I don't hit OOM all the time. 
Though I can't compare with apex, as I can't build it for cuda-11 - but I hope it should still be OK.\r\n\r\nWhat it appears to be is the beam search (size=4) consumes some 20GB just to search for 1 sample. It calls forward about 100 times each time allocating about 200MB - never releasing memory. I suppose because with apex the model is much more lean it requires much less memory.\r\n\r\nIf I add @ptrblck's suggestion to clear the autocast cache in forward it now consumes only 2.5GB - that's 1/8th of the same with the cache.\r\n\r\nW/o fp16 it consumes 5GB for the same beam_seach operation.\r\n\r\nLet's summarize:\r\n\r\ntype | memory\r\n----------|-------\r\nw/o fp16 | 5GB\r\nw/ fp16 | 20GB\r\nw/ fp16 + cache flush | 2.5GB\r\n\r\nSo definitely this is at least a huge part of the culprit. I think we are making progress. Much appreciation for your suggestion, @ptrblck~\r\n\r\nSo how do you recommend to proceed? Should PL have an option to clear the autocast cache?\r\n\r\nShould `autocast` cache be made smarter and flush automatically if gpu ram is 90% full and be called to check when this happens ala `gc.collect()`-timing and not just in the context of its use? Since now it created an additional cache in to add to to `cuda.cache`.", "If you follow along this is the command I use now: `PYTHONPATH=\"../../src\" CUDA_VISIBLE_DEVICES=0 python finetune.py --learning_rate 3e-5 --gpus 1 --do_train --val_check_interval 1 --num_train_epochs 1 --freeze_encoder --freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length 142 --train_batch_size 1 --eval_batch_size 1 --gradient_accumulation_steps 1 --model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large --warmup_steps 1 --output_dir distilbart-cnn-12-6 --overwrite_output_dir --num_sanity_val_steps 0 --n_train 1 --n_val 1 --fp16`", "Don't think this is a trainer issue, I've been able to replicate this OOM crash putting autocast directly into the forward call, wrapping the code found here with `torch.cuda.amp.autocast`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bart.py#L419\r\n\r\nand turning off `--fp16`. Haven't been able to investigate further into why memory isn't being freed in this block!", "Thank you for looking into it, @SeanNaren!\r\n\r\n> Don't think this is a trainer issue\r\n\r\nIf it's an issue of `autocast` cache then it is by definition a PL issue - since it's the one managing it - but let's see what @ptrblck and @mcarilli say about what's the correct approach of taking advantage of native amp w/ incurring unreasonable increased memory demands. \r\n\r\nSurely there must be a way to manage it efficiently, and if caching is bad for whatever reason (application specific?) - there must be a way to turn it off in first place, rather than wasting resources/time copying/deleting it repeatedly.\r\n\r\n> Haven't been able to investigate further into why memory isn't being freed in this block!\r\n\r\n@ptrblck and I already did - it's the `autocast` cache (at least a huge part of it). See https://github.com/huggingface/transformers/issues/8403#issuecomment-725562117", "Thanks @stas00! Any idea why within that block of code I posteted above that the autocast cache is not being freed? Btw if you wrap that particular section with `with torch.cuda.amp.autocast(enabled=False)`, the code runs (which I assume just turns off autocast functionality for that region of code). 
", "@SeanNaren, if PL still invokes the `autocast` context then this sub-context it won't free the cache on its exit. It will only free it when the most outer context will exit. You will have to remove the `autocast` call in PL for this to work.", "Updates so far:\r\n\r\nIt appears that `autocast` is not designed to handle a massive number of `forward` calls in a single context - it caches them all! If I understand it correctly, it has to be called as close as possible to the first `forward` call that actually needs the casting. Currently, in `finetune.py` we end up with PL calling `autocast` on `SummarizationModule` which has nothing to do with pytorch math (i.e. needs no casting), which then goes through a massive logic of generate/beam_search which again has nothing to do with math, and only when it hits `BartForConditionalGeneration`'s `forward` we need the `autocast`.\r\n\r\nThe problem is that `generate` (which already runs under `autocast` via PL - so caching is on) ends up calling `BartForConditionalGeneration`'s `forward` 100s of times, and every such call in the debug example was adding 200MB - so in 100 calls for beam search of size 4 it accumulated 20GB - we have a big problem.\r\n\r\nSo a workaround suggested by @ptrblck is to call \r\n```\r\nif torch.autocast_enabled:\r\n torch.clear_autocast_cache()\r\n```\r\ninside `BartForConditionalGeneration.forward` - but ideally it should be called in a more generic way in `generation_utils` `while` loop ([here](https://github.com/huggingface/transformers/blob/121c24efa4453e4e726b5f0b2cf7095b14b7e74e/src/transformers/generation_utils.py#L954)), so that it works for any `transformers` model:\r\n\r\n```\r\n while cur_len < max_length:\r\n model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n if torch.is_autocast_enabled():\r\n torch.clear_autocast_cache()\r\n outputs = self(**model_inputs, return_dict=True)\r\n```\r\nbut this is clearly far from optimal as a lot of resources will be wasted on filling the cache and immediately emptying it.\r\n\r\n(Perhaps there is a way to disable the cache completely, but I don't know about its performance implications).\r\n\r\nSo a more efficient solution would be to `autocast` here instead:\r\n```\r\n while cur_len < max_length:\r\n model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)\r\n if somehow_we_know_autocast_should_be_used():\r\n with autocast(): \r\n outputs = self(**model_inputs, return_dict=True)\r\n else:\r\n outputs = self(**model_inputs, return_dict=True) \r\n```\r\n\r\nbut we have two issues here:\r\n\r\n1. we have a problem if someone invoked `autocast()` sooner - e.g. as PL or HF trainer do it now. As this will defeat the purpose and the cache will blow up again. There is no cache-per-context, but only a single cache, regardless of whether the contexts are stacked. The outer call defines the scope for the cache and it'll clear only on the exit of that call. So the `autocast` call above for all means and purposes is a no-op if `autocast` has already been called in earlier frames.\r\n\r\n2. now we are mixing trainer logic with the middle-layer (`generate` is neither a trainer nor a model - it's in between, and `SummarizationModule` in `finetune.py` is definitely not a model, but more of a trainer). 
How would `generation_utils` know about `somehow_we_know_autocast_should_be_used()`?\r\n\r\nAt this point since the problem is better understood I invite @LysandreJik, @patrickvonplaten, @sgugger and others to chime in and suggest how to move forward.", "I found at least part of the culprit or trigger of the leak - it's `@torch.no_grad()` used for `generate` https://github.com/huggingface/transformers/blob/eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d/src/transformers/generation_utils.py#L281-L282\r\n\r\nHere is a short script that reproduces the leakage. It removes all the generate/search logic and feeds the same random input_ids to `BartForConditionalGeneration` pre-trained model.\r\n\r\nPlease first run:\r\n```\r\npip install ipyexperiments \r\n```\r\nto get the memory tracing, but feel free to disable it if for some reason it's not working for you. (it should)\r\n\r\n```\r\n#!/usr/bin/env python\r\n\r\nimport os\r\nimport sys\r\nimport torch\r\nos.environ[\"USE_TF\"] = \"0\"\r\nsys.path.insert(1, \"src\")\r\n\r\n# !pip install ipyexperiments \r\nfrom ipyexperiments.utils.mem import gpu_mem_get_used_mbs, gpu_mem_get_used_no_cache_mbs\r\n\r\nfrom transformers import BartForConditionalGeneration\r\n\r\ndevice = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained('sshleifer/student_cnn_12_6').to(device)\r\nmodel.eval()\r\n\r\nvocab_size = 50264 # model.config.vocab_size\r\nlength = 10\r\n\r\nAUTOCAST = False if \"-f\" in sys.argv else True\r\nprint(f\"autocast: {AUTOCAST}\")\r\n\r\nclass MemReport():\r\n def __init__(self, gc_collect=True):\r\n self.get_mem = gpu_mem_get_used_no_cache_mbs if gc_collect else gpu_mem_get_used_mbs\r\n self.cur = self.get_mem()\r\n def delta(self, id):\r\n peak = torch.cuda.memory_stats()[\"allocated_bytes.all.peak\"]\r\n print(f\"{id}: {gpu_mem_get_used_mbs()-self.cur}MB (peak {peak>>20}MB)\")\r\n self.cur = self.get_mem()\r\n \r\nmr = MemReport(gc_collect=False)\r\n\r\n### reproducible code starts here ###\r\n\r\[email protected]_grad()\r\ndef logic():\r\n input_ids = torch.randint(vocab_size, (1,length)).to(device)\r\n mr.delta(0)\r\n for i in range(1,10):\r\n outputs = model(input_ids)\r\n mr.delta(i)\r\n\r\nif AUTOCAST:\r\n with torch.cuda.amp.autocast():\r\n logic()\r\nelse:\r\n logic()\r\n```\r\n\r\nSo if I run it with `-f` which disables `autocast`, I get:\r\n\r\n```\r\n./reproduce.py -f\r\nautocast: False\r\n0: 0MB (peak 1165MB)\r\n1: 12MB (peak 1167MB)\r\n2: 0MB (peak 1169MB)\r\n3: 0MB (peak 1169MB)\r\n4: 0MB (peak 1169MB)\r\n5: 0MB (peak 1169MB)\r\n6: 0MB (peak 1169MB)\r\n7: 0MB (peak 1169MB)\r\n8: 0MB (peak 1169MB)\r\n9: 0MB (peak 1169MB)\r\n```\r\nno leak.\r\n\r\nIf however I remove `-f` and `autocast` gets enabled, we get:\r\n```\r\n./reproduce.py -f\r\nautocast: True\r\n0: 0MB (peak 1165MB)\r\n1: 592MB (peak 1744MB)\r\n2: 580MB (peak 2324MB)\r\n3: 580MB (peak 2902MB)\r\n4: 580MB (peak 3480MB)\r\n5: 580MB (peak 4058MB)\r\n6: 580MB (peak 4636MB)\r\n7: 580MB (peak 5214MB)\r\n8: 580MB (peak 5793MB)\r\n9: 580MB (peak 6371MB)\r\n```\r\nthe memory logger prints the delta for each `forward` call in the loop and the peak memory.\r\n\r\nYou can see that we are leaking 600Mb per forward call here.\r\n\r\nIf I comment out `@torch.no_grad()`, the total memory usage doubles but there is no leak:\r\n\r\n```\r\nautocast: True\r\n0: 0MB (peak 1165MB)\r\n1: 602MB (peak 1754MB)\r\n2: 590MB (peak 2343MB)\r\n3: 0MB (peak 2343MB)\r\n4: 0MB (peak 2343MB)\r\n5: 0MB (peak 2343MB)\r\n6: 0MB (peak 
2343MB)\r\n7: 0MB (peak 2343MB)\r\n8: 0MB (peak 2343MB)\r\n9: 0MB (peak 2343MB)\r\n```\r\n\r\nI was using pycharm to debug this and to write a small script and boy it got me so delayed as it leaks gpu ram on its own, since it has to save all those variables on cuda, but I wasn't aware of it. Well, now I know not to do that. Luckily I had https://github.com/stas00/ipyexperiments handy to give me easy memory tracing.\r\n\r\nNote I'm importing two gpu mem tracking functions - one of them clears cuda cache - but here it appears it's better not use that version. ", "One other issue to look into is that what happens under `autocast` to weights that are deterministic (such as positional weights [SinusoidalPositionalEmbedding](https://github.com/huggingface/transformers/blob/24184e73c441397edd51e9068e0f49c0418d25ab/src/transformers/modeling_bart.py#L1340)) as these are set with `requires_grad = False`.\r\n\r\nSeeing how the caching logic [works](https://github.com/pytorch/pytorch/blob/21f447ee2c6ebbd72b6c3608c4df17c74edd4784/aten/src/ATen/autocast_mode.cpp#L69-L71):\r\n```\r\n bool can_try_cache = (to_type == at::kHalf && arg.scalar_type() == at::kFloat && arg.requires_grad() && arg.is_leaf());\r\n```\r\nthe conversion of these to fp16 will not be cached if I read the code correctly. This probably belongs to a separate issue though." ]
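A minimal sketch of the per-call workaround discussed in this thread, assuming PyTorch >= 1.6 native AMP with an outer `autocast` context already opened by the trainer; clearing the cache on every forward trades speed for bounded memory and is not presented as the final fix.

```python
import torch

def forward_under_outer_autocast(model, model_inputs):
    # generate()/beam search calls forward hundreds of times inside a single
    # autocast context; each call adds fp16 weight copies to the autocast
    # cache. Dropping the cache before the call keeps memory bounded.
    if torch.is_autocast_enabled():
        torch.clear_autocast_cache()
    return model(**model_inputs, return_dict=True)
```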
1,604
1,608
1,607
CONTRIBUTOR
null
While working on https://github.com/huggingface/transformers/issues/8353 I discovered that `--fp16` causes a 10x+ increase in gpu memory demands. e.g. I can run bs=12 w/o `--fp16` ``` cd examples/seq2seq export BS=12; rm -rf distilbart-cnn-12-6; python finetune.py --learning_rate=3e-5 --gpus 1 \ --do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \ --freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \ --train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \ --model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \ --warmup_steps 500 --output_dir distilbart-cnn-12-6 ``` But if I add: ``` --fp16 ``` (w/ or w/o `--fp16_opt_level O1`) I get OOM even with bs=1 on a 8GB card and it barely manages on a 24GB card - I think the increase in memory demand is more than 10x. The OOM either right away when it does the sanity check step, or after just 10-20 batches - so within a few secs This is with pytorch-1.6. Same goes for pytorch-1.7 and 1.8-nightly. I wasn't able to test `--fp16` with pytorch-1.5, since I can't build apex on ubuntu-20.04. Without `--fp16` pytorch-1.5 works the same as pytorch-1.6 gpu memory-wise. I tested with pytorch-1.5 + apex and there is no problem there. Memory consumption is about half. Here is the table of the batch sizes that fit into a 8gb rtx-1070 (bigger BS leads to an instant OOM): bs | version ---|-------- 12 | pt15 20 | pt15+fp16 12 | pt16 1 | pt16+fp16 If you'd like to reproduce the problem here are the full steps: ``` # prep library git clone https://github.com/huggingface/transformers cd transformers pip install -e .[dev] pip install -r examples/requirements.txt cd examples/seq2seq # prep data wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz tar -xzvf cnn_dm_v2.tgz # empty lines removed mv cnn_cln cnn_dm # run export BS=12; rm -rf distilbart-cnn-12-6 python finetune.py --learning_rate=3e-5 --gpus 1 \ --do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \ --freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \ --train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \ --model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \ --warmup_steps 500 --output_dir distilbart-cnn-12-6 ``` This issue is to track the problem and hopefully finding a solution. @sshleifer
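For readers trying to reproduce the batch-size table above outside of `finetune.py`, here is a rough standalone sketch of the leaking pattern (repeated forwards under `no_grad` plus `autocast`); the checkpoint name is the one used in this issue, the sequence length and step count are illustrative, and `torch.cuda.memory_allocated` is only a coarse proxy for the full `memory_summary` dumps quoted in the comments.

```python
import torch
from transformers import BartForConditionalGeneration

device = torch.device("cuda")
model = BartForConditionalGeneration.from_pretrained("sshleifer/student_cnn_12_6").to(device).eval()
input_ids = torch.randint(model.config.vocab_size, (1, 10), device=device)

# With autocast enabled, allocated memory grows on every call; without it
# (or with pt1.5 + apex), it stays flat after the first iteration.
with torch.no_grad(), torch.cuda.amp.autocast():
    for step in range(5):
        model(input_ids)
        print(f"step {step}: {torch.cuda.memory_allocated() >> 20} MB allocated")
```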
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8403/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8402
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8402/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8402/comments
https://api.github.com/repos/huggingface/transformers/issues/8402/events
https://github.com/huggingface/transformers/pull/8402
738,361,408
MDExOlB1bGxSZXF1ZXN0NTE3MjIzMzM5
8,402
Add gpt2-medium-chinese model card
{ "login": "mymusise", "id": 6883957, "node_id": "MDQ6VXNlcjY4ODM5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mymusise", "html_url": "https://github.com/mymusise", "followers_url": "https://api.github.com/users/mymusise/followers", "following_url": "https://api.github.com/users/mymusise/following{/other_user}", "gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}", "starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mymusise/subscriptions", "organizations_url": "https://api.github.com/users/mymusise/orgs", "repos_url": "https://api.github.com/users/mymusise/repos", "events_url": "https://api.github.com/users/mymusise/events{/privacy}", "received_events_url": "https://api.github.com/users/mymusise/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "The Inference API is failing to load your model so there might be a configuration issue:\r\n\r\n<img width=\"770\" alt=\"Screenshot 2020-11-08 at 11 02 01\" src=\"https://user-images.githubusercontent.com/326577/98462120-a5bb2280-217f-11eb-8777-8456534b99ef.png\">\r\n\r\nI suspect this line is not working: `tokenizer = AutoTokenizer.from_pretrained(\"mymusise/gpt2-medium-chinese\")`\r\n\r\nWhat kind of tokenizer are you using?", "(cc @Narsil @JetRunner)", "Well many models actually use BERT Chinese's tokenizer so maybe that's the case. If so, how can we combine the BERT tokenizer with GPT model? (I think we've handled similar situations before)", "> The Inference API is failing to load your model so there might be a configuration issue:\r\n> \r\n> <img alt=\"Screenshot 2020-11-08 at 11 02 01\" width=\"770\" src=\"https://user-images.githubusercontent.com/326577/98462120-a5bb2280-217f-11eb-8777-8456534b99ef.png\">\r\n> \r\n> I suspect this line is not working: `tokenizer = AutoTokenizer.from_pretrained(\"mymusise/gpt2-medium-chinese\")`\r\n> \r\n> What kind of tokenizer are you using?\r\n\r\nYes, I use BertTokenizer, but how can i change the Tokenizer and Model class of the default example?\r\n```\r\nfrom transformers import AutoTokenizer, TFAutoModelWithLMHead\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"mymusise/gpt2-medium-chinese\")\r\n\r\nmodel = TFAutoModelWithLMHead.from_pretrained(\"mymusise/gpt2-medium-chinese\")\r\n```\r\nI specify `architectures=[\"TFGPT2LMHeadModel\"]` on my configs, but it didn't show in the model card.\r\n\r\nIs there any document I can reference?", "> Well many models actually use BERT Chinese's tokenizer so maybe that's the case. If so, how can we combine the BERT tokenizer with GPT model? (I think we've handled similar situations before)\r\n\r\nYes, I have tried to use `GPT2Tokenizer` by specific `vocab_file` and `merges_file`, but it didn't work.\r\n```\r\ntokenizer = GPT2Tokenizer(vocab_file=my_vocab_file, merges_file=my_merges_file)\r\n```\r\nIs there any examples about how to custom GPT2Tokenizer that I can reference? Many thanks.", "> > Well many models actually use BERT Chinese's tokenizer so maybe that's the case. If so, how can we combine the BERT tokenizer with GPT model? (I think we've handled similar situations before)\n> \n> \n> \n> Yes, I have tried to use `GPT2Tokenizer` by specific `vocab_file` and `merges_file`, but it didn't work.\n> \n> ```\n> \n> tokenizer = GPT2Tokenizer(vocab_file=my_vocab_file, merges_file=my_merges_file)\n> \n> ```\n> \n> Is there any examples about how to custom GPT2Tokenizer that I can reference? Many thanks.\n\ncc @julien-c ", "Yes, see the doc here: https://huggingface.co/transformers/master/main_classes/configuration.html – look for `tokenizer_class`\r\n\r\nIn config.json:\r\n\r\n> The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default).", "> Yes, see the doc here: https://huggingface.co/transformers/master/main_classes/configuration.html – look for `tokenizer_class`\r\n> \r\n> In config.json:\r\n> \r\n> > The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default).\r\n\r\n\r\nThanks julien, I have added `tokenizer_class=\"BertTokenizer\"` to [configs](https://s3.amazonaws.com/models.huggingface.co/bert/mymusise/gpt2-medium-chinese/config.json) and update it. But the web page still using `AutoTokenizer` as example. 
Cache?\r\n", "The webpage will still display `AutoTokenizer` – but it will now work.", "As you can see the Inference widget now works: https://huggingface.co/mymusise/gpt2-medium-chinese?text=%E6%88%91%E5%8F%AB%E6%9C%B1%E5%88%A9%E5%AE%89%EF%BC%8C%E6%88%91%E5%96%9C%E6%AC%A2", "@julien-c I think many Chinese models in our hub have the same problem. I'll have them fixed." ]
1,604
1,604
1,604
CONTRIBUTOR
null
Hi, this PR adds the model card for the **gpt2-medium-chinese** model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8402/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8402", "html_url": "https://github.com/huggingface/transformers/pull/8402", "diff_url": "https://github.com/huggingface/transformers/pull/8402.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8402.patch", "merged_at": 1604829620000 }
https://api.github.com/repos/huggingface/transformers/issues/8401
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8401/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8401/comments
https://api.github.com/repos/huggingface/transformers/issues/8401/events
https://github.com/huggingface/transformers/pull/8401
738,360,431
MDExOlB1bGxSZXF1ZXN0NTE3MjIyNjE2
8,401
[testing utils] get_auto_remove_tmp_dir more intuitive behavior
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> For example, if someone were to use `self.get_auto_remove_tmp_dir()` but didn't like where the tmp directory was set, they would specify the directory. They then re-run their code, but the behavior now has changed, even though they only updated the path. That seems unexpected and unwanted to me.\r\n\r\nYou're making a valid point, @LysandreJik and that's how I coded it in first place, except it lead to needing to type out many arguments. So I'm trying to make it smarter based on the experience of using it.\r\n\r\n1. I'd ask you - what do you mean specifically you don't like where it's located. Since by default it gets wiped out anyway what difference does its location make? The only time you'd want to set it to a desired location is when you want to see its contents - therefore, you will not want it wiped out ;) Does my thinking process make sense?\r\n\r\n If however the temp mechanism is wrong (say it fails to create a tmp dir under /tmp/ - dir is full), then it needs to be dealt with on the system/shell level and redefine where `/tmp` is located or empty it since this is a system-level problem. It'd be wrong to try to solve it in this helper function.\r\n\r\n2. The intention here is to make the debugging process as painless and quick as possible, so if the function behaves slightly differently than you'd normally expect from a function, but you use it all the time and you're well aware of its quirks and it saves you a lot of time and mistakes, then it serves the purpose, IMHO.\r\n\r\n3. I'm totally open to other solutions as long as they lead to easier debug.\r\n\r\nI don't know whether other devs have been using it and have some experiential input to share.\r\n", "Great, thanks for explaining!\r\n\r\n> I'd ask you - what do you mean specifically you don't like where it's located. Since by default it gets wiped out anyway what difference does its location make?\r\n\r\nYou're definitely right about this. Thanks for humoring me! Good to merge for me.", "this one is ready to merge when you get a chance - thank you!", "it should be safe to merge - the CI failures are unrelated" ]
1,604
1,605
1,605
CONTRIBUTOR
null
Now that I have been heavily using `get_auto_remove_tmp_dir` for a while, I realized that one of the defaults isn't most optimal and we had to type too much for what we need most of the time. tldr: 99% of the time when debugging we want the tmp dir 1. to be empty at the beginning of the test 2. to be left alone after the test This PR changes the behavior of `get_auto_remove_tmp_dir` to simplify things greatly and require much less typing. Before this PR when debugging a test we had to do this: ``` - output_dir = self.get_auto_remove_tmp_dir() + output_dir = self.get_auto_remove_tmp_dir("./xxx", before=True, after=False) ``` Now, all you need to do is: ``` - output_dir = self.get_auto_remove_tmp_dir() + output_dir = self.get_auto_remove_tmp_dir("./xxx") ``` You can still override `before` and `after` should you need to, but the main change is that if you pass a hardcoded path - you are most likely wanting to debug and see the results of the test in an easily locatable and repeatable location, and ensuring that dir is empty before the test. Done! @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8401/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8401", "html_url": "https://github.com/huggingface/transformers/pull/8401", "diff_url": "https://github.com/huggingface/transformers/pull/8401.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8401.patch", "merged_at": 1605027441000 }
https://api.github.com/repos/huggingface/transformers/issues/8400
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8400/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8400/comments
https://api.github.com/repos/huggingface/transformers/issues/8400/events
https://github.com/huggingface/transformers/pull/8400
738,357,746
MDExOlB1bGxSZXF1ZXN0NTE3MjIwNjYw
8,400
[s2s test_finetune_trainer] failing multigpu test
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Agreed that more than 8 records is the way to go. Feel free to address that in a future PR!" ]
1,604
1,604
1,604
CONTRIBUTOR
null
Sam, On a multigpu machine: ``` RUN_SLOW=1 pytest examples/seq2seq/test_finetune_trainer.py::TestFinetuneTrainer::test_finetune_trainer_slow ``` fails for me - not learning anything. ``` > assert first_step_stats["eval_bleu"] < last_step_stats["eval_bleu"] # model learned nothing E AssertionError: assert 0.0 < 0.0 ``` Looking at the logs, it gains some >0 bleu score in the first half of the epochs and then drops back to 0.00 in the last ones. On a single gpu it fluctuates between 0 and some small value - this is very fragile. Changing to lr 3e-3 (this PR) seems to make it slightly more stable, but it could be a card specific thing - this is with rtx3090+rtx1070. So please check on your setup. I tested that it passes with single gpu (one of each). Alternatively the test should compare not the first and last metrics, but perhaps something more flexible? like adding up all blue scores and checking the total >0? But either way it feels very fragile - depending too much on the hardware type - perhaps a long term approach to make it more resilient is by feeding it more than 8 records. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8400/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8400", "html_url": "https://github.com/huggingface/transformers/pull/8400", "diff_url": "https://github.com/huggingface/transformers/pull/8400.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8400.patch", "merged_at": 1604871940000 }
https://api.github.com/repos/huggingface/transformers/issues/8399
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8399/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8399/comments
https://api.github.com/repos/huggingface/transformers/issues/8399/events
https://github.com/huggingface/transformers/pull/8399
738,356,614
MDExOlB1bGxSZXF1ZXN0NTE3MjE5ODI3
8,399
Fixed Trainer default labels for QA Model
{ "login": "ManavR123", "id": 17506262, "node_id": "MDQ6VXNlcjE3NTA2MjYy", "avatar_url": "https://avatars.githubusercontent.com/u/17506262?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ManavR123", "html_url": "https://github.com/ManavR123", "followers_url": "https://api.github.com/users/ManavR123/followers", "following_url": "https://api.github.com/users/ManavR123/following{/other_user}", "gists_url": "https://api.github.com/users/ManavR123/gists{/gist_id}", "starred_url": "https://api.github.com/users/ManavR123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ManavR123/subscriptions", "organizations_url": "https://api.github.com/users/ManavR123/orgs", "repos_url": "https://api.github.com/users/ManavR123/repos", "events_url": "https://api.github.com/users/ManavR123/events{/privacy}", "received_events_url": "https://api.github.com/users/ManavR123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
PR https://github.com/huggingface/transformers/pull/7191 added default labels for models. However, for QA Models it sets the default labels to ["start_positions, end_positions"] instead of ["start_positions", "end_positions"], which was the intention based on the PR description. This PR is a simple fix to this. Fixes #8390 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8399/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8399", "html_url": "https://github.com/huggingface/transformers/pull/8399", "diff_url": "https://github.com/huggingface/transformers/pull/8399.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8399.patch", "merged_at": 1604844494000 }
https://api.github.com/repos/huggingface/transformers/issues/8398
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8398/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8398/comments
https://api.github.com/repos/huggingface/transformers/issues/8398/events
https://github.com/huggingface/transformers/pull/8398
738,355,825
MDExOlB1bGxSZXF1ZXN0NTE3MjE5MjU2
8,398
[s2s examples test] fix data path
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
This PR fixes a relative path in a test that should be a resolved full path instead - I forgot to test from the top of the repo in the previous PR and therefore missed this. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8398/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8398", "html_url": "https://github.com/huggingface/transformers/pull/8398", "diff_url": "https://github.com/huggingface/transformers/pull/8398.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8398.patch", "merged_at": 1604871858000 }
https://api.github.com/repos/huggingface/transformers/issues/8397
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8397/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8397/comments
https://api.github.com/repos/huggingface/transformers/issues/8397/events
https://github.com/huggingface/transformers/pull/8397
738,353,815
MDExOlB1bGxSZXF1ZXN0NTE3MjE3ODI2
8,397
Fix DataCollatorForWholeWordMask again
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? Fixes #8388 I tested this change by running `run_mlm_wwm.py` (did not test the Chinese version). It seems to work after also fixing the example. #8394 @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8397/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8397", "html_url": "https://github.com/huggingface/transformers/pull/8397", "diff_url": "https://github.com/huggingface/transformers/pull/8397.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8397.patch", "merged_at": 1604847182000 }
https://api.github.com/repos/huggingface/transformers/issues/8396
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8396/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8396/comments
https://api.github.com/repos/huggingface/transformers/issues/8396/events
https://github.com/huggingface/transformers/issues/8396
738,352,365
MDU6SXNzdWU3MzgzNTIzNjU=
8,396
Irreproducible loss when training a BERT model with the same settings >=2 times
{ "login": "Backpackerice", "id": 7083541, "node_id": "MDQ6VXNlcjcwODM1NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/7083541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Backpackerice", "html_url": "https://github.com/Backpackerice", "followers_url": "https://api.github.com/users/Backpackerice/followers", "following_url": "https://api.github.com/users/Backpackerice/following{/other_user}", "gists_url": "https://api.github.com/users/Backpackerice/gists{/gist_id}", "starred_url": "https://api.github.com/users/Backpackerice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Backpackerice/subscriptions", "organizations_url": "https://api.github.com/users/Backpackerice/orgs", "repos_url": "https://api.github.com/users/Backpackerice/repos", "events_url": "https://api.github.com/users/Backpackerice/events{/privacy}", "received_events_url": "https://api.github.com/users/Backpackerice/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi @Backpackerice did you solve the issue? I also encounter the same issue while training with `Trainer` API.", "I have found the issue. I was giving the model that I am initialising before `Trainer` object. So, it wasn't initialised with a seed. I solved the issue by setting the seed before initialising the model. It can be also solved by providing `model_init` arg for `Trainer`. " ]
1,604
1,611
1,610
NONE
null
Hi there, I am using my customized bert script to train a model. However, everything even I keep the same setting for lr, AdamW weight decay and epoch, and run on the same platform (cuda on SageMaker) with same torch (1.5.0) and transformers (2.11.0) versions, the results still change a lot in terms of the loss. This make my different experiments not comparable. Can someone who has experienced this before or have any ideas please advice me on what should I do? I really want to solve this inreproducible issue so that I can continue on my experiments. Super appreciated for your help! Details as below: For example, if I set epoch = 4, lr = 1e-5, decay for AdamW as 0.01. For one run I got this result for the first epoch only showing the last complete 100 batches result: ``` 2020-10-19 03:45:29,032 - utils - INFO - | epoch 1 | 1300/ 1320 batches | lr 2.261e-05 | loss 0.267 | Elapsed 0:12:29 2020-10-19 03:45:40,550 - utils - INFO - Training epoch took: 0:12:41 2020-10-19 03:45:40,550 - utils - INFO - Validating... 2020-10-19 03:46:14,588 - utils - INFO - | loss 0.019 | Elapsed 0:00:34 precision recall f1-score support False 0.906472 0.979875 0.941745 2087.000000 True 0.475000 0.152610 0.231003 249.000000 accuracy 0.891695 0.891695 0.891695 0.891695 macro avg 0.690736 0.566243 0.586374 2336.000000 weighted avg 0.860480 0.891695 0.865986 2336.000000 2020-10-19 03:46:15,403 - utils - INFO - Testing... 2020-10-19 03:46:55,182 - utils - INFO - use model: 1 batch / 1319 step precision recall f1-score support False 0.906 0.984 0.944 2344.000 True 0.413 0.098 0.159 265.000 accuracy 0.894 0.894 0.894 0.894 macro avg 0.659 0.541 0.551 2609.000 weighted avg 0.856 0.894 0.864 2609.000 2020-10-19 03:46:55,188 - utils - INFO - best test F1 score: 0.8638224640164368 ``` And for the second attempt I got this for the first epoch: ``` 2020-11-07 17:08:08,821 - utils - INFO - | epoch 1 | 1300/ 1320 batches | lr 2.261e-05 | loss 0.286 | Elapsed 0:12:25 2020-11-07 17:08:20,487 - utils - INFO - Training epoch took: 0:12:37 2020-11-07 17:08:20,487 - utils - INFO - Validating... 2020-11-07 17:08:54,609 - utils - INFO - | loss 0.018 | Elapsed 0:00:34 precision recall f1-score support False 0.893408 1.000000 0.943703 2087.000000 True 0.000000 0.000000 0.000000 249.000000 accuracy 0.893408 0.893408 0.893408 0.893408 macro avg 0.446704 0.500000 0.471852 2336.000000 weighted avg 0.798177 0.893408 0.843112 2336.000000 2020-11-07 17:08:55,313 - utils - INFO - Testing... 2020-11-07 17:09:34,934 - utils - INFO - use model: 1 batch / 1319 step precision recall f1-score support False 0.898 1.000 0.946 2344.000 True 0.000 0.000 0.000 265.000 accuracy 0.898 0.898 0.898 0.898 macro avg 0.449 0.500 0.473 2609.000 weighted avg 0.807 0.898 0.850 2609.000 2020-11-07 17:09:34,938 - utils - INFO - best test F1 score: 0.8503599608647853 ``` Note that, the last used lr rate per 100 batches are the same, while the average loss per 100 batches are slightly different. But this result in the predictions for the validation and testing data set very different. I already set the seed during my model with this function below: ``` def set_seed(seed): """ Set all seeds to make results reproducible (deterministic mode). When seed is a false-y value or not supplied, disables deterministic mode. 
""" random.seed(seed) np.random.seed(seed) torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False ``` And my model script is like this: ``` class ReviewClassification(BertPreTrainedModel): def __init__(self, config, add_agent_text, agent_text_heads): """ :param config: Bert configuration, can set up some parameters, like output_attention, output_hidden_states :param add_agent_text: whether to use the non text feature, and how. It can have three options: None, "concat" and "attention" :param agent_text_heads: number of the heads in agent attention mechanism. Only useful if add_agent_text are set to "attention" """ super().__init__(config) # self.num_labels = 2 self.add_agent_text = add_agent_text self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) embedding_size = config.hidden_size if self.add_agent_text == "concat": embedding_size = 2 * embedding_size elif self.add_agent_text == "attention": self.agent_attention = nn.MultiheadAttention(embedding_size, num_heads=agent_text_heads) else: # don't use the information in Agent text pass self.classifier = nn.Linear(embedding_size, 1) # self.classifier = nn.Linear(embedding_size, len(LABEL_NAME)) # bias: If set to False, the layer will not learn an additive bias self.init_weights() print( """ add agent text :{} agent text multi-head :{} """.format(self.add_agent_text, agent_text_heads) ) def forward( self, review_input_ids=None, review_attention_mask=None, review_token_type_ids=None, agent_input_ids=None, agent_attention_mask=None, agent_token_type_ids=None, labels=None, ): review_outputs = self.bert( review_input_ids, attention_mask=review_attention_mask, token_type_ids=review_token_type_ids, position_ids=None, head_mask=None, inputs_embeds=None, ) if self.add_agent_text is not None: # means that self.add_agent_text is "concat" or "attention" # TODO: we can try that agent_outputs do not share the same parameter agent_outputs = self.bert( agent_input_ids, attention_mask=agent_attention_mask, token_type_ids=agent_token_type_ids, position_ids=None, head_mask=None, inputs_embeds=None, ) if self.add_agent_text == "attention": review_hidden_states = review_outputs[0].transpose(0, 1) # before trans: (bs, seq_len, hidden_size) # want to take it as query, we need the it has the shape (#target_seq_len, batch_size, embedding_size) agent_hidden_states = agent_outputs[0].mean(axis=1).unsqueeze(dim=0) # (1, batch_size, hidden_size) attn_output, _ = self.agent_attention(agent_hidden_states, review_hidden_states, review_hidden_states) feature = attn_output.squeeze() # (batch_size, seq_len) else: feature = review_outputs[1] # (batch_size, seq_len) -? Should it be (batch_size, hidden_size) if self.add_agent_text == "concat": feature = torch.cat([feature, agent_outputs[1]], axis=1) logits = self.classifier(feature).squeeze() outputs = (logits,) # + outputs[2:] # add hidden states and attention if they are here if labels is not None: loss_fct = nn.BCEWithLogitsLoss().cuda() #pos_weight=pos_weight loss = loss_fct(logits, labels) outputs = (loss,) + outputs return outputs # (loss, logits, hidden_states, attentions) ``` The loss is calculated using BCEWithLogitsLoss() from torch.nn. 
The train, validation and test part script is as below: ``` import time import pickle from path import Path import numpy as np import pandas as pd from sklearn.metrics import precision_recall_fscore_support, classification_report, confusion_matrix import torch import torch.nn as nn from utils import LABEL_NAME, isnotebook, set_seed, format_time if isnotebook(): from tqdm.notebook import tqdm else: from tqdm import tqdm def model_train(model, train_data_loader, valid_data_loader, test_data_loader, logger, optimizer, scheduler, num_epochs, seed, out_dir): # move model to gpu device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) if torch.cuda.device_count() > 1: model = nn.DataParallel(model) num_gpus = torch.cuda.device_count() logger.info("Let's use {} GPUs!".format(num_gpus)) # Set the seed value all over the place to make this reproducible. set_seed(seed=seed) # We'll store a number of quantities such as training and validation loss, # validation accuracy, and timings. training_stats = [] print_interval = 100 # Measure the total training time for the whole run. total_t0 = time.time() batch_size = train_data_loader.batch_size num_batch = len(train_data_loader) best_f1_score = { "weighted": 0, "averaged": 0 } best_test_f1_score = 0 # For each epoch... for epoch_i in range(0, num_epochs): # ======================================== # Training # ======================================== # Perform one full pass over the training set. logger.info("") logger.info('======== Epoch {:} / {:} ========'.format(epoch_i + 1, num_epochs)) logger.info('Training...') # Reset the total loss for this epoch. total_train_loss = 0 # Measure how long the training epoch takes. t_train = time.time() model.train() # For each batch of training data... for step, batch in tqdm(enumerate(train_data_loader), desc="Training Iteration", total=num_batch): # Progress update every 100 batches. if step % print_interval == 0 and not step == 0: # Calculate elapsed time in minutes. elapsed = format_time(time.time() - t_train) avg_train_loss = total_train_loss / print_interval # Report progress. logger.info('| epoch {:3d} | {:5d}/{:5d} batches | lr {:.3e} | loss {:5.3f} | Elapsed {:s}'.format( epoch_i+1, step, num_batch, scheduler.get_last_lr()[0], avg_train_loss, elapsed) ) total_train_loss = 0 training_stats.append( { 'epoch': epoch_i + 1, 'step': step, 'train loss': avg_train_loss, } ) # Unpack this training batch from our dataloader. # # As we unpack the batch, we'll also copy each tensor to the GPU using the # `to` method. # # `batch` contains four pytorch tensors: # "input_ids" # "attention_mask" # "token_type_ids" # "binarized_labels" b_review_input_ids = batch["review_input_ids"].to(device) b_review_attention_mask = batch["review_attention_mask"].to(device) b_review_token_type_ids = batch["review_token_type_ids"].to(device) b_agent_input_ids = batch["agent_input_ids"].to(device) b_agent_attention_mask = batch["agent_attention_mask"].to(device) b_agent_token_type_ids = batch["agent_token_type_ids"].to(device) b_binarized_label = batch["binarized_label"].to(device) model.zero_grad() (loss, _) = model(review_input_ids=b_review_input_ids, review_attention_mask=b_review_attention_mask, review_token_type_ids=b_review_token_type_ids, agent_input_ids=b_agent_input_ids, agent_attention_mask=b_agent_attention_mask, agent_token_type_ids=b_agent_token_type_ids, labels=b_binarized_label ) # Accumulate the training loss over all of the batches so that we can # calculate the average loss at the end. 
`loss` is a Tensor containing a # single value; the `.item()` function just returns the Python value # from the tensor. if num_gpus > 1: total_train_loss += loss.mean().item() loss.mean().backward() # use loss.mean().backward() instead of loss.backward() for multiple gpu trainings else: total_train_loss += loss.item() loss.backward() # Clip the norm of the gradients to 1.0. # This is to help prevent the "exploding gradients" problem. torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) # Update parameters and take a step using the computed gradient. # The optimizer dictates the "update rule"--how the parameters are # modified based on their gradients, the learning rate, etc. optimizer.step() scheduler.step() # End of training epoch # Measure how long this epoch took. training_time = format_time(time.time() - t_train) logger.info("") logger.info(" Training epoch took: {:s}".format(training_time)) # evaluate the model after one epoch. # ======================================== # Validation # ======================================== # After the completion of each training epoch, measure our performance on # our validation set. logger.info("") logger.info("Validating...") t_valid = time.time() model.eval() ave_valid_loss, valid_f1_table, cm_table, f1_score = model_validate(model=model, data_loader=valid_data_loader) # Measure how long this epoch took. validation_time = format_time(time.time() - t_valid) logger.info("") logger.info('| loss {:5.3f} | Elapsed {:s}'.format(ave_valid_loss, validation_time)) logger.info(" \n{:s}".format(valid_f1_table.to_string())) logger.info("") logger.info(" \n{:s}".format(cm_table.to_string())) # need to store the best model for key in best_f1_score.keys(): if best_f1_score[key] < f1_score[key]: # remove the old model: file_list = [f for f in out_dir.files() if f.name.endswith(".pt") and f.name.startswith(key)] for f in file_list: Path.remove(f) model_file = out_dir.joinpath('{:s}_epoch_{:02d}-f1_{:.3f}.pt'.format( key, epoch_i + 1, f1_score[key]) ) best_f1_score[key] = f1_score[key] if num_gpus > 1: torch.save(model.module.state_dict(), model_file) else: torch.save(model.state_dict(), model_file) # ======================================== # Test # ======================================== logger.info("") logger.info("Testing...") result_df = model_test(model=model, data_loader=test_data_loader) y_true = np.array(result_df["review_label"], dtype=np.bool) # This part may need double check y_pred = result_df["Probability"] > 0.5 report = classification_report(y_true, y_pred, output_dict=True) metrics_df = pd.DataFrame(report).transpose() metrics_df = metrics_df.sort_index() weighted_f1_score = metrics_df.loc['weighted avg', 'f1-score'] averaged_f1_score = metrics_df.loc['macro avg', 'f1-score'] best_test_f1_score = metrics_df.loc['weighted avg', 'f1-score'] \ if best_test_f1_score < metrics_df.loc['weighted avg', 'f1-score'] else best_test_f1_score metrics_df = metrics_df.astype(float).round(3) # Calculate confusion matrix tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel() cm_df = pd.DataFrame(columns = ['Predicted No', 'Predicted Yes'], index = ['Actual No', 'Actual Yes']) # adding rows to an empty # dataframe at existing index cm_df.loc['Actual No'] = [tn,fp] cm_df.loc['Actual Yes'] = [fn,tp] logger.info("use model: {} batch / {} step".format(epoch_i + 1, step)) logger.info("\n" + "=" * 50) logger.info("\n" + metrics_df.to_string()) logger.info("\n" + "=" * 50) logger.info("\n" + cm_df.to_string()) logger.info("best test F1 score: 
{}".format(best_test_f1_score)) logger.info("\n" + "=" * 50) # Below is to save the result files result_filename = "result_df_epoch_" + str(epoch_i + 1) + ".xlsx" result_df.to_excel(out_dir.joinpath(result_filename), index=False) logger.info("") logger.info("Training complete!") logger.info("Total training took {:} (h:mm:ss)".format(format_time(time.time() - total_t0))) # Save training_stats to csv file pd.DataFrame(training_stats).to_csv(out_dir.joinpath("model_train.log"), index=False) return model, optimizer, scheduler def model_validate(model, data_loader): # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. model.eval() device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) if torch.cuda.device_count() > 1: model = nn.DataParallel(model) label_prop = data_loader.dataset.dataset.label_prop() total_valid_loss = 0 batch_size = data_loader.batch_size num_batch = len(data_loader) y_pred, y_true = [], [] # Evaluate data for step, batch in tqdm(enumerate(data_loader), desc="Validation...", total=num_batch): b_review_input_ids = batch["review_input_ids"].to(device) b_review_attention_mask = batch["review_attention_mask"].to(device) b_review_token_type_ids = batch["review_token_type_ids"].to(device) b_agent_input_ids = batch["agent_input_ids"].to(device) b_agent_attention_mask = batch["agent_attention_mask"].to(device) b_agent_token_type_ids = batch["agent_token_type_ids"].to(device) b_binarized_label = batch["binarized_label"].to(device) # Tell pytorch not to bother with constructing the compute graph during # the forward pass, since this is only needed for backprop (training). with torch.no_grad(): (loss, logits,) = model(review_input_ids=b_review_input_ids, review_attention_mask=b_review_attention_mask, review_token_type_ids=b_review_token_type_ids, agent_input_ids=b_agent_input_ids, agent_attention_mask=b_agent_attention_mask, agent_token_type_ids=b_agent_token_type_ids, labels=b_binarized_label) total_valid_loss += loss.item() ### The sigmoid function is used for the two-class logistic regression, ### whereas the softmax function is used for the multiclass logistic regression # Version 1 # numpy_probas = logits.detach().cpu().numpy() # y_pred.extend(np.argmax(numpy_probas, axis=1).flatten()) # y_true.extend(b_binarized_label.cpu().numpy()) # Version 2 # transfored_logits = F.log_softmax(logits,dim=1) # numpy_probas = transfored_logits.detach().cpu().numpy() # y_pred.extend(np.argmax(numpy_probas, axis=1).flatten()) # y_true.extend(b_binarized_label.cpu().numpy()) # Version 3 # transfored_logits = torch.sigmoid(logits) # numpy_probas = transfored_logits.detach().cpu().numpy() # y_pred.extend(np.argmax(numpy_probas, axis=1).flatten()) # y_true.extend(b_binarized_label.cpu().numpy()) # New version - for num_label = 1 transfored_logits = torch.sigmoid(logits) numpy_probas = transfored_logits.detach().cpu().numpy() y_pred.extend(numpy_probas) y_true.extend(b_binarized_label.cpu().numpy()) # End of an epoch of validation # put model to train mode again. 
model.train() ave_loss = total_valid_loss / (num_batch * batch_size) y_pred = np.array(y_pred) y_pred[y_pred < 0.5] = 0 y_pred[y_pred >= 0.5] = 1 # Below is in case the input and target are not the same data format y_pred = np.array(y_pred, dtype=np.bool) y_true = np.array(y_true, dtype=np.bool) # compute the various f1 score for each label report = classification_report(y_true, y_pred, output_dict=True) metrics_df = pd.DataFrame(report).transpose() # metrics_df = pd.DataFrame(0, index=LABEL_NAME, columns=["Precision", "Recall", "F1","support"]) # metrics_df.Precision = precision_recall_fscore_support(y_true, y_pred)[0] # metrics_df.Recall = precision_recall_fscore_support(y_true, y_pred)[1] # metrics_df.F1 = precision_recall_fscore_support(y_true, y_pred)[2] # metrics_df.support = precision_recall_fscore_support(y_true, y_pred)[3] # y_pred = np.array(y_pred) # y_pred[y_pred < 0] = 0 # y_pred[y_pred > 0] = 1 # y_pred = np.array(y_pred, dtype=np.bool) # y_true = np.array(y_true, dtype=np.bool) # metrics_df = pd.DataFrame(0, index=LABEL_NAME, columns=["Precision", "Recall", "F1"], dtype=np.float) # # or_y_pred = np.zeros(y_pred.shape[0], dtype=np.bool) # # or_y_true = np.zeros(y_true.shape[0], dtype=np.bool) # for i in range(len(LABEL_NAME)): # metrics_df.iloc[i] = precision_recall_fscore_support( # y_true=y_true[:, i], y_pred=y_pred[:, i], average='binary', zero_division=0)[0:3] # or_y_pred = or_y_pred | y_pred[:, i] # or_y_true = or_y_true | y_true[:, i] metrics_df = metrics_df.sort_index() # metrics_df.loc['Weighted Average'] = metrics_df.transpose().dot(label_prop) # metrics_df.loc['Average'] = metrics_df.mean() # metrics_df.loc['Weighted Average', 'F1'] = 2 / (1/metrics_df.loc['Weighted Average', "Recall"] + # 1/metrics_df.loc['Weighted Average', "Precision"]) # metrics_df.loc['Average', 'F1'] = 2 / (1/metrics_df.loc['Average', "Recall"] + # 1/metrics_df.loc['Average', "Precision"]) weighted_f1_score = metrics_df.loc['weighted avg', 'f1-score'] averaged_f1_score = metrics_df.loc['macro avg', 'f1-score'] # Calculate confusion matrix tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel() cm_df = pd.DataFrame(columns = ['Predicted No', 'Predicted Yes'], index = ['Actual No', 'Actual Yes']) # adding rows to an empty # dataframe at existing index cm_df.loc['Actual No'] = [tn,fp] cm_df.loc['Actual Yes'] = [fn,tp] # pooled_f1_score = f1_score(y_pred=or_y_pred, y_true=or_y_true) return ave_loss, metrics_df, cm_df,{ "weighted": weighted_f1_score, "averaged": averaged_f1_score, } def model_test(model, data_loader): # Put the model in evaluation mode--the dropout layers behave differently # during evaluation. 
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.eval() model.to(device) if torch.cuda.device_count() > 1: model = nn.DataParallel(model) num_batch = len(data_loader) # Below need to modify if change the input review_id, review_label, hmd_text, head_cust_text = [], [], [], [] agent = [] pred_logits = [] # Evaluate data for step, batch in tqdm(enumerate(data_loader), desc="Inference...", total=num_batch): if "anecdote_lead_final" in batch.keys(): review_label.extend(batch["anecdote_lead_final"]) review_id.extend(batch["_id"].tolist()) hmd_text.extend(batch["hmd_comments"]) head_cust_text.extend(batch["head_cust"]) agent.extend(batch["new_transcript_agent"]) b_review_input_ids = batch["review_input_ids"].to(device) b_review_attention_mask = batch["review_attention_mask"].to(device) b_review_token_type_ids = batch["review_token_type_ids"].to(device) b_agent_input_ids = batch["agent_input_ids"].to(device) b_agent_attention_mask = batch["agent_attention_mask"].to(device) b_agent_token_type_ids = batch["agent_token_type_ids"].to(device) # Tell pytorch not to bother with constructing the compute graph during # the forward pass, since this is only needed for backprop (training). with torch.no_grad(): (logits,) = model(review_input_ids=b_review_input_ids, review_token_type_ids=b_review_token_type_ids, review_attention_mask=b_review_attention_mask, agent_input_ids=b_agent_input_ids, agent_token_type_ids=b_agent_token_type_ids, agent_attention_mask=b_agent_attention_mask ) if logits.detach().cpu().numpy().size == 1: pred_logits.extend(logits.detach().cpu().numpy().reshape(1,)) else: pred_logits.extend(logits.detach().cpu().numpy()) # End of an epoch of validation # put model to train mode again. model.train() pred_logits = np.array(pred_logits) pred_prob = np.exp(pred_logits) pred_prob = pred_prob / (1 + pred_prob) pred_label = pred_prob.copy() pred_label[pred_label < 0.5] = 0 pred_label[pred_label >= 0.5] = 1 # compute the f1 score for each tags d = {'Probability':pred_prob,'Anecdotes Prediction':pred_label} pred_df = pd.DataFrame(d, columns=['Probability','Anecdotes Prediction']) result_df = pd.DataFrame( { "review_id": review_id, "hmd_text": hmd_text, "head_cust_text": head_cust_text, "agent": agent } ) if len(review_label) != 0: result_df["review_label"] = [x.item() for x in review_label] return pd.concat([result_df, pred_df], axis=1).set_index("review_id") ``` optimizer and scheduler part are defined as below: ``` if args.full_finetuning: param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': args.decay}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] else: param_optimizer = list(model.classifier.named_parameters()) optimizer_grouped_parameters = [ {"params": [p for n, p in param_optimizer]} ] # Note: AdamW is a class from the huggingface library (as opposed to pytorch) optimizer = AdamW(optimizer_grouped_parameters, # or param_optimizer lr=args.lr, # args.learning_rate - default is 5e-5, our notebook had 1e-5 eps=1e-8) # args.adam_epsilon - default is 1e-8. # Create the learning rate scheduler. 
scheduler = get_linear_schedule_with_warmup( optimizer, num_warmup_steps=0, num_training_steps=len(data_loader["train"]) * args.num_epochs ) ``` And to run the model, I use below script: ``` model_train(model=model, train_data_loader=data_loader["train"], valid_data_loader=data_loader["valid"], test_data_loader=data_loader["test"], optimizer=optimizer, scheduler=scheduler, num_epochs=args.num_epochs, seed=args.seed, logger=logger, out_dir=out_dir) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8396/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8396/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8395
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8395/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8395/comments
https://api.github.com/repos/huggingface/transformers/issues/8395/events
https://github.com/huggingface/transformers/pull/8395
738,349,906
MDExOlB1bGxSZXF1ZXN0NTE3MjE0OTg2
8,395
[model card] fix md table
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Looks good now!\r\n<img width=\"645\" alt=\"Screenshot 2020-11-08 at 10 25 27\" src=\"https://user-images.githubusercontent.com/326577/98461482-748c2380-217a-11eb-9fe4-38d6dc555233.png\">\r\n" ]
1,604
1,604
1,604
CONTRIBUTOR
null
Just noticed I had a borked md table that wasn't rendering as a table on the site in a few cards; this PR fixes that. Thank you! @julien-c
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8395/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8395", "html_url": "https://github.com/huggingface/transformers/pull/8395", "diff_url": "https://github.com/huggingface/transformers/pull/8395.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8395.patch", "merged_at": 1604827515000 }
https://api.github.com/repos/huggingface/transformers/issues/8394
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8394/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8394/comments
https://api.github.com/repos/huggingface/transformers/issues/8394/events
https://github.com/huggingface/transformers/pull/8394
738,347,610
MDExOlB1bGxSZXF1ZXN0NTE3MjEzMjQ2
8,394
Fix run_mlm_wwm example
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Wait, the example for `run_mlm_wwm.py` doesn't seem to run:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_mlm_wwm.py\", line 334, in <module>\r\n main()\r\n File \"run_mlm_wwm.py\", line 161, in main\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n File \".../transformers/src/transformers/hf_argparser.py\", line 144, in parse_args_into_dataclasses\r\n raise ValueError(f\"Some specified arguments are not used by the HfArgumentParser: {remaining_args}\")\r\nValueError: Some specified arguments are not used by the HfArgumentParser: ['--dataset_name', 'wikitext', '--dataset_config_name', 'wikitext-2-raw-v1']\r\n```\r\n\r\nI will try to fix it too.", "Thanks for the fixes. I'm unsure the script can run on datasets hosted on the hub because of the need for reference files, that's why I hadn't added the `dataset_name` and `dataset_config_name` arguments in this one.", "You can still ignore the `dataset_name` arg and use the old behavior, so adding `dataset_name` should be fine.\r\nI want to mention that I did not run the example with ref file.", "Ok, then we should add a check in the post_init to make sure no reference file is passed if we're using `dataset_name`, otherwise it won't work,", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
CONTRIBUTOR
null
# What does this PR do? ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8394/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8394", "html_url": "https://github.com/huggingface/transformers/pull/8394", "diff_url": "https://github.com/huggingface/transformers/pull/8394.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8394.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8393
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8393/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8393/comments
https://api.github.com/repos/huggingface/transformers/issues/8393/events
https://github.com/huggingface/transformers/pull/8393
738,338,155
MDExOlB1bGxSZXF1ZXN0NTE3MjA2MjA2
8,393
Add BARThez model
{ "login": "moussaKam", "id": 28675016, "node_id": "MDQ6VXNlcjI4Njc1MDE2", "avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moussaKam", "html_url": "https://github.com/moussaKam", "followers_url": "https://api.github.com/users/moussaKam/followers", "following_url": "https://api.github.com/users/moussaKam/following{/other_user}", "gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}", "starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions", "organizations_url": "https://api.github.com/users/moussaKam/orgs", "repos_url": "https://api.github.com/users/moussaKam/repos", "events_url": "https://api.github.com/users/moussaKam/events{/privacy}", "received_events_url": "https://api.github.com/users/moussaKam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You might also test that the config is identical to `BartConfig`.\r\n", "Thank you @sshleifer for your review, I added some additional integration tests.", "Hi @sgugger, thank you for your review, I added all the proposed changes.\r\n\r\nAs for the tokenizer, the reason for having a vocab file, is that mBarthez uses the same sentencepiece tokenizer as mBart, while discarding tokens with non-Latin characters from the embedding layers. So basically the token-ids mapping has changed. I am not sure if it is possible to change the sentencepiece model itself. Anyway I think we can keep it like that for the moment. \r\n\r\nPlease let me know if you would like to recommend any other changes. ", "Hi @LysandreJik, thank you for the review. Yes you're right, actually I was hesitating whether to redefine barthez models or not. Anyway I modified the code as requested. Hope it's ok now. ", "@LysandreJik @julien-c Please let me know if any other modifications are required. Thank you in advance :) ", "Hi @moussaKam! Sorry about getting back to you so late. The issue with this PR is with the implementation of the tokenizer and its fast tokenizer counterpart. Right now there is no `BarthezTokenizerFast`, which we would really need as SentencePiece is not installed by default anymore.\r\n\r\nIt seems that the `BarthezTokenizer` is very similar to the `XLMRobertaTokenizer`, so I'm trying to see if we can't do a similar conversion between your tokenizer and the resulting tokenizer. It seems completely feasible.\r\n\r\nDo you think you could take a look at the [`convert_slow_tokenizers_checkpoints_to_fast.py` module](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_slow_tokenizers_checkpoints_to_fast.py), and at the [`XLMRobertaConverter` object available in the `convert_slow_tokenizer.py` module](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_slow_tokenizer.py#L466) to see if such a conversion if possible? Thank you.", "Hi @LysandreJik, I added the fast tokenizer, thank you for the tip! Please let me know if we're good now! :)", "Hi @LysandreJik, do you think we still need any changes?", "I think we can merge it as it is right now, and use the `legacy_format=False` when saving the slow tokenizer. We're thinking of a way to enable this by default for fast tokenizers, but this isn't blocking for this PR. Thanks!" ]
1,604
1,606
1,606
CONTRIBUTOR
null
# What does this PR do? Add BARThez models, tokenizer and docs. BARThez is a french seq2seq model that uses BART objective and architecture (https://arxiv.org/abs/2010.12321) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? @patrickvonplaten @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8393/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8393", "html_url": "https://github.com/huggingface/transformers/pull/8393", "diff_url": "https://github.com/huggingface/transformers/pull/8393.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8393.patch", "merged_at": 1606498303000 }
https://api.github.com/repos/huggingface/transformers/issues/8392
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8392/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8392/comments
https://api.github.com/repos/huggingface/transformers/issues/8392/events
https://github.com/huggingface/transformers/issues/8392
738,335,954
MDU6SXNzdWU3MzgzMzU5NTQ=
8,392
Training script "run_mlm.py" doesn't work for certain datasets
{ "login": "zeyuyun1", "id": 43428393, "node_id": "MDQ6VXNlcjQzNDI4Mzkz", "avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zeyuyun1", "html_url": "https://github.com/zeyuyun1", "followers_url": "https://api.github.com/users/zeyuyun1/followers", "following_url": "https://api.github.com/users/zeyuyun1/following{/other_user}", "gists_url": "https://api.github.com/users/zeyuyun1/gists{/gist_id}", "starred_url": "https://api.github.com/users/zeyuyun1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zeyuyun1/subscriptions", "organizations_url": "https://api.github.com/users/zeyuyun1/orgs", "repos_url": "https://api.github.com/users/zeyuyun1/repos", "events_url": "https://api.github.com/users/zeyuyun1/events{/privacy}", "received_events_url": "https://api.github.com/users/zeyuyun1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "> The training script should work for all datasets in huggingface datasets.\r\n\r\nTo clarify our intent with the examples we provide, this is untrue. We can't support every use case of every user. The script can easily be modified and has plenty of comments to help the user tune them to their needs.\r\n\r\nIn this particular instance however, I think your proposed fix doesn't break anything and allows us to support more datasets with little change, so I think we can add it. Don't hesitate to suggest a PR and tag me on it.", "Thanks for the reply~ Since this is my first time to do make a pr, I followed on the [guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) and run `make test` as suggested. However, most of the test failed.\r\n`947 failed, 188 passed, 124 skipped, 25 warnings in 37.09s`\r\nThose test failed before I even makes any changes to the code. Sorry if this is a newbie question but should we expect all the test to pass?", "Yes, the guide is what is also used by the CI to run all tests, so they should all pass. Make sure you do have the installation from source with all the dev dependencies installed. Apart from that, there is little I can do to help without seeing any error message.", "I finally found out the reason I fail most of the test. It's because I have another training on my gpu which interferes with testing for some reason. \r\n\r\nNow I have all test passed except for 8 of them. Here are those 8 test: \r\n\r\n================================ short test summary info ================================\r\nFAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multi_gpu_data_parallel_forward\r\nFAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_multi_gpu_data_parallel_forward\r\nFAILED tests/test_modeling_xlnet.py::XLNetModelTest::test_multi_gpu_data_parallel_forward\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_dynamic_shapes - IndexError...\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_evaluate - AssertionError: ...\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_load_best_model_at_end - As...\r\nFAILED tests/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training\r\nFAILED tests/test_trainer_distributed.py::TestTrainerDistributed::test_trainer - Runti...\r\n======== 8 failed, 4035 passed, 688 skipped, 608 warnings in 2855.87s (0:47:35) =========\r\n\r\nAgain, those test are run after I follows the guide by installing from the source with the dev dependencies **but before I made any change to the source code**. So I think just installing from the source with dev dependencies and run \"make test\" will be enough to reproduce the error.\r\n\r\nHere's the traceback error message for 4 of those erorr:\r\n\r\n**FAILED tests/test_modeling_ctrl.py::CTRLModelTest::test_multi_gpu_data_parallel_forward**\r\n```\r\n\r\n ...[-1.1481e-01, -8.0696e-02, 7.7946e-02, ..., -6.4783e-02,\r\n 1.2656e-01, 7.6060e-02]]]]], device='cuda:2'))\r\ndim = 0, destination = 0\r\n\r\n def gather(tensors, dim=0, destination=None, *, out=None):\r\n r\"\"\"Gathers tensors from multiple GPU devices.\r\n \r\n Arguments:\r\n tensors (Iterable[Tensor]): an iterable of tensors to gather.\r\n Tensor sizes in all dimensions other than :attr:`dim` have to match.\r\n dim (int, optional): a dimension along which the tensors will be\r\n concatenated. Default: ``0``.\r\n destination (torch.device, str, or int, optional): the output device.\r\n Can be CPU or CUDA. 
Default: the current CUDA device.\r\n out (Tensor, optional, keyword-only): the tensor to store gather result.\r\n Its sizes must match those of :attr:`tensors`, except for :attr:`dim`,\r\n where the size must equal ``sum(tensor.size(dim) for tensor in tensors)``.\r\n Can be on CPU or CUDA.\r\n \r\n .. note::\r\n :attr:`destination` must not be specified when :attr:`out` is specified.\r\n \r\n Returns:\r\n - If :attr:`destination` is specified,\r\n a tensor located on :attr:`destination` device, that is a result of\r\n concatenating :attr:`tensors` along :attr:`dim`.\r\n - If :attr:`out` is specified,\r\n the :attr:`out` tensor, now containing results of concatenating\r\n :attr:`tensors` along :attr:`dim`.\r\n \"\"\"\r\n if out is None:\r\n if destination == -1:\r\n warnings.warn(\r\n 'Using -1 to represent CPU tensor is deprecated. Please use a '\r\n 'device object or string instead, e.g., \"cpu\".')\r\n destination = _get_device_index(destination, allow_cpu=True, optional=True)\r\n> return torch._C._gather(tensors, dim, destination)\r\nE RuntimeError: Input tensor at index 2 has invalid shape [2, 4, 4, 7, 8], but expected [2, 5, 4, 7, 8]\r\n\r\n../../test_env_6/lib/python3.8/site-packages/torch/nn/parallel/comm.py:230: RuntimeError\r\n\r\n```\r\n\r\n\r\n\r\n**FAILED tests/test_trainer.py::TrainerIntegrationTest::test_dynamic_shapes - IndexError: list index out of range**\r\n```\r\n\r\ntests/test_trainer.py:366: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\nsrc/transformers/trainer.py:1308: in evaluate\r\n output = self.prediction_loop(\r\nsrc/transformers/trainer.py:1416: in prediction_loop\r\n for step, inputs in enumerate(dataloader):\r\n../../test_env_6/lib/python3.8/site-packages/torch/utils/data/dataloader.py:435: in __next__\r\n data = self._next_data()\r\n../../test_env_6/lib/python3.8/site-packages/torch/utils/data/dataloader.py:475: in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n../../test_env_6/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:44: in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n../../test_env_6/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:44: in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nself = <tests.test_trainer.DynamicShapesDataset object at 0x7f547e1b0250>, i = 48\r\n\r\n def __getitem__(self, i):\r\n> return {\"input_x\": self.xs[i], \"labels\": self.ys[i]}\r\nE IndexError: list index out of range\r\n```\r\n\r\n\r\n**FAILED tests/test_trainer.py::TrainerIntegrationTest::test_evaluate - AssertionError: ...**\r\n```\r\n\r\n_______________________________________________________________________ TrainerIntegrationTest.test_evaluate ________________________________________________________________________\r\n\r\nself = <tests.test_trainer.TrainerIntegrationTest testMethod=test_evaluate>\r\n\r\n def test_evaluate(self):\r\n trainer = get_regression_trainer(a=1.5, b=2.5, compute_metrics=AlmostAccuracy())\r\n results = trainer.evaluate()\r\n \r\n x, y = trainer.eval_dataset.x, trainer.eval_dataset.ys[0]\r\n pred = 1.5 * x + 2.5\r\n expected_loss = ((pred - y) ** 2).mean()\r\n> 
self.assertAlmostEqual(results[\"eval_loss\"], expected_loss)\r\nE AssertionError: 0.37259840965270996 != 0.3790205 within 7 places (0.006422102451324463 difference)\r\n\r\ntests/test_trainer.py:312: AssertionError\r\n```\r\n\r\n\r\n**FAILED tests/test_trainer.py::TrainerIntegrationTest::test_load_best_model_at_end - As...**\r\n\r\n```\r\n> self.check_saved_checkpoints(tmpdir, 64 // self.batch_size, total)\r\n\r\ntests/test_trainer.py:565: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_trainer.py:211: in check_saved_checkpoints\r\n self.assertTrue(os.path.isdir(checkpoint))\r\nE AssertionError: False is not true\r\n```", "Again, this is exactly what the CI does, so those failures are linked to your particular environment. Since you didn't tell us what it is we can't reproduce and fix potential issues.\r\nOne way to make sure you don't use your GPU if it's busy is to run `CUDA_VISIBLE_DEVICES='' make tests`.", "Thanks for the reply. Sorry I forgot to list my environment. But as you suggest, `CUDA_VISIBLE_DEVICES=''`, all the test pass when I run them without GPU using. You are right that the test failures are linked to my environment. It's irrelevant to this issue so I think I'll make another post about it.", "Hi newbie questions are better suited for the forum at https://discuss.huggingface.co\r\n\r\nWe try to keep the issues for bug reports and features/model requests.", "> Hi newbie questions are better suited for the forum at https://discuss.huggingface.co\r\n> \r\n> We try to keep the issues for bug reports and features/model requests.\r\n\r\nYeah, you are right. I will make another post in the forum instead. " ]
1,604
1,605
1,605
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform:3.4.0 - Python version:3.8.3 - PyTorch version (GPU?):3.6.0 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) MLM ## To reproduce Steps to reproduce the behavior: Basically run the training script in examples/run_mlm.py using wikipedia dataset ``` python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name wikipedia \ --dataset_config_name 20200501.en \ --do_train \ --output_dir /tmp/test-mlm \ ``` ## Erorr Message ``` Traceback (most recent call last): File "run_mlm2.py", line 388, in <module> main() File "run_mlm2.py", line 333, in main tokenized_datasets = tokenized_datasets.map( File "/home/zeyuy/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 283, in map { File "/home/zeyuy/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 284, in <dictcomp> k: dataset.map( File "/home/zeyuy/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/home/zeyuy/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "run_mlm2.py", line 315, in group_texts concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} File "run_mlm2.py", line 315, in <dictcomp> concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} TypeError: can only concatenate list (not "str") to list ``` ## Expected behavior The training script should work for all datasets in huggingface datasets. The problem is that the feature other than 'text' (or the feature we train on) interferes when we try to concatenate the tokenized 'text' ('input_ids', 'mask_ids' ...) from each instances. 
A quick fix would be, on line [295](https://github.com/huggingface/transformers/blob/77a257fc210a56f1fd0d75166ecd654cf58111f3/examples/language-modeling/run_mlm.py#L295), to change ` remove_columns=[text_column_name],` to ` remove_columns=column_names,`. Should I open a PR, or does someone want to make the quick fix? <!-- A clear and concise description of what you would expect to happen. -->
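To make the proposed change concrete, here is a minimal sketch of the preprocessing step around the quoted line. The variable names follow run_mlm.py but are reproduced from memory, so treat them as approximate; dropping all original columns keeps string features such as wikipedia's "title" out of the later `group_texts` concatenation.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Illustrative dataset/tokenizer; the real script builds these from command-line args.
datasets = load_dataset("wikipedia", "20200501.en")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

column_names = datasets["train"].column_names            # e.g. ["title", "text"]
text_column_name = "text" if "text" in column_names else column_names[0]

def tokenize_function(examples):
    return tokenizer(examples[text_column_name])

tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    remove_columns=column_names,   # was: remove_columns=[text_column_name]
)
```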
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8392/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8392/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8391
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8391/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8391/comments
https://api.github.com/repos/huggingface/transformers/issues/8391/events
https://github.com/huggingface/transformers/pull/8391
738,313,970
MDExOlB1bGxSZXF1ZXN0NTE3MTg5Mzk4
8,391
Bug fix for apply_chunking_to_forward chunking dimension check
{ "login": "pedrocolon93", "id": 5157240, "node_id": "MDQ6VXNlcjUxNTcyNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/5157240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pedrocolon93", "html_url": "https://github.com/pedrocolon93", "followers_url": "https://api.github.com/users/pedrocolon93/followers", "following_url": "https://api.github.com/users/pedrocolon93/following{/other_user}", "gists_url": "https://api.github.com/users/pedrocolon93/gists{/gist_id}", "starred_url": "https://api.github.com/users/pedrocolon93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pedrocolon93/subscriptions", "organizations_url": "https://api.github.com/users/pedrocolon93/orgs", "repos_url": "https://api.github.com/users/pedrocolon93/repos", "events_url": "https://api.github.com/users/pedrocolon93/events{/privacy}", "received_events_url": "https://api.github.com/users/pedrocolon93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
Chunking should be applied along the chunking dimension; previously, an exception was raised if the complete shape of the inputs did not match, rather than checking only the chunking dimension. # What does this PR do? Fixes #8349 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten Incorporating the fix from the issue at https://github.com/huggingface/transformers/issues/8349 Let me know if you want a unit test or something! Ran all the tests too: <img width="936" alt="Screen Shot 2020-11-07 at 3 29 26 PM" src="https://user-images.githubusercontent.com/5157240/98450932-1e35cb00-210f-11eb-855a-2457c44e243b.png">
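A minimal sketch of what "check only the chunking dimension" means in practice; the names `input_tensors` and `chunk_dim` mirror `apply_chunking_to_forward`, but this is an illustrative snippet, not the exact merged diff.

```python
import torch

def check_chunkable(input_tensors, chunk_dim):
    # Only the size along the chunking dimension has to match; other dims may differ.
    ref = input_tensors[0].shape[chunk_dim]
    if not all(t.shape[chunk_dim] == ref for t in input_tensors):
        raise ValueError("All input tensors have to be of the same shape along the chunking dimension")

# The second tensor differs in the last dimension, which is fine when chunking over dim 1.
check_chunkable([torch.zeros(2, 8, 16), torch.zeros(2, 8, 4)], chunk_dim=1)
```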
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8391/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8391", "html_url": "https://github.com/huggingface/transformers/pull/8391", "diff_url": "https://github.com/huggingface/transformers/pull/8391.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8391.patch", "merged_at": 1605040391000 }
https://api.github.com/repos/huggingface/transformers/issues/8390
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8390/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8390/comments
https://api.github.com/repos/huggingface/transformers/issues/8390/events
https://github.com/huggingface/transformers/issues/8390
738,301,968
MDU6SXNzdWU3MzgzMDE5Njg=
8,390
Trainer QA Model Label Names
{ "login": "ManavR123", "id": 17506262, "node_id": "MDQ6VXNlcjE3NTA2MjYy", "avatar_url": "https://avatars.githubusercontent.com/u/17506262?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ManavR123", "html_url": "https://github.com/ManavR123", "followers_url": "https://api.github.com/users/ManavR123/followers", "following_url": "https://api.github.com/users/ManavR123/following{/other_user}", "gists_url": "https://api.github.com/users/ManavR123/gists{/gist_id}", "starred_url": "https://api.github.com/users/ManavR123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ManavR123/subscriptions", "organizations_url": "https://api.github.com/users/ManavR123/orgs", "repos_url": "https://api.github.com/users/ManavR123/repos", "events_url": "https://api.github.com/users/ManavR123/events{/privacy}", "received_events_url": "https://api.github.com/users/ManavR123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
In the Trainer class, the default label names for a QA model are ["start_positions, end_positions"]. You can see this defined on the line I linked here. Is this supposed to be the case, or should it be a list with 2 strings? The tutorials for using Trainer in the transformers documentation show setting up a custom dataset whose encoding has two separate keys for the start and end positions. Should we be setting the label_names parameter in our training args, or is this a bug? If this is not a bug, could the documentation for setting up a custom QA dataset be updated so others don't get confused as well? https://github.com/huggingface/transformers/blob/77a257fc210a56f1fd0d75166ecd654cf58111f3/src/transformers/trainer.py#L331
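Whatever the default turns out to be, the behaviour can be made explicit from user code. A minimal sketch, assuming the `label_names` argument on `TrainingArguments` (which the linked trainer line reads from); the output directory is a placeholder, and the resulting args are handed to `Trainer` as usual.

```python
from transformers import TrainingArguments

# Two separate strings (not one combined string) matches the
# "start_positions"/"end_positions" keys produced when encoding a QA dataset.
training_args = TrainingArguments(
    output_dir="out",
    label_names=["start_positions", "end_positions"],
)
print(training_args.label_names)
# training_args is then passed to Trainer(model=..., args=training_args, ...).
```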
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8390/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8389
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8389/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8389/comments
https://api.github.com/repos/huggingface/transformers/issues/8389/events
https://github.com/huggingface/transformers/pull/8389
738,295,810
MDExOlB1bGxSZXF1ZXN0NTE3MTc2MDgw
8,389
[fsmt tokenizer] support lowercase tokenizer
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
This PR: * [x] implements support for `do_lower_case` in the fsmt tokenizer - requested [here](https://github.com/huggingface/transformers/pull/8374#issuecomment-723419221) * [x] adds a test to validate the new feature * [x] adds a case detector in the converter script @LysandreJik
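A hedged sketch of how the new option might be exercised once merged; the checkpoint name is a placeholder, and whether `do_lower_case` comes from the saved tokenizer config or an explicit override depends on the final implementation.

```python
from transformers import FSMTTokenizer

# Assumption: either the converted checkpoint's tokenizer config sets do_lower_case,
# or it is overridden explicitly at load time as below.
tok = FSMTTokenizer.from_pretrained("facebook/wmt19-ru-en", do_lower_case=True)
print(tok.tokenize("Hello World"))  # input is lowercased before BPE when the flag is on
```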
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8389/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8389", "html_url": "https://github.com/huggingface/transformers/pull/8389", "diff_url": "https://github.com/huggingface/transformers/pull/8389.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8389.patch", "merged_at": 1604936500000 }
https://api.github.com/repos/huggingface/transformers/issues/8388
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8388/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8388/comments
https://api.github.com/repos/huggingface/transformers/issues/8388/events
https://github.com/huggingface/transformers/issues/8388
738,292,018
MDU6SXNzdWU3MzgyOTIwMTg=
8,388
DataCollatorForWholeWordMask error persists after fix
{ "login": "uunal", "id": 2520197, "node_id": "MDQ6VXNlcjI1MjAxOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2520197?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uunal", "html_url": "https://github.com/uunal", "followers_url": "https://api.github.com/users/uunal/followers", "following_url": "https://api.github.com/users/uunal/following{/other_user}", "gists_url": "https://api.github.com/users/uunal/gists{/gist_id}", "starred_url": "https://api.github.com/users/uunal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/uunal/subscriptions", "organizations_url": "https://api.github.com/users/uunal/orgs", "repos_url": "https://api.github.com/users/uunal/repos", "events_url": "https://api.github.com/users/uunal/events{/privacy}", "received_events_url": "https://api.github.com/users/uunal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem can be fixed by changing `e[\"input_ids\"].tolist()` to `e[\"input_ids\"]` " ]
1,604
1,604
1,604
NONE
null
After the quick fix in #8379, I tried the same flow as in #8378. **It gives this error:** ``` /usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in __call__(self, examples) 321 for e in examples: 322 ref_tokens = [] --> 323 for id in e["input_ids"].tolist(): 324 token = self.tokenizer._convert_id_to_token(id) 325 ref_tokens.append(token) AttributeError: 'list' object has no attribute 'tolist' ``` When I tried with _DataCollatorForLanguageModeling_, the trainer works as usual.
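A minimal sketch of a defensive version of the failing loop that accepts both torch tensors and plain Python lists; `self.tokenizer` from the collator is replaced by a standalone tokenizer here for illustration, and the example ids are arbitrary.

```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def ids_to_ref_tokens(example):
    input_ids = example["input_ids"]
    # Handle both tensors (pre-fix behaviour) and plain lists (what the dataset now yields).
    if isinstance(input_ids, torch.Tensor):
        input_ids = input_ids.tolist()
    return [tokenizer._convert_id_to_token(i) for i in input_ids]

print(ids_to_ref_tokens({"input_ids": [101, 7592, 102]}))
print(ids_to_ref_tokens({"input_ids": torch.tensor([101, 7592, 102])}))
```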
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8388/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8387
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8387/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8387/comments
https://api.github.com/repos/huggingface/transformers/issues/8387/events
https://github.com/huggingface/transformers/pull/8387
738,283,705
MDExOlB1bGxSZXF1ZXN0NTE3MTY3NDg2
8,387
Add LMHeadModel similar to BertLMHeadModel to modeling_distilbert.py
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't know if this PR is in its final state and ready for review now, but here are a few pointers:\r\n\r\nThe PR looks good, but there are a few things to do before merging.\r\n\r\n- You should take a look at the failing tests and fix them if they're related to your PR (they should be)\r\n- You should implement the tests related to the added classes\r\n- You should add these classes to the auto models and init.py", "Hi @LysandreJik. Thanks for the comments. Apologies though, this PR is still a WIP so no need for review atm. I plan on adding my classes to auto models and init.py and running the appropriate tests before committing my final version.", "Great, thanks for letting me know. Will wait for your ping before reviewing again!", "HI @LysandreJik & @patrickvonplaten, I'm hoping to get your help with the only remaining test that's failing for this PR. The `tests/test_modeling_distilbert.py` script fails due to an error in the `tests/test_modeling_common.py` in line 966. I've detailed the error message below but in short what I think is happening is that check_equivalence fails on the following check:\r\n\r\n```\r\ncheck_equivalence(\r\n model, tuple_inputs, dict_inputs, {\"output_hidden_states\": True, \"output_attentions\": True}\r\n)\r\n```\r\n\r\nWhat's strange is that all the other equivalence checks pass and I haven't changed the hidden states or attention layers.\r\n\r\nError Message:\r\n\r\n```\r\ntests/test_modeling_common.py:1037: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests/test_modeling_common.py:1001: in check_equivalence\r\n recursive_check(tuple_output, dict_output)\r\ntests/test_modeling_common.py:990: in recursive_check\r\n recursive_check(tuple_iterable_value, dict_iterable_value)\r\ntests/test_modeling_common.py:990: in recursive_check\r\n recursive_check(tuple_iterable_value, dict_iterable_value)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ntuple_object = tensor([[[[0.4205, 0.0000, 0.2311, ..., 0.0000, 0.0000, 0.3484],\r\n [0.4074, 0.0000, 0.2666, ..., 0.0000, 0.0...71, 0.0000, 0.1483, ..., 0.1858, 0.1648, 0.1354],\r\n [0.1422, 0.0000, 0.2250, ..., 0.1523, 0.1632, 0.1669]]]])\r\ndict_object = tensor([[[ 1.3640e+00, 2.6618e-01, 5.9669e-01, ..., -7.1322e-01,\r\n -3.5144e-02, -1.3287e+00],\r\n [ 1....1.0870e-01],\r\n [-9.9694e-01, -7.1065e-01, -5.7949e-01, ..., 1.4234e+00,\r\n -2.1929e+00, 1.2390e-01]]])\r\n\r\n def recursive_check(tuple_object, dict_object):\r\n if isinstance(tuple_object, (List, Tuple)):\r\n for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object):\r\n recursive_check(tuple_iterable_value, dict_iterable_value)\r\n elif tuple_object is None:\r\n return\r\n else:\r\n self.assertTrue(\r\n torch.allclose(\r\n> set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5\r\n ),\r\n msg=f\"Tuple and dict output are not equal. Difference: {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`: {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. 
Dict has `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}.\",\r\n )\r\nE RuntimeError: The size of tensor a (7) must match the size of tensor b (768) at non-singleton dimension 3\r\n\r\ntests/test_modeling_common.py:996: RuntimeError\r\n```", "Hey @KMFODA, \r\n\r\nThanks for all the hard work already! We'll definitely help you get this merged :-) @patil-suraj, could you maybe help here since you recently added the caching mechanism for `BertForCausalLM`? Otherwise I'll try to take a look by the end of this week!", "Thanks @patrickvonplaten. I'm working through Lysandre's comment which was very helpful and hoping this will fix the remaining error. Will await any further comments / advice on how to merge this from you / suraj as well in case I've missed something else out.", "Hi @KMFODA \r\n\r\nThanks for all the work! \r\n\r\n`DistillBertForCausalLM` can be used as a decoder in `EncoderDecoder` models. To use it as a decoder we should now implement `cross attention` (or `encoder-decoder` attention) so that it can attend to `encoder_hidden_states` and caching the `past_key_values` to speed-up inference.\r\n\r\nWe should implement this feature analogous to how it is implemented in `Bert`. This means that we should\r\n\r\n- Add cross-attention and caching mechanism `MultiHeadSelfAttention` as shown here in Bert\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/src/transformers/models/bert/modeling_bert.py#L259\r\n\r\n- this means we need to pass `encoder_hidden_states`, `encoder_attention_mask`, `past_key_values` through the layers. So introduce these parameters in `forward` of `DistilBertForCausalLM`, `DistilBertModel`, `Transformer`, `TransformerBlock` and `MultiHeadSelfAttention`.\r\n\r\n- compute cross attention in `TransformerBlock` as shown here in `BertLayer`\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/src/transformers/models/bert/modeling_bert.py#L478-L480\r\n\r\n- Adjust the position embedding and attention masks accordingly as done in `BertEmbeddings` and `BertModel`\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/src/transformers/models/bert/modeling_bert.py#L194\r\nand\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/src/transformers/models/bert/modeling_bert.py#L194\r\n\r\nThe important thing to note here is that, when `past_key_values` are enabled, the actual seq length is `cur_seq_length + pas_key_values_length`\r\n\r\n- Add tests for `DistillBertForCausalLM` that checks that the decoder and caching mechanism works as expected:\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/tests/test_modeling_bert.py#L170\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/tests/test_modeling_bert.py#L203\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/tests/test_modeling_bert.py#L230\r\nhttps://github.com/huggingface/transformers/blob/d1b14c9b548de34b6606946482946008622967db/tests/test_modeling_bert.py#L263\r\n\r\n\r\nAll of this should be pretty similar to how it's implemented in Bert, so you could use that as a reference. Let me know if something is not clear. Happy to help you here :)\r\n\r\n", "Apolgoies, this PR got too messy when I tried to rebase to master and it closed automatically when I tried to revert back to an old commit. 
I'm working on @patil-suraj's very useful recommendations and will post a new PR when it passes all tests. Thanks!", "Hey @KMFODA \r\n\r\nGlad to know you're still working on this :) Don't hesitate to ping me if you're stuck or need some help." ]
1,604
1,615
1,614
CONTRIBUTOR
null
# What does this PR do? Similar to the `BertLMHeadModel` this PR aims to add a `DistilBertForCausalLM` model in `modeling_distilbert.py`. Fixes #7397 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
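For context, a minimal sketch of how the analogous Bert class is used as a causal LM today; the PR intends `DistilBertForCausalLM` to mirror this interface, but the final API is not confirmed here, so this only illustrates the pattern being ported.

```python
from transformers import BertConfig, BertLMHeadModel, BertTokenizer

config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True)
model = BertLMHeadModel.from_pretrained("bert-base-uncased", config=config)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"], return_dict=True)
print(outputs.loss)  # causal LM loss over the shifted sequence
```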
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8387/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8387", "html_url": "https://github.com/huggingface/transformers/pull/8387", "diff_url": "https://github.com/huggingface/transformers/pull/8387.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8387.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8386
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8386/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8386/comments
https://api.github.com/repos/huggingface/transformers/issues/8386/events
https://github.com/huggingface/transformers/issues/8386
738,281,030
MDU6SXNzdWU3MzgyODEwMzA=
8,386
ImportError: cannot import name 'is_flax_available' from 'transformers.file_utils'
{ "login": "brendan-AI", "id": 74021280, "node_id": "MDQ6VXNlcjc0MDIxMjgw", "avatar_url": "https://avatars.githubusercontent.com/u/74021280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brendan-AI", "html_url": "https://github.com/brendan-AI", "followers_url": "https://api.github.com/users/brendan-AI/followers", "following_url": "https://api.github.com/users/brendan-AI/following{/other_user}", "gists_url": "https://api.github.com/users/brendan-AI/gists{/gist_id}", "starred_url": "https://api.github.com/users/brendan-AI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brendan-AI/subscriptions", "organizations_url": "https://api.github.com/users/brendan-AI/orgs", "repos_url": "https://api.github.com/users/brendan-AI/repos", "events_url": "https://api.github.com/users/brendan-AI/events{/privacy}", "received_events_url": "https://api.github.com/users/brendan-AI/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems you have a mismatch between several `transformers` versions.\r\n\r\nWe generally recommend using virtual environments not relying on `conda`, as full `transformers` support is not yet available on `conda`. Can you reproduce using a pip virtual env?\r\n\r\n```py\r\npython -m venv .env\r\nsource .env/bin/activate\r\npip install transformers\r\n```", "that works in the virtual env, thank you!" ]
1,604
1,604
1,604
NONE
null
- `transformers` version: 3.4.0 - Platform: macOS-10.13.6-x86_64-i386-64bit - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no The problem arises when using trying to import: from transformers import AutoTokenizer, AutoModel Running on Jupyter notebook in conda virtual env full error message below: `--------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-50-13d0bae188dc> in <module> ----> 1 from transformers import AutoTokenizer, AutoModel 2 3 tokenizer = AutoTokenizer.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic") 4 model = AutoModel.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic") /opt/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/__init__.py in <module> 20 # Integrations: this needs to come before other ml imports 21 # in order to allow any 3rd-party code to initialize properly ---> 22 from .integrations import ( # isort:skip 23 is_comet_available, 24 is_optuna_available, /opt/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/integrations.py in <module> 57 58 from .file_utils import is_torch_tpu_available ---> 59 from .trainer_callback import TrainerCallback 60 from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun 61 from .utils import logging /opt/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/trainer_callback.py in <module> 24 from tqdm.auto import tqdm 25 ---> 26 from .trainer_utils import EvaluationStrategy 27 from .training_args import TrainingArguments 28 from .utils import logging /opt/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/trainer_utils.py in <module> 23 24 from .file_utils import is_tf_available, is_torch_available ---> 25 from .tokenization_utils_base import ExplicitEnum 26 27 /opt/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in <module> 30 import numpy as np 31 ---> 32 from .file_utils import ( 33 add_end_docstrings, 34 cached_path, ImportError: cannot import name 'is_flax_available' from 'transformers.file_utils' (/opt/anaconda3/envs/py38/lib/python3.8/site-packages/transformers/file_utils.py)`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8386/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8385
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8385/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8385/comments
https://api.github.com/repos/huggingface/transformers/issues/8385/events
https://github.com/huggingface/transformers/issues/8385
738,265,848
MDU6SXNzdWU3MzgyNjU4NDg=
8,385
TPU padding in seq2seq codes
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I understood this is implemented in Seq2SeqDataCollator with adding padding=longest. " ]
1,604
1,604
1,604
NONE
null
Hi, in the README of the seq2seq examples https://github.com/huggingface/transformers/tree/master/examples/seq2seq it is written: "All sequences should be padded to be of equal length, otherwise it leads to extremely slow training. (finetune_trainer.py does this automatically when running on TPU.)" Could you please point me to where in the code this is done? And what is meant here: isn't it the normal case that, in each batch, we pad the sentences to equal length? Could you tell me how this differs from what is normally done? Thanks, Best, Rabeeh
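A minimal sketch of the difference being asked about: on TPU, XLA recompiles whenever tensor shapes change, so padding every batch to one fixed length (static shapes) is much faster than the usual "pad to the longest sequence in this batch" (shapes vary per batch). The tokenizer name, texts, and lengths below are placeholders, not taken from finetune_trainer.py.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
texts = ["translate English to German: Hello", "translate English to German: How are you today?"]

# Usual GPU-style padding: each batch is padded only to its own longest sequence.
batch_dynamic = tokenizer(texts, padding="longest", return_tensors="pt")

# TPU-friendly padding: every batch is padded to the same fixed length.
batch_static = tokenizer(texts, padding="max_length", max_length=64, truncation=True, return_tensors="pt")

print(batch_dynamic["input_ids"].shape, batch_static["input_ids"].shape)
```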
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8385/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8384
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8384/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8384/comments
https://api.github.com/repos/huggingface/transformers/issues/8384/events
https://github.com/huggingface/transformers/issues/8384
738,259,015
MDU6SXNzdWU3MzgyNTkwMTU=
8,384
comment correction in modeling_bart.py
{ "login": "jc-hou", "id": 30210529, "node_id": "MDQ6VXNlcjMwMjEwNTI5", "avatar_url": "https://avatars.githubusercontent.com/u/30210529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jc-hou", "html_url": "https://github.com/jc-hou", "followers_url": "https://api.github.com/users/jc-hou/followers", "following_url": "https://api.github.com/users/jc-hou/following{/other_user}", "gists_url": "https://api.github.com/users/jc-hou/gists{/gist_id}", "starred_url": "https://api.github.com/users/jc-hou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jc-hou/subscriptions", "organizations_url": "https://api.github.com/users/jc-hou/orgs", "repos_url": "https://api.github.com/users/jc-hou/repos", "events_url": "https://api.github.com/users/jc-hou/events{/privacy}", "received_events_url": "https://api.github.com/users/jc-hou/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "Great catch, you are correct." ]
1,604
1,604
1,604
NONE
null
Hello, here: https://github.com/huggingface/transformers/blob/07708793f20ec3a949ccab32cc4fe0c7272dcc4c/src/transformers/modeling_bart.py#L588 I think the comment should be ``` # Convert to Bart output format: (BS, seq_len, model_dim) -> (seq_len, BS, model_dim) ``` Before the transpose, the shapes of x and encoder_hidden_states both look like (BS, seq_len, model_dim) to me. Thanks.
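A quick, self-contained check of the shape convention under discussion; the sizes are arbitrary.

```python
import torch

bs, seq_len, model_dim = 2, 7, 16
x = torch.zeros(bs, seq_len, model_dim)   # (BS, seq_len, model_dim)
x = x.transpose(0, 1)                     # -> (seq_len, BS, model_dim)
print(x.shape)                            # torch.Size([7, 2, 16])
```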
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8384/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8383
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8383/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8383/comments
https://api.github.com/repos/huggingface/transformers/issues/8383/events
https://github.com/huggingface/transformers/issues/8383
738,248,179
MDU6SXNzdWU3MzgyNDgxNzk=
8,383
finetune_trainer crashes in the beginning
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "My gpu instances does not have access to download the files, could you please assist me? could it be not hardcoded in the codes? so user could download the files and put them in the directory?", "could you tell me what is this trying to download in the beginning? ", "apparently the code downloads some stuff in datasets/utils/file_utils.py ? could you tell me how to resolve this issue, I cannot really get the code to run on a gpu without access to download from amazon? thanks ", "my gpu instance cannot download \"\"https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets\" inside this file." ]
1,604
1,608
1,608
NONE
null
Hi I am trying to run ``` python finetune_trainer.py \ --model_name_or_path /idiap/temp/rkarimi/pretrained_transformers/t5-small/ \ --data_dir=/home/rabeeh/data/test_data/wmt_en_ro \ --learning_rate=3e-5 \ --output_dir=/home/rabeeh/temp \ --max_source_length=512 \ --max_target_length=56 \ --do_train --do_predict \ --overwrite_output_dir\ "$@" ``` I got the following issue, thanks for your help (internship) rkarimi@vgnh001:/idiap/user/rkarimi/dev/internship/seq2seq$ bash run_idiap.sh Traceback (most recent call last): File "finetune_trainer.py", line 7, in <module> from seq2seq_trainer import Seq2SeqTrainer File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/seq2seq_trainer.py", line 7, in <module> from transformers import PreTrainedModel, Trainer, logging File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/transformers/integrations.py", line 81, in <module> from .file_utils import is_torch_tpu_available # noqa: E402 File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/transformers/file_utils.py", line 87, in <module> import datasets # noqa: F401 File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/__init__.py", line 27, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 40, in <module> from .arrow_reader import ArrowReader File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_reader.py", line 31, in <module> from .utils import cached_path, logging File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/__init__.py", line 20, in <module> from .download_manager import DownloadManager, GenerateMode File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 25, in <module> from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 118, in <module> os.makedirs(HF_MODULES_CACHE, exist_ok=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/os.py", line 213, in makedirs makedirs(head, exist_ok=exist_ok) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/os.py", line 223, in makedirs mkdir(name, mode) FileNotFoundError: [Errno 2] No such file or directory: '/idiap/home/rkarimi/.cache/huggingface'
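One hedged workaround for the failing cache-directory creation: point the Hugging Face caches at a writable, pre-populated location before the imports run. The environment-variable names below are the ones the datasets library consults for its cache locations (an assumption based on the traceback's `HF_MODULES_CACHE` path); the directory paths are placeholders.

```python
import os

# Placeholders: any writable directory on the cluster's shared filesystem will do.
os.environ["HF_HOME"] = "/idiap/temp/rkarimi/hf_cache"
os.environ["HF_DATASETS_CACHE"] = "/idiap/temp/rkarimi/hf_cache/datasets"

# Only import transformers/datasets *after* the variables are set,
# since the cache locations are resolved at import time in the traceback above.
import transformers  # noqa: E402
```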
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8383/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8382
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8382/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8382/comments
https://api.github.com/repos/huggingface/transformers/issues/8382/events
https://github.com/huggingface/transformers/pull/8382
738,243,811
MDExOlB1bGxSZXF1ZXN0NTE3MTM4NTkz
8,382
[s2s/distill] hparams.tokenizer_name = hparams.teacher
{ "login": "ShichaoSun", "id": 13548568, "node_id": "MDQ6VXNlcjEzNTQ4NTY4", "avatar_url": "https://avatars.githubusercontent.com/u/13548568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShichaoSun", "html_url": "https://github.com/ShichaoSun", "followers_url": "https://api.github.com/users/ShichaoSun/followers", "following_url": "https://api.github.com/users/ShichaoSun/following{/other_user}", "gists_url": "https://api.github.com/users/ShichaoSun/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShichaoSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShichaoSun/subscriptions", "organizations_url": "https://api.github.com/users/ShichaoSun/orgs", "repos_url": "https://api.github.com/users/ShichaoSun/repos", "events_url": "https://api.github.com/users/ShichaoSun/events{/privacy}", "received_events_url": "https://api.github.com/users/ShichaoSun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You need to run `make fixup`" ]
1,604
1,605
1,605
CONTRIBUTOR
null
fix bug that initialization of student model without tokenizer name # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
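A minimal sketch of the change the PR title describes; `hparams` stands in for the argparse namespace used by distillation.py and the teacher name is illustrative, so the exact downstream tokenizer-loading call is not reproduced here.

```python
from argparse import Namespace

# Stand-in for the parsed distillation arguments (illustrative values only).
hparams = Namespace(teacher="facebook/bart-large-cnn", tokenizer_name=None)

# The fix from the PR title: the student's tokenizer is loaded from the teacher checkpoint
# instead of from a student directory that contains no tokenizer files.
hparams.tokenizer_name = hparams.teacher
print(hparams.tokenizer_name)  # facebook/bart-large-cnn
```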
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8382/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8382", "html_url": "https://github.com/huggingface/transformers/pull/8382", "diff_url": "https://github.com/huggingface/transformers/pull/8382.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8382.patch", "merged_at": 1605018722000 }
https://api.github.com/repos/huggingface/transformers/issues/8381
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8381/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8381/comments
https://api.github.com/repos/huggingface/transformers/issues/8381/events
https://github.com/huggingface/transformers/pull/8381
738,236,108
MDExOlB1bGxSZXF1ZXN0NTE3MTMyOTI4
8,381
Initialize the student's tokenizer name (update distillation.py)
{ "login": "ShichaoSun", "id": 13548568, "node_id": "MDQ6VXNlcjEzNTQ4NTY4", "avatar_url": "https://avatars.githubusercontent.com/u/13548568?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShichaoSun", "html_url": "https://github.com/ShichaoSun", "followers_url": "https://api.github.com/users/ShichaoSun/followers", "following_url": "https://api.github.com/users/ShichaoSun/following{/other_user}", "gists_url": "https://api.github.com/users/ShichaoSun/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShichaoSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShichaoSun/subscriptions", "organizations_url": "https://api.github.com/users/ShichaoSun/orgs", "repos_url": "https://api.github.com/users/ShichaoSun/repos", "events_url": "https://api.github.com/users/ShichaoSun/events{/privacy}", "received_events_url": "https://api.github.com/users/ShichaoSun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
Fixes a bug where the student's tokenizer could be initialized without a `tokenizer_name`.
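Not part of the original PR record, but for context: a minimal sketch of the kind of change the description above refers to, assuming the distillation module resolves its tokenizer from an `hparams.tokenizer_name` field and that falling back to the teacher checkpoint is acceptable. The helper name and attributes are hypothetical, not the actual code in `examples/seq2seq/distillation.py`.

```python
from transformers import AutoTokenizer

def resolve_student_tokenizer(hparams):
    # Hypothetical helper: prefer an explicitly passed tokenizer_name, otherwise
    # fall back to the teacher checkpoint, so the student's tokenizer is never
    # created without a tokenizer name.
    tokenizer_name = getattr(hparams, "tokenizer_name", None) or hparams.teacher
    return AutoTokenizer.from_pretrained(tokenizer_name)
```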
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8381/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8381", "html_url": "https://github.com/huggingface/transformers/pull/8381", "diff_url": "https://github.com/huggingface/transformers/pull/8381.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8381.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8380
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8380/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8380/comments
https://api.github.com/repos/huggingface/transformers/issues/8380/events
https://github.com/huggingface/transformers/issues/8380
738,232,434
MDU6SXNzdWU3MzgyMzI0MzQ=
8,380
training T5 on multiple datasets
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, these questions are better suited for the forum as we try to keep the github issues for bugs/feature requests only. Could you open an issue on the [forum](https://discuss.huggingface.co) instead?", "Hi, the link is broken. thanks\n\nOn Mon, Nov 9, 2020 at 3:50 PM Lysandre Debut <[email protected]>\nwrote:\n\n> Hi, these questions are better suited for the forum as we try to keep the\n> github issues for bugs/feature requests only. Could you open an issue on\n> the forum <https://hdiscuss.huggingface.co> instead?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8380#issuecomment-724060649>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHZTAWRY5UBXSGMBOVTSO76R7ANCNFSM4TNT3AXA>\n> .\n>\n", "fixed!" ]
1,604
1,604
1,604
NONE
null
Hi, I need to train finetune_trainer with T5 on multiple datasets. Could you give me some pointers on how to build a dataloader from multiple datasets in the simplest possible way? Thanks.
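A short, hedged sketch of one common answer to the question above (not from the original thread): if each dataset already yields tokenized features, `torch.utils.data.ConcatDataset` can chain them, and the result can be passed to a `DataLoader` or to the `Trainer` as `train_dataset`. The toy dataset class below is only a stand-in for real tokenized data.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class ToySeq2SeqDataset(Dataset):
    # Stand-in for a real tokenized dataset; each item mimics seq2seq model inputs.
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, i):
        return {"input_ids": torch.tensor([i, i + 1]), "labels": torch.tensor([i])}

combined = ConcatDataset([ToySeq2SeqDataset(10), ToySeq2SeqDataset(5)])
loader = DataLoader(combined, batch_size=4)
print(len(combined))  # 15 examples drawn from both datasets
```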
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8380/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8380/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8379
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8379/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8379/comments
https://api.github.com/repos/huggingface/transformers/issues/8379/events
https://github.com/huggingface/transformers/pull/8379
738,227,896
MDExOlB1bGxSZXF1ZXN0NTE3MTI2OTcx
8,379
Fix DataCollatorForWholeWordMask
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Oh I had missed that this one was a subclass of `DataCollatorForLanguageModeling`, thanks for fixing. The last one isn't a subclass, but we can certainly change it to use the same function too!", "Merging the PR to have the fix as quickly as possible, but if you want to add a test of this data collator, please go ahead! That will make the library more robust." ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? This is a quick fix for #8378. There is another `_tensorize_batch` in another class; I think we can replace it too. And there might be tests missing? @sgugger #8308
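For illustration only (this is not the diff of the PR itself): the kind of padding helper that `_tensorize_batch` used to provide can be written with `pad_sequence`; the padding value is assumed to be the tokenizer's pad token id.

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def tensorize_batch(examples, pad_token_id):
    # Stack same-length examples directly, otherwise pad to the longest one in the batch.
    examples = [e if isinstance(e, torch.Tensor) else torch.tensor(e, dtype=torch.long) for e in examples]
    if all(e.size(0) == examples[0].size(0) for e in examples):
        return torch.stack(examples, dim=0)
    return pad_sequence(examples, batch_first=True, padding_value=pad_token_id)
```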
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8379/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8379", "html_url": "https://github.com/huggingface/transformers/pull/8379", "diff_url": "https://github.com/huggingface/transformers/pull/8379.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8379.patch", "merged_at": 1604771517000 }
https://api.github.com/repos/huggingface/transformers/issues/8378
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8378/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8378/comments
https://api.github.com/repos/huggingface/transformers/issues/8378/events
https://github.com/huggingface/transformers/issues/8378
738,223,809
MDU6SXNzdWU3MzgyMjM4MDk=
8,378
DataCollatorForWholeWordMask is missing _tensorize_batch method
{ "login": "RainIwakura", "id": 8593585, "node_id": "MDQ6VXNlcjg1OTM1ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8593585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RainIwakura", "html_url": "https://github.com/RainIwakura", "followers_url": "https://api.github.com/users/RainIwakura/followers", "following_url": "https://api.github.com/users/RainIwakura/following{/other_user}", "gists_url": "https://api.github.com/users/RainIwakura/gists{/gist_id}", "starred_url": "https://api.github.com/users/RainIwakura/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RainIwakura/subscriptions", "organizations_url": "https://api.github.com/users/RainIwakura/orgs", "repos_url": "https://api.github.com/users/RainIwakura/repos", "events_url": "https://api.github.com/users/RainIwakura/events{/privacy}", "received_events_url": "https://api.github.com/users/RainIwakura/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@RedneckedCrake Hey please checkout my PR and let me know if it works!", "Should be fixed by #8379, thanks for flagging this!" ]
1,604
1,604
1,604
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Google Colab - Python version: 3.6 - PyTorch version (GPU?): Yes - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @sgugger <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...):BERT The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Not sure if it's considered official, but it's just finetuning for language modeling on a custom dataset. ## To reproduce Steps to reproduce the behavior: ```python from transformers import DataCollatorForWholeWordMask from transformers import Trainer, TrainingArguments from transformers import TextDataset from transformers import BertTokenizer from transformers import BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking') tokenizer.add_tokens(["[new]"]) model = BertForMaskedLM.from_pretrained('bert-large-uncased-whole-word-masking') model.resize_token_embeddings(len(tokenizer)) model.train() dataset = TextDataset( tokenizer=tokenizer, file_path="./bert_train_set.txt", block_size=512 ) data_collator = DataCollatorForWholeWordMask( tokenizer ) training_args = TrainingArguments( output_dir="./BERT", overwrite_output_dir=True, num_train_epochs=10, per_gpu_train_batch_size=16, save_steps=500, save_total_limit=2, learning_rate = 2e-5 ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, prediction_loss_only=True, ) trainer.train() ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` /usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py in __call__(self, examples) 316 examples = [{"input_ids": e} for e in examples] 317 --> 318 batch_input = self._tensorize_batch(input_ids) 319 320 mask_labels = [] AttributeError: 'DataCollatorForWholeWordMask' object has no attribute '_tensorize_batch' ``` ## Expected behavior Trainer should proceed to feed tensors to the model. I think someone just forgot to copypaste _tensorize_batch from DataCollatorForPermutationLanguageModeling, either that or inheritance is off.
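Until a release containing the fix is available, a possible workaround is sketched below. It is untested and assumes transformers 3.4.0, where `DataCollatorForWholeWordMask` still calls `self._tensorize_batch`; the subclass simply supplies that missing helper.

```python
import torch
from torch.nn.utils.rnn import pad_sequence
from transformers import DataCollatorForWholeWordMask

class PatchedWholeWordMaskCollator(DataCollatorForWholeWordMask):
    # Re-adds the padding helper the parent class expects but no longer defines.
    def _tensorize_batch(self, examples):
        examples = [e if isinstance(e, torch.Tensor) else torch.tensor(e, dtype=torch.long) for e in examples]
        if all(e.size(0) == examples[0].size(0) for e in examples):
            return torch.stack(examples, dim=0)
        if self.tokenizer.pad_token_id is None:
            raise ValueError("This tokenizer has no pad token, so a ragged batch cannot be padded.")
        return pad_sequence(examples, batch_first=True, padding_value=self.tokenizer.pad_token_id)

# data_collator = PatchedWholeWordMaskCollator(tokenizer)  # drop-in replacement in the script above
```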
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8378/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8377
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8377/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8377/comments
https://api.github.com/repos/huggingface/transformers/issues/8377/events
https://github.com/huggingface/transformers/pull/8377
738,171,351
MDExOlB1bGxSZXF1ZXN0NTE3MDg1OTU5
8,377
[fsmt convert script] fairseq broke chkpt data - fixing that
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "+1 It would be good to document supported/tested versions, but I think it would be useful to try to keep up to date (to the extent possible.) I think most people choose whether to upload to hf **after** they have trained a model, and simply won't upload if their version if it is not supported. (Rather than re-train with a supported `fairseq` version.) ", "@stas00 You could also write an integration test that runs only if fairseq is installed (probably in examples), that tests conversion. This wouldn't get run by CI, but might make trouble-shooting easier.", "Well, in this case, they haven't changed the output for the model - they changed their hub tools instead (i.e. how the model is loaded) - so no re-training is necessary. So requiring a fixed fairseq model would probably be sufficient.\r\n\r\nThe problem is that they stopped making releases last Dec. https://github.com/pytorch/fairseq/tags How can we tell a developer to install version X if it doesn't exist (or check for it)?\r\n\r\nI would leave it as is for now, since so far there is barely any interest in fsmt and should it start getting more traction I'll be all over it making it super-resilient. ", "Makes sense, feel free to merge @LysandreJik " ]
1,604
1,604
1,604
CONTRIBUTOR
null
This PR's main purpose: * [x] adjusts for breaking fairseq [changes](https://github.com/pytorch/fairseq/commit/3b27ed7996b0315f471c795cf9b7dfcc18467cbe) Plus: * [x] adds support for older `bpecodes` filenames - specifically `code` in iwslt14 * [x] improves reporting * [x] removes CDN note that will be shortly become irrelevant https://github.com/huggingface/transformers/pull/8324 The break was manifesting during conversion in: ``` Traceback (most recent call last): File "src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py", line 271, in <module> convert_fsmt_checkpoint_to_pytorch(args.fsmt_checkpoint_path, args.pytorch_dump_folder_path) File "src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py", line 118, in convert_fsmt_checkpoint_to_pytorch src_lang = args["source_lang"] KeyError: 'source_lang' ``` @sshleifer, @LysandreJik
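To make the nature of the breakage more concrete, a hedged sketch (not the exact code of the conversion script): newer fairseq checkpoints no longer expose the model arguments as one flat `argparse.Namespace` under `chkpt["args"]`, so the converter has to handle both layouts.

```python
def get_model_args(chkpt):
    # Hypothetical compatibility shim; exact key/attribute names depend on the fairseq version.
    raw = chkpt["args"]
    if hasattr(raw, "source_lang"):
        # Older fairseq: a flat argparse.Namespace holding all training arguments.
        return dict(vars(raw))
    # Newer fairseq: the model arguments live one level deeper, under "model".
    return dict(vars(raw["model"]))
```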
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8377/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8377", "html_url": "https://github.com/huggingface/transformers/pull/8377", "diff_url": "https://github.com/huggingface/transformers/pull/8377.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8377.patch", "merged_at": 1604941063000 }
https://api.github.com/repos/huggingface/transformers/issues/8376
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8376/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8376/comments
https://api.github.com/repos/huggingface/transformers/issues/8376/events
https://github.com/huggingface/transformers/pull/8376
738,125,902
MDExOlB1bGxSZXF1ZXN0NTE3MDQ3ODk3
8,376
[s2s] distill t5-large -> t5-small
{ "login": "sbhaktha", "id": 5631150, "node_id": "MDQ6VXNlcjU2MzExNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5631150?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbhaktha", "html_url": "https://github.com/sbhaktha", "followers_url": "https://api.github.com/users/sbhaktha/followers", "following_url": "https://api.github.com/users/sbhaktha/following{/other_user}", "gists_url": "https://api.github.com/users/sbhaktha/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbhaktha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbhaktha/subscriptions", "organizations_url": "https://api.github.com/users/sbhaktha/orgs", "repos_url": "https://api.github.com/users/sbhaktha/repos", "events_url": "https://api.github.com/users/sbhaktha/events{/privacy}", "received_events_url": "https://api.github.com/users/sbhaktha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "For your test, you could check the code runs correctly with `dict(teacher=BART_TINY_RANDOM, student=T5_TINY_RANDOM)`", "I've pushed some fixes to get the tests to run successfully! However, I realized that an issue I was wondering about earlier is indeed important, and that is, dealing with the student and teacher models having different tokenizers, which would make the `input_ids` different in the two cases. However since the (language) models we are looking at all have the same output space (set of tokens) as the input space, in order for the logits from the output layers to be comparable, they have to have the same output space, and hence the same set of tokens. Does this make sense? And if so, it is a fair assumption that the teacher and student models should have the same tokenizer (specified via the existing `tokenizer_name` parameter), correct? In this case, unit testing distillation from `BART_TINY` to `T5_TINY` is not valid (and I do run into errors). I did try `T5_TINY` to `T5_TINY` and that works fine but of course it's not very satisfying. Can I add a test case for `T5_SMALL` to `T5_TINY`? Worried about slowing down the unit tests. Please let me know your suggestion. Sorry for the long comment!", "For testing,\r\nI just ran\r\n```bash\r\npython make_student.py patrickvonplaten/t5-tiny-random t5-tinier-random -e 1 -d 1\r\ntransformers-cli upload t5-tinier-random\r\n```\r\n\r\nSo now you can test distilling `patrickvonplaten/t5-tiny-random` (2/2 layers) to `sshleifer/t5-tinier-random` (1/1 layer).\r\n\r\n\r\n\r\n", "You also need to run `make fixup`", "> You also need to run `make fixup`\r\n\r\nI get the following message:\r\n```\r\n/bin/sh: 3: black: not found\r\n/bin/sh: 4: isort: not found\r\n/bin/sh: 5: flake8: not found\r\nMakefile:7: recipe for target 'modified_only_fixup' failed\r\nmake: *** [modified_only_fixup] Error 127\r\n```\r\n\r\nAny idea?", "> > You also need to run `make fixup`\r\n> \r\n> I get the following message:\r\n> \r\n> ```\r\n> /bin/sh: 3: black: not found\r\n> /bin/sh: 4: isort: not found\r\n> /bin/sh: 5: flake8: not found\r\n> Makefile:7: recipe for target 'modified_only_fixup' failed\r\n> make: *** [modified_only_fixup] Error 127\r\n> ```\r\n> \r\n> Any idea?\r\n\r\nAh. I wasn't familiar with these pre-commit libraries. Looks like I need to `pip install` those...", "I've added a unit test to distill from `T5_TINIER` to `T5_TINY` and it passes. Also ran `make fixup` and fixed formatting issues. Could you please review again?\r\nAll CircleCI tests above pass except the first one, `check_code_quality`. 
I checked the details on that one and they seem to be unrelated:\r\n\r\n```\r\nRun python -c \"import torch; print(torch.hub.list('huggingface/transformers:$BRANCH'))\"\r\nDownloading: \"https://github.com/huggingface/transformers/archive/add_student_base_model.zip\" to /home/runner/.cache/torch/hub/add_student_base_model.zip\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/torch/hub.py\", line 269, in list\r\n repo_dir = _get_cache_or_reload(github, force_reload, True)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/torch/hub.py\", line 141, in _get_cache_or_reload\r\n download_url_to_file(url, cached_file, progress=False)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/torch/hub.py\", line 425, in download_url_to_file\r\n u = urlopen(req)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 222, in urlopen\r\n return opener.open(url, data, timeout)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 531, in open\r\n response = meth(req, response)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 641, in http_response\r\n 'http', request, response, code, msg, hdrs)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 563, in error\r\n result = self._call_chain(*args)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 503, in _call_chain\r\n result = func(*args)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 755, in http_error_302\r\n return self.parent.open(new, timeout=req.timeout)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 531, in open\r\n response = meth(req, response)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 641, in http_response\r\n 'http', request, response, code, msg, hdrs)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 569, in error\r\n return self._call_chain(*args)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 503, in _call_chain\r\n result = func(*args)\r\n File \"/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/urllib/request.py\", line 649, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\r\nurllib.error.HTTPError: HTTP Error 404: Not Found\r\nError: Process completed with exit code 1.\r\n```", "thank you for the style fix! are you going to merge?", "@danyaljj FYI", "CircleCI `check_code_quality` is failing. I see--\r\n```\r\n#!/bin/bash -eo pipefail\r\nblack --check examples tests src utils\r\nwould reformat /home/circleci/transformers/examples/seq2seq/distillation.py\r\nOh no! 💥 💔 💥\r\n1 file would be reformatted, 562 files would be left unchanged.\r\n\r\nExited with code exit status 1\r\nCircleCI received exit code 1\r\n```\r\n\r\nBut not sure how to format it. Tried `black --code examples/seq2seq/distillation.py` but that doesn't work cause it expects code as a string, not a filename. How do you format a file?", "pip install -e .[dev]\nmake fixup \n", "> make fixup\r\n\r\nI did run `make fixup` before committing. But just ran both your above steps again. 
I don't get any errors from `distillation.py`.", "On a high-level, do you have a fine-tuned t5-large that you want to distill?\r\nThis makes the code a lot more complex so I am trying to understand the use-case better.", "Yes. We wish to create a small model (under 500 MB), distilled from a large or 11B T5 model. For this we want to start with a small T5 model and train using the larger T5 as the teacher. Conceptually distillation can be done from any large model to any small model as long they share the same output space, so this enables that.", "> For this we want to start with a small T5 model and train using the larger T5 as the teacher. \r\n\r\nMy point was that this only really makes sense if the large t5 is fine-tuned.", "ah, yes. we are using a fine-tuned large model.", "Cleaned up some things and now ready to merge. Thanks for the contribution @sbhaktha !", "Thanks Sam!" ]
1,604
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? @sshleifer : This PR addresses the feature I was requesting on [this thread](https://discuss.huggingface.co/t/distillation-create-student-model-from-a-different-base-model-than-teacher/1501/8). It adds a way to specify a base model to initialize the student model with (in `distillation.py`), instead of necessarily starting from the same base model as the teacher and selectively copying layers. Following are the main changes: 1. Accepts a new command line argument `--student` that can take a base model name, such as `t5-small`. 2. If this new argument is specified, simply creates a student model from scratch; if not, creates the student model per the original code. 3. In the `_step` function, calls the teacher decoder with the teacher's encoder outputs instead of student encoder outputs. 4. When teacher and student base models are different, since they can be of different architectures, and hence have different hidden sizes, does not calculate hidden loss. Sample command to run this: ``` python distillation.py --teacher t5-large --data_dir $NQOPEN_DIR \ --student t5-small --tokenizer_name t5-small \ --teacher_tokenizer_name t5-large \ --learning_rate=3e-4 --freeze_encoder --freeze_embeds \ --do_train --train_batch_size 32 \ --do_predict --n_train 3 \ --model_name_or_path t5-large --eval_beams 2 --eval_max_gen_length 142 \ --val_check_interval 0.25 --n_val 1 \ --output_dir distilled-t5 --gpus 1 --logger_name wandb ``` I was able to get this training to run. I am not sure what tests I need to run, and also noticed some automated test failure notifications to my email that I didn't quite follow. This is the first PR I am making here, so I hope you can point me in the right direction as to what needs to be done if anything is missing. [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://discuss.huggingface.co/t/distillation-create-student-model-from-a-different-base-model-than-teacher/1501/8 ## Who can review? examples/seq2seq: @sshleifer
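A hedged sketch of only the new `--student` path described in point 2 above; the layer-copying path and the Lightning wiring are left out, and the names are illustrative rather than the merged code.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

student_name = "t5-small"   # value passed via the new --student argument
teacher_name = "t5-large"   # value passed via --teacher (ideally a fine-tuned checkpoint)

student = AutoModelForSeq2SeqLM.from_pretrained(student_name)
teacher = AutoModelForSeq2SeqLM.from_pretrained(teacher_name).eval()
tokenizer = AutoTokenizer.from_pretrained(student_name)

# Because the two architectures can have different hidden sizes, only the logits
# (plus the usual cross-entropy on labels) are distilled; the hidden-state loss is skipped.
```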
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8376/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8376", "html_url": "https://github.com/huggingface/transformers/pull/8376", "diff_url": "https://github.com/huggingface/transformers/pull/8376.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8376.patch", "merged_at": 1605135525000 }
https://api.github.com/repos/huggingface/transformers/issues/8375
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8375/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8375/comments
https://api.github.com/repos/huggingface/transformers/issues/8375/events
https://github.com/huggingface/transformers/issues/8375
738,047,685
MDU6SXNzdWU3MzgwNDc2ODU=
8,375
RobertaTokenizerFast is around 10 times slower than BertTokenizerFast #510
{ "login": "napsternxg", "id": 112678, "node_id": "MDQ6VXNlcjExMjY3OA==", "avatar_url": "https://avatars.githubusercontent.com/u/112678?v=4", "gravatar_id": "", "url": "https://api.github.com/users/napsternxg", "html_url": "https://github.com/napsternxg", "followers_url": "https://api.github.com/users/napsternxg/followers", "following_url": "https://api.github.com/users/napsternxg/following{/other_user}", "gists_url": "https://api.github.com/users/napsternxg/gists{/gist_id}", "starred_url": "https://api.github.com/users/napsternxg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/napsternxg/subscriptions", "organizations_url": "https://api.github.com/users/napsternxg/orgs", "repos_url": "https://api.github.com/users/napsternxg/repos", "events_url": "https://api.github.com/users/napsternxg/events{/privacy}", "received_events_url": "https://api.github.com/users/napsternxg/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Maybe related to https://github.com/huggingface/transformers/issues/6962", "Both tokenizers are very unrelated, so I don't think they can be directly compared like this.", "@LysandreJik can you elaborate on this unrelatedness? \r\nI think both are implemented in Rust and I assume the tokenizers use same algorithm for tokenization part? Maybe I am missing a detail here. ", "BERT's tokenizer uses WordPiece while RoBERTa's tokenizer is a byte-level BPE. They're not the same algorithm. You can have more information [here](https://huggingface.co/docs/tokenizers/python/latest/components.html#models).", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Thanks @LysandreJik for the explanation. I was not aware of this difference. " ]
1,604
1,618
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-5.4.38-t2.el7.x86_64-x86_64-with-centos-7.7.1908-Core - Python version: 3.7.9 - PyTorch version (GPU?): 1.4.0+cu100 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @mfuntowicz <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Also opened this issue on the tokenizers repo: https://github.com/huggingface/tokenizers/issues/510 `RobertaTokenizerFast` with 60k vocab size is around 50 times slower than the `BertTokenizerFast` for `transformers==3.3.1`. This is really slowing down the processing time for training the language model. Is there a suggested fix for this? Model I am using (Bert, XLNet ...): RobertaTokenizerFast The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python In [37]: from transformers import BertTokenizerFast, RobertaTokenizerFast ...: In [38]: bert_tokenizer = BertTokenizerFast.from_pretrained(bert_tokenizer_dir, max_len=512, truncation=True) ...: roberta_tokenizer = RobertaTokenizerFast.from_pretrained(roberta_tokenizer_dir, max_len=512, truncation=True) ...: In [39]: line = "I am trying out this code for testing tokenizers and it is super good. Huge victory. But the difference in speed between BERT tokenizer and Robert ...: a tokenizers is quite slow." 
In [40]: bert_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) Out[40]: {'input_ids': [2, 51, 1881, 21557, 3212, 2937, 3590, 1945, 40605, 53576, 23981, 1013, 1985, 2179, 1943, 3863, 3841, 20, 38092, 65353, 20, 3180, 1931, 48800, 1822, 12679, 26777, 18732, 53576, 23981, 1985, 20839, 1016, 53576, 23981, 1013, 1943, 66009, 16390, 20, 3], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} In [41]: roberta_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) Out[41]: {'input_ids': [0, 47, 979, 28120, 3145, 2924, 6628, 1046, 6747, 458, 618, 2063, 781, 719, 994, 1527, 891, 6074, 6464, 20, 550, 11852, 21228, 3085, 20, 16339, 898, 20420, 40977, 494, 24738, 34187, 444, 13841, 618, 2063, 27250, 994, 12086, 16449, 618, 2063, 781, 719, 891, 517, 1492, 23819, 20, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]} In [42]: %%timeit -n 10000 ...: tokens = roberta_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) ...: 10000 loops, best of 5: 9.09 ms per loop In [43]: %%timeit -n 10000 ...: tokens = bert_tokenizer.encode_plus(line, add_special_tokens=True, truncation=True) ...: 10000 loops, best of 5: 181 µs per loop In [44]: roberta_tokenizer.vocab_size Out[44]: 60000 In [45]: bert_tokenizer.vocab_size Out[45]: 100000 In [47]: import transformers In [48]: import tokenizers In [49]: transformers.__version__ Out[49]: '3.3.1' In [50]: tokenizers.__version__ Out[50]: '0.8.1.rc2' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I expect to not be much difference here. Why is this the case? <!-- A clear and concise description of what you would expect to happen. -->
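As a side note to the question above, the maintainers' explanation in the comments (WordPiece vs. byte-level BPE) can be checked directly by inspecting the Rust model behind each fast tokenizer; a small sketch, assuming a transformers version that exposes `backend_tokenizer`:

```python
from transformers import BertTokenizerFast, RobertaTokenizerFast

bert_tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
roberta_tok = RobertaTokenizerFast.from_pretrained("roberta-base")

# The two fast tokenizers wrap different sub-word models, so their speed is not directly comparable.
print(type(bert_tok.backend_tokenizer.model).__name__)     # e.g. WordPiece
print(type(roberta_tok.backend_tokenizer.model).__name__)  # e.g. BPE
```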
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8375/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8374
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8374/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8374/comments
https://api.github.com/repos/huggingface/transformers/issues/8374/events
https://github.com/huggingface/transformers/pull/8374
738,035,039
MDExOlB1bGxSZXF1ZXN0NTE2OTczMDM0
8,374
[wip] [fsmt] possible support of iwslt14 in fsmt
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You were right, it was an EN to EL model so the src-tgt were inversed, however I changed the args and I tried again. \r\nIt gave me an error: \r\n```\r\nRuntimeError: Error(s) in loading state_dict for FSMTForConditionalGeneration:\r\n size mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([12896, 512]) from checkpoint, the shape in current model is torch.Size([9936, 512]).\r\n```\r\nwhich I fixed by changing one of the ignore keys due to a typo probably, in line 251 (see comment below):\r\n```\r\n # remove unneeded keys\r\n ignore_keys = [\r\n \"model.model\",\r\n \"model.encoder.version\",\r\n \"model.decoder.version\",\r\n \"model.encoder_embed_tokens.weight\", \r\n \"model.decoder.embed_tokens.weight\",#here, the original was: model.decoder_embed_tokens.weight\r\n \"model.encoder.embed_positions._float_tensor\",\r\n \"model.decoder.embed_positions._float_tensor\",\r\n ]\r\n for k in ignore_keys:\r\n model_state_dict.pop(k, None)\r\n```\r\nAfter that, the conversion script concludes succesfully: \r\n```\r\nGenerating data/wmt16-el-en-dist/vocab-src.json\r\nGenerating data/wmt16-el-en-dist/vocab-tgt.json\r\nGenerating data/wmt16-el-en-dist/merges.txt\r\nGenerating data/wmt16-el-en-dist/config.json\r\nGenerating data/wmt16-el-en-dist/tokenizer_config.json\r\nGenerating data/wmt16-el-en-dist/pytorch_model.bin\r\nConversion is done!\r\n\r\nLast step is to upload the files to s3\r\ncd data\r\ntransformers-cli upload wmt16-el-en-dist\r\n\r\n```\r\nI guess the next thing is to try to load this model locally to test that it's working.", "Fantastic! Yes, load it up and let us know whether it works.\r\n\r\nIf you want ready scripts to adapt from, perhaps try https://github.com/stas00/porting/blob/master/transformers/fairseq-wmt19/scripts/fsmt-translate.py\r\nand if you convert the reversed model (hint: just swap src and tgt languages in the conversion script) you can even do a paraphrase:\r\nhttps://github.com/stas00/porting/blob/master/transformers/fairseq-wmt19/scripts/fsmt-paraphrase.py\r\n", "> which I fixed by changing one of the ignore keys due to a typo probably, in line 251 (see comment below):\r\n> remove unneeded keys\r\n> ignore_keys = [\r\n> \"model.decoder.embed_tokens.weight\",#here, the original was: model.decoder_embed_tokens.weight\r\n\r\nI don't think that this solution works. While you made the conversion script complete, your ported model now has random weights for `model.decoder.embed_tokens.weight` - you probably are going to see garbage output.\r\n\r\nThe error is:\r\n```\r\nsize mismatch for model.decoder.embed_tokens.weight: copying a param with shape torch.Size([9936, 512]) from checkpoint, the shape in current model is torch.Size([12896, 512]).\r\n```\r\n* source here is 12892 (el)\r\n* and target is 9932 (en) \r\n\r\nFor some reason it's creating a decoder with the size of the encoder dict.\r\n\r\nI think there might have been a bug introduced since I wrote and used this script. I will debug and get back to you.", "Hmm, it looks like `fairseq` has introduced some breaking changes - that's why the script wasn't working out of the box. The `args` in the checkpoint appears to be mostly empty, so none of the wmt19 models can be converted either. 
Will investigate.\r\n\r\nThis is the breaking change: https://github.com/pytorch/fairseq/commit/3b27ed7996b0315f471c795cf9b7dfcc18467cbe\r\n\r\nThey did away with the `args` object", "OK, I updated the conversion script to support the latest fairseq and it now converts your model out of the box w/o needing any changes, please use the version in this PR if it hasn't been merged yet. https://github.com/huggingface/transformers/pull/8377\r\n\r\nPlease let me know whether the results are satisfactory and you get a good translation out of it - note it uses some default hparams (see the script) so you can adjust those to your liking. \r\n\r\nOnce you're happy you can upload the model to s3 as explained here: https://huggingface.co/transformers/model_sharing.html", "Resolved in https://github.com/huggingface/transformers/pull/8377", "This is great @stas00, I just tried it locally and it works as intended. Many thanks!\r\n\r\nOne quick question, during training with fairseq, the tokenization also converted all letters to lower-case (to reduce the vocab I assume) so now in order to get correct translations the input text needs to be lowercase only. I can of course add a line to my script to do that automatically, but I was wondering how I can force the uploaded model to do that (so that anyone wanting to test it doesn't have to download it locally and add that additional line). Probably with a config argument...?\r\n\r\nI am uploading this to s3 soon, your help has been invaluable :+1: ", "Excellent. I'm glad to hear we sorted it out.\r\n\r\nSo have you validated that the translation works and the bleu score evals are satisfactory? You can do it easily with `transformers` using the examples here: https://github.com/huggingface/transformers/tree/master/scripts/fsmt (scripts starting with `eval_`).\r\n\r\n> One quick question, during training with fairseq, the tokenization also converted all letters to lower-case (to reduce the vocab I assume) so now in order to get correct translations the input text needs to be lowercase only. I can of course add a line to my script to do that automatically, but I was wondering how I can force the uploaded model to do that (so that anyone wanting to test it doesn't have to download it locally and add that additional line). Probably with a config argument...?\r\n\r\nIt's currently not supported as all the models I worked with didn't have that restriction. I will add this functionality to the `transformers` implementation of FSMT now that I know that this is still needed. I will let you know when this is done. Perhaps hold off on making the release to ensure that your model works out of box. You will also need to update your ported model's config when this is done.\r\n\r\nOut of curiosity, if you don't mind sharing, is there a special reason why you chose to train an older more restricted architecture and not one of the newer ones? Surely, losing the normal casing would be a hurdle for practical use.", "\r\n> So have you validated that the translation works and the bleu score evals are satisfactory? 
You can do it easily with `transformers` using the examples here: https://github.com/huggingface/transformers/tree/master/scripts/fsmt (scripts starting with `eval_`).\r\n\r\nI validated the bleu and chrF scores on the fairseq equivalent of the model (before converting it to huggingface) on the Tatoeba testset, but now that there are additional evaluation scripts I will try these as well!\r\n\r\n> It's currently not supported as all the models I worked with didn't have that restriction. I will add this functionality to the `transformers` implementation of FSMT now that I know that this is still needed. I will let you know when this is done. Perhaps hold off on making the release to ensure that your model works out of box. You will also need to update your ported model's config when this is done.\r\n\r\nThanks for this, sure I can wait if there's the option of adding that feature too!\r\n\r\n> Out of curiosity, if you don't mind sharing, is there a special reason why you chose to train an older more restricted architecture and not one of the newer ones? Surely, losing the normal casing would be a hurdle for practical use.\r\n\r\nTbh, I was just following a fairseq [guide](https://github.com/pytorch/fairseq/tree/master/examples/translation#iwslt14-german-to-english-transformer) that was suggesting this arch over the following possible choices:\r\n`Possible choices: transformer, transformer_iwslt_de_en, transformer_wmt_en_de, transformer_vaswani_wmt_en_de_big, transformer_vaswani_wmt_en_fr_big, transformer_wmt_en_de_big, transformer_wmt_en_de_big_t2t, multilingual_transformer, multilingual_transformer_iwslt_de_en, fconv, fconv_iwslt_de_en, fconv_wmt_en_ro, fconv_wmt_en_de, fconv_wmt_en_fr, nonautoregressive_transformer, nonautoregressive_transformer_wmt_en_de, nacrf_transformer, iterative_nonautoregressive_transformer, iterative_nonautoregressive_transformer_wmt_en_de, cmlm_transformer, cmlm_transformer_wmt_en_de, levenshtein_transformer, levenshtein_transformer_wmt_en_de, levenshtein_transformer_vaswani_wmt_en_de_big, levenshtein_transformer_wmt_en_de_big, insertion_transformer, bart_large, bart_base, mbart_large, mbart_base, mbart_base_wmt20, lstm, lstm_wiseman_iwslt_de_en, lstm_luong_wmt_en_de, transformer_lm, transformer_lm_big, transformer_lm_baevski_wiki103, transformer_lm_wiki103, transformer_lm_baevski_gbw, transformer_lm_gbw, transformer_lm_gpt, transformer_lm_gpt2_small, transformer_lm_gpt2_medium, transformer_lm_gpt2_big, transformer_align, transformer_wmt_en_de_big_align, hf_gpt2, hf_gpt2_medium, hf_gpt2_large, hf_gpt2_xl, transformer_from_pretrained_xlm, lightconv, lightconv_iwslt_de_en, lightconv_wmt_en_de, lightconv_wmt_en_de_big, lightconv_wmt_en_fr_big, lightconv_wmt_zh_en_big, lightconv_lm, lightconv_lm_gbw, fconv_self_att, fconv_self_att_wp, fconv_lm, fconv_lm_dauphin_wikitext103, fconv_lm_dauphin_gbw, lstm_lm, roberta, roberta_base, roberta_large, xlm, masked_lm, bert_base, bert_large, xlm_base, s2t_berard, s2t_berard_256_3_3, s2t_berard_512_3_2, s2t_berard_512_5_3, s2t_transformer, s2t_transformer_s, s2t_transformer_sp, s2t_transformer_m, s2t_transformer_mp, s2t_transformer_l, s2t_transformer_lp, wav2vec, wav2vec2, wav2vec_ctc, wav2vec_seq2seq, dummy_model, transformer_lm_megatron, transformer_lm_megatron_11b, transformer_iwslt_de_en_pipeline_parallel, transformer_wmt_en_de_big_pipeline_parallel, model_parallel_roberta, model_parallel_roberta_base, model_parallel_roberta_large`\r\n\r\nIf you could indicate a more recent architecture that has about the same number of 
parameters (I have a constraint on complexity as I am using a single GTX2080SUPER) I would be happy to re-train!\r\n\r\n", "> > So have you validated that the translation works and the bleu score evals are satisfactory? You can do it easily with `transformers` using the examples here: https://github.com/huggingface/transformers/tree/master/scripts/fsmt (scripts starting with `eval_`).\r\n> \r\n> I validated the bleu and chrF scores on the fairseq equivalent of the model (before converting it to huggingface) on the Tatoeba testset, but now that there are additional evaluation scripts I will try these as well!\r\n\r\nMy only concern here is that the forced lower-case which won't be the case with the references bleu scores are evaled against.\r\n\r\n> > Out of curiosity, if you don't mind sharing, is there a special reason why you chose to train an older more restricted architecture and not one of the newer ones? Surely, losing the normal casing would be a hurdle for practical use.\r\n> \r\n> Tbh, I was just following a fairseq [guide](https://github.com/pytorch/fairseq/tree/master/examples/translation#iwslt14-german-to-english-transformer) that was suggesting this arch over the following possible choices:\r\n> `Possible choices: transformer, transformer_iwslt_de_en, transformer_wmt_en_de, transformer_vaswani_wmt_en_de_big, [...]\r\n> \r\n> If you could indicate a more recent architecture that has about the same number of parameters (I have a constraint on complexity as I am using a single GTX2080SUPER) I would be happy to re-train!\r\n\r\nI re-read the guide and I'm not sure what you mean when you said: \"was suggesting this arch over the following possible choices\" - I can't find any recommendations to use this particular model over the dozens of the ones you listed. e.g. how did you know that it's a smaller model than some others?\r\n\r\nI'm gradually getting to know the fairseq models and have only dealt with wmt-variations of `transformer`. I suppose you can see all the variations defined here and below: https://github.com/pytorch/fairseq/blob/master/fairseq/models/transformer.py#L985\r\nSo primarily these appear to differ in the size and shape of the model.\r\n\r\nWhen you did the training, was there an option not to force lowercase input or did it come automatic with the `transformer_iwslt_de_en`? I don't see an option to toggle this on/off in `fairseq-train` command. And looking around the code I don't quite see a configurable option to do so.", "True, the forced lower-case may give slightly higher BLEU score.\r\n\r\n> I re-read the guide and I'm not sure what you mean when you said: \"was suggesting this arch over the following possible choices\" - I can't find any recommendations to use this particular model over the dozens of the ones you listed. e.g. 
how did you know that it's a smaller model than some others?\r\n\r\nBy \"suggesting\" I mean that I just used the pre-defined arch on the available script (see below):\r\n```\r\nCUDA_VISIBLE_DEVICES=0 fairseq-train \\\r\n data-bin/iwslt14.tokenized.de-en \\\r\n --arch transformer_iwslt_de_en --share-decoder-input-output-embed \\\r\n --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \\\r\n --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \\\r\n --dropout 0.3 --weight-decay 0.0001 \\\r\n --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \\\r\n --max-tokens 4096 \\\r\n --eval-bleu \\\r\n --eval-bleu-args '{\"beam\": 5, \"max_len_a\": 1.2, \"max_len_b\": 10}' \\\r\n --eval-bleu-detok moses \\\r\n --eval-bleu-remove-bpe \\\r\n --eval-bleu-print-samples \\\r\n --best-checkpoint-metric bleu --maximize-best-checkpoint-metric\r\n```\r\nI didn't know it would lead to a smaller model before digging it up a bit and discovering that e.g. there's a difference to the ffd hidden layer dimension. I experimented with some of them (not all) and since I am not an expert ofc I ended up on that one based on a.the fact that it actually worked, b. that the perplexity I was getting was getting lower quicker that other cases, and c.more importantly based on whether I would get an OOM after a while (most of the cases) :P. \r\nDo you think that there would be a significant gain from trying a newer architecture that you have in mind?\r\n\r\n> When you did the training, was there an option not to force lowercase input or did it come automatic with the transformer_iwslt_de_en? I don't see an option to toggle this on/off in fairseq-train command. And looking around the code I don't quite see a configurable option to do so.\r\n\r\nThe lowercase command comes at the data preparation script provided in the guide: https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-iwslt14.sh\r\n(line 11). It was not part of the fairseq training/preprocessing.\r\n ", "> If you could indicate a more recent architecture that has about the same number of parameters (I have a constraint on complexity as I am using a single GTX2080SUPER) I would be happy to re-train!\r\n\r\nI think most of the recent ones are quite much bigger, but one that we ported that may warrant your attention is the distilled variation: https://github.com/jungokasai/deep-shallow/\r\n(ported scripts https://github.com/huggingface/transformers/tree/master/scripts/fsmt - start with `convert-allenai-`, \r\nthe wmt model cards are at the end of the list here https://huggingface.co/allenai)", "> I didn't know it would lead to a smaller model before digging it up a bit and discovering that e.g. there's a difference to the ffd hidden layer dimension. I experimented with some of them (not all) and since I am not an expert ofc I ended up on that one based on a.the fact that it actually worked, b. that the perplexity I was getting was getting lower quicker that other cases, and c.more importantly based on whether I would get an OOM after a while (most of the cases) :P.\r\n> Do you think that there would be a significant gain from trying a newer architecture that you have in mind?\r\n\r\nI'm relatively new to this myself, so I haven't tried enough variations yet to make such recommendations. Perhaps asking at the forums stating your limitations would lead to some excellent recommendations - or perhaps what you have done is just fine - it all depends on your needs. 
Do check out the distilled approach I mentioned in the comment above.\r\n\r\n> The lowercase command comes at the data preparation script provided in the guide: https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-iwslt14.sh\r\n> (line 11). It was not part of the fairseq training/preprocessing.\r\n\r\nOh, I see, this is totally circumstantial - you just trained on lower-cased input so this is the world it knows. This makes total sense. Thank you for helping me understand this nuance.", "OK, I implemented the lowercase config, and wanted to automate the discovery of when this option should be pre-set, but the latter didn't work - my detector code discovered up-case letters - I looked at both vocabs you supplied and both have upcase letters in them - a lot of them in the el one and some in en-one (merges/code too).\r\n\r\nI tried this very simple heuristic:\r\n```\r\n # detect whether this is a do_lower_case situation, which can be derived by checking whether we\r\n # have at least one upcase letter in the source vocab\r\n do_lower_case = True\r\n for k in src_vocab.keys():\r\n if not k.islower():\r\n do_lower_case = False\r\n break\r\n```\r\n\r\nI suppose this has to be set manually then because you know you trained mainly on lowercase - but perhaps there is a bug somewhere on the fairseq side and should not have any upcase letters in either of the 3 files if it were to be properly lower-cased?", "The PR that adds lower-case support is here https://github.com/huggingface/transformers/pull/8389, but for the converter to work with the recent fairseq https://github.com/huggingface/transformers/pull/8377 is needed to be merged first, or you can do just this over 8389:\r\n```\r\ndiff --git a/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py b/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py\r\nindex 2cc42718..61ef9010 100755\r\n--- a/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py\r\n+++ b/src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py\r\n@@ -113,7 +113,7 @@ def convert_fsmt_checkpoint_to_pytorch(fsmt_checkpoint_path, pytorch_dump_folder\r\n fsmt_folder_path, checkpoint_file, data_name_or_path, archive_map=models, **kwargs\r\n )\r\n\r\n- args = dict(vars(chkpt[\"args\"]))\r\n+ args = vars(chkpt[\"args\"][\"model\"])\r\n```\r\nOr if you don't want to mess with these, we can wait until both are merged.\r\n\r\nIn either case before uploading to s3 you will need to manually set `\"do_lower_case\": true` in `tokenizer_config.json` of the converted model - since as I mentioned in the comment above there is no way of automatically detecting the need to preset `\"do_lower_case\": true` during conversion as all vocabs in your model have uppercase letters in them.\r\n", "> I think most of the recent ones are quite much bigger, but one that we ported that may warrant your attention is the distilled variation: https://github.com/jungokasai/deep-shallow/\r\n(ported scripts https://github.com/huggingface/transformers/tree/master/scripts/fsmt - start with convert-allenai-,\r\nthe wmt model cards are at the end of the list here https://huggingface.co/allenai)\r\n\r\nI will try this for sure, although I remember that the `--arch transformer` used in the script led to OOM in my machine.\r\n\r\n> Or if you don't want to mess with these, we can wait until both are merged.\r\n> In either case before uploading to s3 you will need to manually set \"do_lower_case\": true in tokenizer_config.json of the 
converted model - since as I mentioned in the comment above there is no way of automatically detecting the need to preset \"do_lower_case\": true during conversion as all vocabs in your model have uppercase letters in them.\r\n\r\nWell I couldn't wait, so I tried following your steps and it works perfectly! :D Kudos once again!\r\nNow, my only question is should I upload the EN2EL and EL2EN models or wait until the PR is merged? I guess that the transformers version that is currently loading all s3-uploaded models is not up-to-date yet, so it will actually miss the capital letters.\r\n", "> I will try this for sure, although I remember that the `--arch transformer` used in the script led to OOM in my machine.\r\n\r\nI'm not sure whether fairseq has some doc that compares the different arch configs, but this piece of code seems to be pretty clear on the differences: https://github.com/pytorch/fairseq/blob/master/fairseq/models/transformer.py#L985\r\n\r\nThe default `transformer` uses pretty big layers - so it requires a lot of gpu memory.\r\n\r\n> Well I couldn't wait, so I tried following your steps and it works perfectly! :D Kudos once again!\r\n\r\nFantastic!\r\n\r\nI suppose you're not concerned with the upcase letters in the dict/merge files of your pre-trained model. I'd have thought that fairseq pre-processor would lowercased all inputs. But if you think it's no problem, then all is good.\r\n\r\n> Now, my only question is should I upload the EN2EL and EL2EN models or wait until the PR is merged? I guess that the transformers version that is currently loading all s3-uploaded models is not up-to-date yet, so it will actually miss the capital letters.\r\n\r\nYou have to first wait till the lowercasing-PR merged - probably Mon or early next week, and then AFAIK [the online version](https://huggingface.co/models) doesn't get updated automatically - the models' code doesn't change often - so we will have to ask for this to happen. And once you see the demo working on the site, then it's in the clear to share with others.", "Thank you for the code showcasing the differences between models. I couldn't find a doc with that info.\r\n\r\n> I suppose you're not concerned with the upcase letters in the dict/merge files of your pre-trained model. I'd have thought that fairseq pre-processor would lowercased all inputs. But if you think it's no problem, then all is good.\r\n\r\nProbably the perl command included in the fairseq preparation script didn't catch all cases; I can't think of another explanation. In any case I will be modifying this script to re-train without lower-casing and with a bigger number of BPE tokens, just to see if I get a more convenient model that doesn't need the lower-case argument (and hopefully without losing much BLEU).\r\n\r\n> You have to first wait till the lowercasing-PR merged - probably Mon or early next week, and then AFAIK the online version doesn't get updated automatically - the models' code doesn't change often - so we will have to ask for this to happen. And once you see the demo working on the site, then it's in the clear to share with others.\r\n\r\nOK so I'm waiting for the merge, then upload and probably come back in this thread to request an update on the online version if possible. Thanks for the help @stas00 ! 
\r\n ", "FYI, the lower-casing PR has been merged, so please let me know whether you're waiting to re-train with mixed-casing or whether you want to upload the lower-case model and I will then ask to update the code on the models server.", "I already started the mixed-casing training and I was thinking I can upload all 4 of them (lower EN2EL, lower EL2EN, mixed EN2EL, mixed EL2EN) together. The mixed ones also have a bigger vocabulary (almost double) and BLEU scores are very similar to the older lower case ones.", "Great!\r\n\r\nI made the request to update the server - I will update when this is done.\r\n\r\nIt's good to know that there is not much difference with the bigger vocab. I'm curious to how the scores would have been different if your original model was truly lower-case (since it's not at the moment if you check the vocab). (this is just for my learning should you ever run this test)", "Hi @stas00, just pinging to check if the code on the model hub is updated.\r\nI also trained cased models, and I uploaded one to the hub already: https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased\r\n\r\nBut it returns an error when used online from the link above: \r\n```\r\nUnrecognized configuration class for this kind of AutoModel: AutoModelForCausalLM. Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig, XLMProphetNetConfig, ProphetNetConfig.\r\n```\r\nI also noticed that the tag directly above the field box is falsely assigned as \"text-generation\".\r\n", "Hi, it seems your model card is defined in markdown format not in Yaml: https://huggingface.co/lighteternal/SSE-TUC-mt-en-el-cased leading to incorrect pipeline detection (hence the error you are seeing). Can you try setting the pieline correctly ? \r\n\r\nhttps://huggingface.co/docs#how-are-model-tags-determined\r\n\r\nLet us know if it works better", "Thanks @Narsil I updated the model card, but it doesn't seem to have made any difference yet. Is it possible that it takes some time to change pipeline?", "It appears that editing the README from the browser, doesn't work; after pulling, editing and pushing again it worked! Many thanks! :) ", "> Thanks @Narsil I updated the model card, but it doesn't seem to have made any difference yet. Is it possible that it takes some time to change pipeline?\r\n\r\nMaybe we forgot to hook a refresh on commits from the website, @Pierrci?", "Indeed, I pushed a fix that will be deployed soon, next time you edit a readme or another file on the website the changes will reflect instantly @lighteternal, thanks for reporting this!" ]
1,604
1,605
1,604
CONTRIBUTOR
null
This is an attempt to see whether fsmt can support older fairseq archs, based on this request: https://github.com/huggingface/transformers/issues/8233 Currently it's just changing the conversion code directly to see if it can be converted. ``` python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path ./fairseq-en-el-model/checkpoint_best.pt --pytorch_dump_folder_path ./model-data ``` @lighteternal, please have a look. I made the model configuration `args` based on the arch configuration. The only issue at the moment is sizes of encoder decoder - for some reason the vocab size seems to be reversed? ``` $wc -l fairseq-en-el-model/*txt 12892 fairseq-en-el-model/dict.el.txt 9932 fairseq-en-el-model/dict.en.txt ``` So this is English to Greek, correct? And not the other way around, correct? So the source is `9932` and target is `12892`-long. Your issue mentioned "Greek<->English", but this model must be one way - which is it? When I run the script: ``` size mismatch for model.encoder.embed_tokens.weight: copying a param with shape torch.Size([12896, 512]) from checkpoint, the shape in current model is torch.Size([9936, 512]). size mismatch for model.decoder.output_projection.weight: copying a param with shape torch.Size([9936, 512]) from checkpoint, the shape in current model is torch.Size([12896, 512]). ``` So it suggests that the encoder is `12896`- long, which should be the other way around, no? Unless it was trained on Greek to English. Well, you can also experiment with the conversion.
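A small, hedged sanity-check sketch (not part of the conversion script) for the size question above: fairseq typically prepends four special tokens (`<s>`, `<pad>`, `</s>`, `<unk>`) to each dictionary, which is where the `9936` and `12896` figures in the size-mismatch message come from.

```python
# Hedged helper: count entries in the fairseq dict files to see which side is
# source vs. target; with 4 special tokens added, 9932 -> 9936 and 12892 -> 12896,
# matching the shapes reported in the size-mismatch error above.
def dict_size(path):
    with open(path, encoding="utf-8") as f:
        return sum(1 for _ in f)

print(dict_size("fairseq-en-el-model/dict.en.txt"))  # 9932  (+4 specials = 9936)
print(dict_size("fairseq-en-el-model/dict.el.txt"))  # 12892 (+4 specials = 12896)
```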
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8374/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8374", "html_url": "https://github.com/huggingface/transformers/pull/8374", "diff_url": "https://github.com/huggingface/transformers/pull/8374.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8374.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8373
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8373/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8373/comments
https://api.github.com/repos/huggingface/transformers/issues/8373/events
https://github.com/huggingface/transformers/issues/8373
738,028,550
MDU6SXNzdWU3MzgwMjg1NTA=
8,373
removing runs folders
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi,\r\n\r\nYou can move the path of the \"**_runs/_**\" folder by adding `logging_dir` to your `TrainingArguments` such like this:\r\n\r\n```python\r\ntraining_args = TrainingArguments(\r\n output_dir=OUTPUT_PATH,\r\n overwrite_output_dir=True,\r\n logging_dir=OUTPUT_PATH + \"/logs\",\r\n ...\r\n)\r\n```" ]
1,604
1,619
1,610
NONE
null
Hi, do you know where in the code a "runs" folder is created when training with finetune_trainer.py? Could it be moved to output_dir? At the moment it is always created in the directory the code is run from rather than in output_dir. Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8373/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8373/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8372
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8372/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8372/comments
https://api.github.com/repos/huggingface/transformers/issues/8372/events
https://github.com/huggingface/transformers/issues/8372
738,024,380
MDU6SXNzdWU3MzgwMjQzODA=
8,372
Fine-tuning for QA: how to prepare custom dataset?
{ "login": "AndreyStille", "id": 65295663, "node_id": "MDQ6VXNlcjY1Mjk1NjYz", "avatar_url": "https://avatars.githubusercontent.com/u/65295663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreyStille", "html_url": "https://github.com/AndreyStille", "followers_url": "https://api.github.com/users/AndreyStille/followers", "following_url": "https://api.github.com/users/AndreyStille/following{/other_user}", "gists_url": "https://api.github.com/users/AndreyStille/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreyStille/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreyStille/subscriptions", "organizations_url": "https://api.github.com/users/AndreyStille/orgs", "repos_url": "https://api.github.com/users/AndreyStille/repos", "events_url": "https://api.github.com/users/AndreyStille/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreyStille/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is for **extractive** QA which means the answer needs to be in a continuous sequence in the context. The way the model works is by predicting the start and end indices of the answer. ", "> This is for **extractive** QA which means the answer needs to be in a continuous sequence in the context. The way the model works is by predicting the start and end indices of the answer.\r\n\r\nthanks for help \r\nI found what I needed" ]
1,604
1,604
1,604
NONE
null
I have a csv dataset with columns article,name and I want to fine-tune ALBERT, but I don't understand how to prepare my dataset to fit the fine-tuning tutorial (https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0) I can't understand why we add the start and end indices of the answer within the context to the 'answer' part of the dataset. Could I use just the answer text, without start and end? If the context doesn't contain all the answer words in a row, which indices should be selected then?
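A minimal, hedged sketch of the character-index bookkeeping the SQuAD-style recipe asks for, assuming the answer string appears verbatim in the context (the reply above notes that extractive QA needs a contiguous span); `context` and `answer_text` are hypothetical stand-ins for one CSV row.

```python
# Hedged sketch: derive the character-level start/end offsets expected by the
# SQuAD-style fine-tuning tutorial. Assumes the answer occurs verbatim in the
# context; if it does not, the example cannot be used for extractive QA.
context = "The Eiffel Tower was completed in 1889 and stands in Paris."
answer_text = "1889"

start_idx = context.find(answer_text)      # -1 means no contiguous match
if start_idx == -1:
    raise ValueError("Answer is not a contiguous span of the context.")
end_idx = start_idx + len(answer_text)

answer = {"text": answer_text, "answer_start": start_idx, "answer_end": end_idx}
print(answer)
```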
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8372/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8371
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8371/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8371/comments
https://api.github.com/repos/huggingface/transformers/issues/8371/events
https://github.com/huggingface/transformers/pull/8371
737,993,953
MDExOlB1bGxSZXF1ZXN0NTE2OTM4NDQx
8,371
[make] rewrite modified_py_files in python to be cross-platform
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nice!! Now I have with `make fixup`:\r\n\r\n```\r\n-n was unexpected.\r\nmake: *** [modified_only_fixup] Error 255\r\n```", "OK, so it sounds that perhaps I was trying to fix the wrong thing in first place. Let's see if we can find a different way to check whether the variable is empty that also works on windows.", "@jplu, please try now - I used `test -n` instead.", "same error", "ok, can you perhaps tell me which test would work on windows? \r\n\r\nWe are doing a very simple thing\r\n```\r\nif len(foo):\r\n do a\r\nelse:\r\n do b\r\n```\r\nwhat's a good test for us to use for `len(foo)` on windows?\r\n\r\nalso what's your `test -h` shows - perhaps no `-n` option there?\r\n\r\non unix:\r\n```\r\n\r\n -n STRING\r\n the length of STRING is nonzero\r\n\r\n STRING equivalent to -n STRING\r\n```\r\n\r\ndoes it work if you just remove \"-n\", i.e. `test -n` and just `test` are supposed to do the same.\r\n", "Doesn't matter which command I do with `test` it doesn't works. I don't think we will succeed to run any shell code, only the commands are available not the shell interpreter. Would it be possible to turn that into a Python subprocess that runs itself the `black`, `isort` and `flake8` command?", "It's possible, but surely there must be a way to do such a basic thing on windows.\r\n\r\nHow about this?\r\n\r\n```\r\n\t@if [ \"$(modified_py_files)\" != \"\" ]; then \\\r\n\t\techo \"Checking/fixing $(modified_py_files)\"; \\\r\n\telse \\\r\n\t\techo \"No library .py files were modified\"; \\\r\n\tfi\r\n```\r\n\r\nIf that doesn't work either, perhaps it doesn't like the spacing? remove space before `\\`?", "I get:\r\n```\r\n\"utils/check_repo.py\" was unexpected.\r\nmake: *** [modified_only_fixup] Erreur 255\r\n```\r\n\r\nWith:\r\n\r\n```\r\n$(eval modified_py_files := $(shell python utils/get_modified_files.py $(check_dirs)))\r\n@if [ \"$(modified_py_files)\" != \"\" ]; then \\\r\n\techo \"Checking/fixing $(modified_py_files)\"; \\\r\nelse \\\r\n\techo \"No library .py files were modified\"; \\\r\nfi\r\n```\r\n\r\n\"utils/check_repo.py\" is the value of `$(modified_py_files)`", "Thank you for checking that and overall your patience so far. \r\n\r\nThis back-n-force doesn't seem to lead to a productive way of sorting it out. I need to have direct access to experiment on.\r\n\r\nIs there a way I could get access to a windows box with a similar setup to yours - I won't suppose there is google colab like environment but on windows?\r\n\r\n@sgugger, is there someone on the hf team who groks windows shell/make and could help us out here? We need to be able to do such basic things in `Makefile` as checking if a variable is not \"\".", "If it can help, PowerShell is an opensource project https://github.com/PowerShell/PowerShell so you can install it (on Mac or Linux) to have a Windows interpreter like mine. But I don't know if you will have access to your tools from it, never tried this can of thing on my Linux.\r\n", "I don't think there is anyone that can help, maybe @mfuntowicz ? I personally only use WSL on Windows on my laptop, so the make commands work without any problem", "OK, so since the problematic on windows code is now hidden in the optional target, we can go ahead and merge this PR - as it no longer interferes with the normal functioning of Makefile for @jplu and other users on the same setup. And meanwhile we will look for a solution for this basic check. How does that sound, @jplu? 
\r\n\r\nOne possible solution would be to require windows users to use one of the approved set of shells, since as @sgugger says he doesn't have this issue on his windows box. Does it make sense?", "> If it can help, PowerShell is an opensource project https://github.com/PowerShell/PowerShell so you can install it (on Mac or Linux) to have a Windows interpreter like mine. But I don't know if you will have access to your tools from it, never tried this can of thing on my Linux.\r\n\r\nThank you, @jplu. I will give it a quick try. And if it takes too long I will defer to someone who is a windows user to sort this out, but it doesn't need to keep us from merging this fix.", "@jplu, fwiw I installed PowerShell and everything works there just fine:\r\n```\r\n/hf/transformers-fixup-python> pwsh\r\nPS /hf/transformers-fixup-python> make fixup\r\nChecking/fixing src/transformers/testing_utils.py utils/get_modified_files.py\r\nAll done! ✨ 🍰 ✨\r\n2 files left unchanged.\r\npython utils/check_copies.py\r\npython utils/check_dummies.py\r\npython utils/check_repo.py\r\n2020-11-06 21:02:02.519250: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0\r\nChecking all models are properly tested.\r\nChecking all models are properly documented.\r\npython utils/style_doc.py src/transformers docs/source --max_len 119\r\n```\r\n\r\nEither of these works:\r\n```\r\n @if test -n \"$(modified_py_files)\"; then \\\r\n @if [ \"$(modified_py_files)\" != \"\" ]; then \\\r\n```\r\n", "I confirm it is still not working on my side, there should be some differences between both version. I think in your case it is biased because you have access to a shell code interpreter. I checked a bit how to do a `if` in usual PowerShell and apparently it is not the same way https://docs.microsoft.com/en-gb/powershell/scripting/learn/deep-dives/everything-about-if?view=powershell-7 they don't if the `if` and `fi`. So on your side I think that `make` is still using your usual shell and not powershell.\r\n\r\n", "So we need a not shell-specific way.\r\n\r\nI don't know anything about the Windows world - but is the shell you use better/preferable to the one that @sgugger uses, which he says doesn't have this problem?\r\n\r\nUnrelated, could you please confirm that with this PR the rest of the `Makefile` works for you - that is you have no trouble running all targets, except `fixup`?\r\n\r\nAs I mentioned `fixup` is an optional target, as long as `style` and `quality` targets work. The former is much faster, but it's not required for the normal operation. And I trust that eventually we will find a way to sort it out for all setups.", "I'm using as well the one that uses @sgugger which is not a simple shell, but a real Linux distribution installed on Windows (mine for example is Ubuntu 20.04), this is called WSL, and this is mostly what I'm using. The advantage of this is that you can test your code on Windows + Linux in same time. 
But I don't think we can oblige the Windows users to install this.\r\n\r\nI can confirm that the \"problematic\" part is only the `if` now.\r\n\r\nThis issue is not blocking as you said, I'm not against dealing with it right now that the previous issue (the long egrep command) is solved with the new Python script.", "Thank you for confirming that this PR can be merged, @jplu and following up with the details on where it works.\r\n\r\nWould you kindly open a new issue and describe the specifics of this hurdle we couldn't overcome here, ideally listing all the ways we have tried and failed and most importantly how one could reproduce this situation by having the exact setup you discovered this problem on.\r\n\r\nThen we will seek out someone who knows how to solve this. \r\n\r\nI suppose this could be as simple as posting a tiny Makefile that's unrelated to `transformers` to SO, which demonstrates the issue and surely someone will have the answer.\r\n\r\nThank you!\r\n", "I can take care of it. I will open a PR to try to solve this in a pure Python way. It should not be too difficult.", "While you can do that - this is not the best way if we want to continue using `Makefile`. Otherwise just as well we do away with `Makefile` completely and write everything in python.\r\n\r\nIf we use `Makefile` we need to be able to use it properly, and because we have stumbled upon a tiny problem on some variation of Windows setup and don't have an immediate expertise to solve it is not a strong enough reason to warrant such change.", "Then what about using something like?\r\n```\r\nOS = `uname -a`\r\nifeq ($(OS), Win32)\r\n doing some Windows here\r\nelse\r\n doing some Linux here\r\n```", "Absolutely! I suggested that very early on - please make a version that works for your setup and then we can integrate it with the main version. I thought you didn't know how to make it work, hence I was seeking out variations that may work for you. Your suggestion makes total sense.\r\n\r\nPerhaps let's do it in a separate issue/PR so that we can move this one along in case some windows user is stuck needing this - but if you guys feel that there is no urgency then I will slap the [WIP] back on this PR and then we will go at it until it's perfect. Your call. I just don't want to waste time of reviewers who are probably very confused by now what to do here ;)", "Let's merge this one, and I will rebase on mine to take care of this 👍 ", "Looks to be a know bug in the make version of Windows http://gnu-make.2324884.n4.nabble.com/process-begin-CreateProcess-NULL-quot-quot-failed-td13819.html#a13820\r\n\r\nWe have to recompile Make to make it works, so I think we won't be able to fix this issue with a pure Makefile solution.", "Ok, I did install the last version of Make (4.3) and now it works perfectly. Problem solved 👍 ", "Amazing! Thank you for investigating this and getting to the root of it, @jplu! ", "I don't know whether it's worth documenting in the CONTRIBUTING.md as a possible caveat and giving the solution?", "Good idea! I will do a PR to add a section for Windows.", "Added here https://github.com/huggingface/transformers/pull/8436" ]
1,604
1,605
1,604
CONTRIBUTOR
null
As it can be seen here https://github.com/huggingface/transformers/pull/8359 some windows setups don't do well with the few lines of unix code that derive which files were modified, when called from `Makefile` so it looked like the best solution was to rewrite that logic in python which this PR does. **status update:** If I understand correctly this PR works on windows @sgugger uses, but not for @jplu, but it should be merged regardless as it removes the hurdle reported in https://github.com/huggingface/transformers/pull/8359, which was preventing @jplu from using any `make` targets. Since `fixup` is an optional target and isn't a show stopper I recommend we merge this and then move the remaining issue into a separate issue and then find a windows expert to help make this target work for all windows users. The rest of the post is the original request to validate that the PR works on windows. ---------------------------- @jplu - please kindly validate that this works on windows. test it with: 1. no changes to any files under src utils tests examples ``` make fixup ``` this one should not fail but not do much either, my output is: ``` No library .py files were modified python utils/check_copies.py python utils/check_dummies.py python utils/check_repo.py Checking all models are properly tested. Checking all models are properly documented. python utils/style_doc.py src/transformers docs/source --max_len 119 ``` 2. with 1 change in some .py file under 2 of the above dirs, by hand or via `echo` if it works on windows: ``` echo -n "#" >> tests/conftest.py echo -n "#" >> examples/README.md echo -n "#" >> src/transformers/testing_utils.py make fixup ``` this time I get: ``` Checking/fixing src/transformers/testing_utils.py tests/conftest.py reformatted /mnt/nvme1/code/huggingface/transformers-fixup-python/tests/conftest.py reformatted /mnt/nvme1/code/huggingface/transformers-fixup-python/src/transformers/testing_utils.py All done! ✨ 🍰 ✨ 2 files reformatted python utils/check_copies.py python utils/check_dummies.py python utils/check_repo.py Checking all models are properly tested. Checking all models are properly documented. python utils/style_doc.py src/transformers docs/source --max_len 119 ``` If all is good you should get the same output as mine, sans different base path. The key is to see: ``` Checking/fixing src/transformers/testing_utils.py tests/conftest.py ``` it may include `utils/get_modified_files.py` if you're checking out this PR as it's under git too, it doesn't matter. What we want to ensure is that out of 3 intentionally modified files 2 gets looked at as they are ending with `.py`. Just in case you don't know how to checkout a pr, you can for example use `gh` https://github.com/cli/cli: ``` gh pr checkout 8371 ``` but there are many other ways to do it. Thank you! @LysandreJik, @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8371/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8371", "html_url": "https://github.com/huggingface/transformers/pull/8371", "diff_url": "https://github.com/huggingface/transformers/pull/8371.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8371.patch", "merged_at": 1604771117000 }
https://api.github.com/repos/huggingface/transformers/issues/8370
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8370/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8370/comments
https://api.github.com/repos/huggingface/transformers/issues/8370/events
https://github.com/huggingface/transformers/issues/8370
737,985,448
MDU6SXNzdWU3Mzc5ODU0NDg=
8,370
BertTokenizerFast object has no attribute 'ids_to_tokens'
{ "login": "githubrandomuser2017", "id": 25097908, "node_id": "MDQ6VXNlcjI1MDk3OTA4", "avatar_url": "https://avatars.githubusercontent.com/u/25097908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/githubrandomuser2017", "html_url": "https://github.com/githubrandomuser2017", "followers_url": "https://api.github.com/users/githubrandomuser2017/followers", "following_url": "https://api.github.com/users/githubrandomuser2017/following{/other_user}", "gists_url": "https://api.github.com/users/githubrandomuser2017/gists{/gist_id}", "starred_url": "https://api.github.com/users/githubrandomuser2017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/githubrandomuser2017/subscriptions", "organizations_url": "https://api.github.com/users/githubrandomuser2017/orgs", "repos_url": "https://api.github.com/users/githubrandomuser2017/repos", "events_url": "https://api.github.com/users/githubrandomuser2017/events{/privacy}", "received_events_url": "https://api.github.com/users/githubrandomuser2017/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! `ids_to_token` is legacy code that should have its visibility set to private. We recommend using the `convert_ids_to_tokens` method, or the `get_vocab()` if you wish to take a look at the whole vocabulary. Both should be available on both the `BertTokenizer` and `BertTokenizerFast`." ]
1,604
1,604
1,604
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Google Colab - Python version: 3.6.9 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): - Using GPU in script?: No - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @mfuntowicz ## Information BertTokenizer has an attribute dictionary `ids_to_tokens`. However, BertTokenizerFast does not have this dictionary. I would expect them to have the same interfaces. Model I am using (Bert, XLNet ...): BERT, BertTokenizerFast The problem arises when using: * [ ] the official example scripts: (give details below) * [x ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') print(type(tokenizer).__name__) # BertTokenizer print(tokenizer.ids_to_tokens[0]) # [PAD] tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', use_fast=True) print(type(tokenizer).__name__) # BertTokenizerFast print(tokenizer.ids_to_tokens[0]) # 'BertTokenizerFast' object has no attribute 'ids_to_tokens' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior BertTokenizerFast should convert ID 0 to '[PAD]'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8370/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8369
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8369/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8369/comments
https://api.github.com/repos/huggingface/transformers/issues/8369/events
https://github.com/huggingface/transformers/pull/8369
737,971,662
MDExOlB1bGxSZXF1ZXN0NTE2OTIwMTky
8,369
Model card: T5-base fine-tuned on QuaRTz
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Should I add tags like`question answering`and so on? @julien-c ", "If it's intended to be consumed (through the inference API) as a QA model, you can add `pipeline_tag: question-answering`.\r\n\r\ncc @Narsil " ]
1,604
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8369/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8369", "html_url": "https://github.com/huggingface/transformers/pull/8369", "diff_url": "https://github.com/huggingface/transformers/pull/8369.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8369.patch", "merged_at": 1605724468000 }
https://api.github.com/repos/huggingface/transformers/issues/8368
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8368/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8368/comments
https://api.github.com/repos/huggingface/transformers/issues/8368/events
https://github.com/huggingface/transformers/pull/8368
737,964,432
MDExOlB1bGxSZXF1ZXN0NTE2OTE0MjYy
8,368
[TF generate] Cut encoder outputs to just last hidden states for now
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #8361 As shown in #8361, the current TF generate can throw errros if `config.output_attentions` is set to `True`. Because we cannot outptut `attentions` and `hidden_states` at the moment anyways in TF, this PR just cuts the encoder outputs to just the `last_hidden_states` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8368/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8368", "html_url": "https://github.com/huggingface/transformers/pull/8368", "diff_url": "https://github.com/huggingface/transformers/pull/8368.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8368.patch", "merged_at": 1604693005000 }
https://api.github.com/repos/huggingface/transformers/issues/8367
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8367/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8367/comments
https://api.github.com/repos/huggingface/transformers/issues/8367/events
https://github.com/huggingface/transformers/issues/8367
737,945,480
MDU6SXNzdWU3Mzc5NDU0ODA=
8,367
Which model to choose for seq2seq (generating headers for articles)?
{ "login": "AndreyStille", "id": 65295663, "node_id": "MDQ6VXNlcjY1Mjk1NjYz", "avatar_url": "https://avatars.githubusercontent.com/u/65295663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreyStille", "html_url": "https://github.com/AndreyStille", "followers_url": "https://api.github.com/users/AndreyStille/followers", "following_url": "https://api.github.com/users/AndreyStille/following{/other_user}", "gists_url": "https://api.github.com/users/AndreyStille/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreyStille/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreyStille/subscriptions", "organizations_url": "https://api.github.com/users/AndreyStille/orgs", "repos_url": "https://api.github.com/users/AndreyStille/repos", "events_url": "https://api.github.com/users/AndreyStille/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreyStille/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Questions like this are welcome on the forum, we try to keep the github issues for bugs/feature requests only :) Thanks!\r\n\r\nForum link: https://discuss.huggingface.co" ]
1,604
1,604
1,604
NONE
null
Hi, the task is generating headers for articles. I have a dataset with articles and their correct headers. I am thinking about using T5 for summarization or GPT-2 for seq2seq modeling, but I'm not sure which way to choose. Can you please give me advice on which direction is more relevant for this task?
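A hedged sketch of one of the two directions mentioned (framing headline generation as summarization with T5); this is illustrative only, not a recommendation from the thread, and `article_text` is a hypothetical placeholder.

```python
# Hedged sketch: headline generation treated as summarization with a
# pretrained T5 checkpoint. `article_text` is a hypothetical placeholder.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

article_text = "The city council approved the new transit plan after months of debate..."
inputs = tokenizer("summarize: " + article_text, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=16, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```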
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8367/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8366
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8366/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8366/comments
https://api.github.com/repos/huggingface/transformers/issues/8366/events
https://github.com/huggingface/transformers/pull/8366
737,938,463
MDExOlB1bGxSZXF1ZXN0NTE2ODkyODE5
8,366
Some added tests for TokenClassificationArgumentHandler
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8366/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8366", "html_url": "https://github.com/huggingface/transformers/pull/8366", "diff_url": "https://github.com/huggingface/transformers/pull/8366.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8366.patch", "merged_at": 1604686617000 }
https://api.github.com/repos/huggingface/transformers/issues/8365
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8365/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8365/comments
https://api.github.com/repos/huggingface/transformers/issues/8365/events
https://github.com/huggingface/transformers/issues/8365
737,886,441
MDU6SXNzdWU3Mzc4ODY0NDE=
8,365
TFTrainingArguments: ImportError: Method `device` requires PyTorch.
{ "login": "vlreinier", "id": 43336873, "node_id": "MDQ6VXNlcjQzMzM2ODcz", "avatar_url": "https://avatars.githubusercontent.com/u/43336873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vlreinier", "html_url": "https://github.com/vlreinier", "followers_url": "https://api.github.com/users/vlreinier/followers", "following_url": "https://api.github.com/users/vlreinier/following{/other_user}", "gists_url": "https://api.github.com/users/vlreinier/gists{/gist_id}", "starred_url": "https://api.github.com/users/vlreinier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vlreinier/subscriptions", "organizations_url": "https://api.github.com/users/vlreinier/orgs", "repos_url": "https://api.github.com/users/vlreinier/repos", "events_url": "https://api.github.com/users/vlreinier/events{/privacy}", "received_events_url": "https://api.github.com/users/vlreinier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed. It has been fixed on master and will be in the next release (early next week). In the meantime you can install from source to have the fix.", "Ah ok, thanks for the quick response 👍 " ]
1,604
1,604
1,604
NONE
null
```python
from transformers import TFTrainer, TFTrainingArguments

training_args = TFTrainingArguments(output_dir='models',
                                    num_train_epochs=1,
                                    per_device_train_batch_size=32,
                                    per_device_eval_batch_size=32,
                                    warmup_steps=500,
                                    weight_decay=0.05,
                                    logging_dir='logs'
                                    )
```

Above code produces the following error:

```
ImportError                               Traceback (most recent call last)
<ipython-input-3-017a860e7c76> in <module>
      5                                     warmup_steps=500,
      6                                     weight_decay=0.05,
----> 7                                     logging_dir='logs'
      8                                     )

<string> in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluate_during_training, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, warmup_steps, logging_dir, logging_first_step, logging_steps, save_steps, save_total_limit, no_cuda, seed, fp16, fp16_opt_level, local_rank, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, tpu_name, poly_power, xla)

/usr/local/lib/python3.6/dist-packages/transformers/training_args.py in __post_init__(self)
    352         self.run_name = self.output_dir
    353
--> 354         if self.device.type != "cuda" and self.fp16:
    355             raise ValueError("AMP (`--fp16`) can only be used on CUDA devices.")
    356

/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in wrapper(*args, **kwargs)
   1171             return func(*args, **kwargs)
   1172         else:
-> 1173             raise ImportError(f"Method `{func.__name__}` requires PyTorch.")
   1174
   1175         return wrapper

ImportError: Method `device` requires PyTorch.
```

Seems like a bug?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8365/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8364
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8364/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8364/comments
https://api.github.com/repos/huggingface/transformers/issues/8364/events
https://github.com/huggingface/transformers/pull/8364
737,866,122
MDExOlB1bGxSZXF1ZXN0NTE2ODMzMTI3
8,364
Patch token classification pipeline
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Merging now to ensure the bug is patched in v3.5.0. Will address your comments in a future PR @Narsil." ]
1,604
1,605
1,605
MEMBER
null
This PR patches issues found with the `TokenClassificationPipeline` since the merge of https://github.com/huggingface/transformers/pull/5970, namely not being able to load a slow tokenizer in the pipeline. It also sets `ignore_subwords` to `False` by default, as that option does not work with slow tokenizers. No release has been made since the introduction of that argument, so this is not a breaking change.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8364/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8364", "html_url": "https://github.com/huggingface/transformers/pull/8364", "diff_url": "https://github.com/huggingface/transformers/pull/8364.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8364.patch", "merged_at": 1605011374000 }
https://api.github.com/repos/huggingface/transformers/issues/8363
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8363/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8363/comments
https://api.github.com/repos/huggingface/transformers/issues/8363/events
https://github.com/huggingface/transformers/pull/8363
737,846,286
MDExOlB1bGxSZXF1ZXN0NTE2ODE2ODkz
8,363
Create README.md
{ "login": "yfpeng", "id": 2766437, "node_id": "MDQ6VXNlcjI3NjY0Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/2766437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yfpeng", "html_url": "https://github.com/yfpeng", "followers_url": "https://api.github.com/users/yfpeng/followers", "following_url": "https://api.github.com/users/yfpeng/following{/other_user}", "gists_url": "https://api.github.com/users/yfpeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/yfpeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yfpeng/subscriptions", "organizations_url": "https://api.github.com/users/yfpeng/orgs", "repos_url": "https://api.github.com/users/yfpeng/repos", "events_url": "https://api.github.com/users/yfpeng/events{/privacy}", "received_events_url": "https://api.github.com/users/yfpeng/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8363/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8363", "html_url": "https://github.com/huggingface/transformers/pull/8363", "diff_url": "https://github.com/huggingface/transformers/pull/8363.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8363.patch", "merged_at": 1605724384000 }
https://api.github.com/repos/huggingface/transformers/issues/8362
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8362/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8362/comments
https://api.github.com/repos/huggingface/transformers/issues/8362/events
https://github.com/huggingface/transformers/pull/8362
737,845,395
MDExOlB1bGxSZXF1ZXN0NTE2ODE2MTAz
8,362
Create README.md
{ "login": "yfpeng", "id": 2766437, "node_id": "MDQ6VXNlcjI3NjY0Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/2766437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yfpeng", "html_url": "https://github.com/yfpeng", "followers_url": "https://api.github.com/users/yfpeng/followers", "following_url": "https://api.github.com/users/yfpeng/following{/other_user}", "gists_url": "https://api.github.com/users/yfpeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/yfpeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yfpeng/subscriptions", "organizations_url": "https://api.github.com/users/yfpeng/orgs", "repos_url": "https://api.github.com/users/yfpeng/repos", "events_url": "https://api.github.com/users/yfpeng/events{/privacy}", "received_events_url": "https://api.github.com/users/yfpeng/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8362/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8362", "html_url": "https://github.com/huggingface/transformers/pull/8362", "diff_url": "https://github.com/huggingface/transformers/pull/8362.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8362.patch", "merged_at": 1605724635000 }
https://api.github.com/repos/huggingface/transformers/issues/8361
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8361/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8361/comments
https://api.github.com/repos/huggingface/transformers/issues/8361/events
https://github.com/huggingface/transformers/issues/8361
737,804,159
MDU6SXNzdWU3Mzc4MDQxNTk=
8,361
TF generate() function is incompatible with output_attention and output_hidden_states
{ "login": "sebastianGehrmann", "id": 2212304, "node_id": "MDQ6VXNlcjIyMTIzMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2212304?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sebastianGehrmann", "html_url": "https://github.com/sebastianGehrmann", "followers_url": "https://api.github.com/users/sebastianGehrmann/followers", "following_url": "https://api.github.com/users/sebastianGehrmann/following{/other_user}", "gists_url": "https://api.github.com/users/sebastianGehrmann/gists{/gist_id}", "starred_url": "https://api.github.com/users/sebastianGehrmann/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebastianGehrmann/subscriptions", "organizations_url": "https://api.github.com/users/sebastianGehrmann/orgs", "repos_url": "https://api.github.com/users/sebastianGehrmann/repos", "events_url": "https://api.github.com/users/sebastianGehrmann/events{/privacy}", "received_events_url": "https://api.github.com/users/sebastianGehrmann/received_events", "type": "User", "site_admin": false }
[ { "id": 1862634478, "node_id": "MDU6TGFiZWwxODYyNjM0NDc4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Should%20Fix", "name": "Should Fix", "color": "FF0000", "default": false, "description": "This has been identified as a bug and should be fixed." } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @sebastianGehrmann,\r\n\r\nThanks for the super in-detail error description! You're 100% correct. The PR attached below does an rather ugly fix for now (just cut the `encoder_outputs` to the last hidden states) since the `attentions` and `hidden_states` cannot be output anyways at the moment. \r\n\r\nWith the PT generate refactor: https://github.com/huggingface/transformers/pull/6949 it will now be pretty easy to add functionality to output `attentions` and `hidden_states` for generate in PT - so we should do this soon for PT. \r\n\r\nFor TF we first need to do the same `generate()` refactor and then we can implement this functionality as well. \r\n\r\nSorry that it takes such a long time :-/", "Thank you for the insanely quick fix @patrickvonplaten! No worries about the time, given that no one else has been bumping into this suggests that is is quite a niche requirement. \r\n\r\n", "Would this be resolved at any point? Thanks," ]
1,604
1,651
1,604
NONE
null
## Environment info - `transformers` version: 3.4.0 - Platform: Mac OS Catalina (10.15.6) - Python version: 3.6.8 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): 2.3.1 (no) - Using GPU in script?: No, but bug is persistent regardless of device. - Using distributed or parallel set-up in script?: No ### Who can help @sshleifer @TevenLeScao @patrickvonplaten ## Information The generate() function in modeling_tf_utils assumes that outputs from a model call have a static number of outputs. If either `output_attention` or `output_hidden_states` is set, the number of outputs is different, causing the function to fail. The fix should be pretty simple and only involve checking for the output size (or completely switching to dict/NamedTuple outputs from model modules since variable length returns are brittle :)). The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a TF model with one of the above mentioned flags set. 2. Call `.generate()` on the model. ```python import transformers model = transformers.TFT5ForConditionalGeneration.from_pretrained('t5-small', output_hidden_states=True, output_attentions=True) tokenizer = transformers.T5Tokenizer.from_pretrained('t5-small') input_ids = tokenizer.batch_encode_plus(['test 1', 'test 2', 'test 3'], return_tensors="tf", padding='longest') output_ids = model.generate(input_ids['input_ids'], attention_mask=input_ids['attention_mask']) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/generation_tf_utils.py", line 405, in generate use_cache=use_cache, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/generation_tf_utils.py", line 445, in _generate_no_beam_search outputs = self(**model_inputs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 985, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/modeling_tf_t5.py", line 1352, in call training=training, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 985, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/modeling_tf_t5.py", line 759, in call training=training, File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 985, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/transformers/modeling_tf_t5.py", line 450, in call assert len(past_key_value) == expected_num_past_key_values, error_message AssertionError: There should be 4 past states. 2 (past / key) for self attention.2 (past / key) for cross attention Got 3 past key / value states ``` ## Expected behavior This snippet should not crash and have the same behavior as the one below. Though one may argue that the `generate()` function in this case should also return states/attentions which would complicate things. 
However, even ignoring the flags when generating is better than crashing. ```python import transformers model = transformers.TFT5ForConditionalGeneration.from_pretrained('t5-small', output_hidden_states=False, output_attentions=False) tokenizer = transformers.T5Tokenizer.from_pretrained('t5-small') input_ids = tokenizer.batch_encode_plus(['test 1', 'test 2', 'test 3'], return_tensors="tf", padding='longest') output_ids = model.generate(input_ids['input_ids'], attention_mask=input_ids['attention_mask']) print(output_ids) tf.Tensor( [[ 0 2300 209 1 0] [ 0 2300 794 204 1] [ 0 2300 220 1 0]], shape=(3, 5), dtype=int32) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8361/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8361/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8360
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8360/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8360/comments
https://api.github.com/repos/huggingface/transformers/issues/8360/events
https://github.com/huggingface/transformers/pull/8360
737,745,813
MDExOlB1bGxSZXF1ZXN0NTE2NzMzNzgw
8,360
Update README.md
{ "login": "hassoudi", "id": 6810258, "node_id": "MDQ6VXNlcjY4MTAyNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6810258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hassoudi", "html_url": "https://github.com/hassoudi", "followers_url": "https://api.github.com/users/hassoudi/followers", "following_url": "https://api.github.com/users/hassoudi/following{/other_user}", "gists_url": "https://api.github.com/users/hassoudi/gists{/gist_id}", "starred_url": "https://api.github.com/users/hassoudi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hassoudi/subscriptions", "organizations_url": "https://api.github.com/users/hassoudi/orgs", "repos_url": "https://api.github.com/users/hassoudi/repos", "events_url": "https://api.github.com/users/hassoudi/events{/privacy}", "received_events_url": "https://api.github.com/users/hassoudi/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
Fix websitr address # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8360/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8360", "html_url": "https://github.com/huggingface/transformers/pull/8360", "diff_url": "https://github.com/huggingface/transformers/pull/8360.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8360.patch", "merged_at": 1604681147000 }
https://api.github.com/repos/huggingface/transformers/issues/8359
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8359/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8359/comments
https://api.github.com/repos/huggingface/transformers/issues/8359/events
https://github.com/huggingface/transformers/pull/8359
737,745,736
MDExOlB1bGxSZXF1ZXN0NTE2NzMzNzE1
8,359
Fix some tooling for windows
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do?

This PR fixes a small encoding issue in the `check_repo.py` script. I have also updated the `Makefile` to move the creation of the variables `fork_point_sha`, `joined_dirs` and `modified_py_files` into the `modified_only_fixup` target, because they are used only there. The Makefile raises an error in a Windows environment when creating the `modified_py_files` variable:

```
'tests' is not recognized as an internal or external command, operable program or batch file
```

ping @stas00 and @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8359/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8359", "html_url": "https://github.com/huggingface/transformers/pull/8359", "diff_url": "https://github.com/huggingface/transformers/pull/8359.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8359.patch", "merged_at": 1604926238000 }
https://api.github.com/repos/huggingface/transformers/issues/8358
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8358/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8358/comments
https://api.github.com/repos/huggingface/transformers/issues/8358/events
https://github.com/huggingface/transformers/pull/8358
737,703,169
MDExOlB1bGxSZXF1ZXN0NTE2Njk4MTI0
8,358
[WIP] Add performer in flax
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR is a first draft to add the Performer: https://github.com/google-research/google-research/tree/master/performer in Trax to the library. As stated here: https://github.com/google-research/google-research/tree/master/performer/fast_self_attention#performers-fast-self-attention-module all one has to do is to replace the `attention_fn` in `flax.linen.nn.SelfAttention` with the output of `make_fast_softmax_attention` which is done in this PR. The function seems to integrate seamlessly with our current `FlaxBertModel` architecture, but it is questionable if the model can easily be fine-tuned on existing `BertModel` weights and if our current `FlaxBertModel` architecture corresponds to the architecture used by the Performer team to train & evaluate their models as described in the Paper. One can verify that a forward pass of the Performer model works by running: ```python pytest tests/test_modeling_performer.py ``` Tagging @TevenLeScao @mfuntowicz @thomwolf for information. @mfuntowicz - it seems like the weights can be loaded and a forward pass does throw any errors -> does that seem to work for you after taking a first look? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 --> - TF code: https://github.com/google-research/google-research/tree/master/performer/fast_attention/tensorflow
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8358/reactions", "total_count": 8, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 6, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8358/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8358", "html_url": "https://github.com/huggingface/transformers/pull/8358", "diff_url": "https://github.com/huggingface/transformers/pull/8358.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8358.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/8357
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8357/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8357/comments
https://api.github.com/repos/huggingface/transformers/issues/8357/events
https://github.com/huggingface/transformers/issues/8357
737,674,669
MDU6SXNzdWU3Mzc2NzQ2Njk=
8,357
Cannot Load roberta tokenizer
{ "login": "Rogerspy", "id": 26625102, "node_id": "MDQ6VXNlcjI2NjI1MTAy", "avatar_url": "https://avatars.githubusercontent.com/u/26625102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rogerspy", "html_url": "https://github.com/Rogerspy", "followers_url": "https://api.github.com/users/Rogerspy/followers", "following_url": "https://api.github.com/users/Rogerspy/following{/other_user}", "gists_url": "https://api.github.com/users/Rogerspy/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rogerspy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rogerspy/subscriptions", "organizations_url": "https://api.github.com/users/Rogerspy/orgs", "repos_url": "https://api.github.com/users/Rogerspy/repos", "events_url": "https://api.github.com/users/Rogerspy/events{/privacy}", "received_events_url": "https://api.github.com/users/Rogerspy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems that the tokenizer in that directory is not a Byte level BPE, or at least not in a format the library can understand. A byte-level BPE like the RoBERTa tokenizer should have a merges files as well.\r\n\r\nCould you try to load it in a BERT tokenizer? The BERT tokenizer saves its vocabulary as `vocab.txt` so it is possible this is the tokenizer that was used.", "> It seems that the tokenizer in that directory is not a Byte level BPE, or at least not in a format the library can understand. A byte-level BPE like the RoBERTa tokenizer should have a merges files as well.\r\n> \r\n> Could you try to load it in a BERT tokenizer? The BERT tokenizer saves its vocabulary as `vocab.txt` so it is possible this is the tokenizer that was used.\r\n\r\nThank you! It works." ]
1,604
1,604
1,604
NONE
null
# ❓ Questions & Help My `transformers`'s version is `transformers 3.4.0` I download a Chinese RoBERTa model, where: ``` models ├── RoBERTa_zh_Large_Pytorch │ ├── config.json │ ├── pytorch_model.bin │ └── vocab.txt ``` I want to load the tokenizer from local: ```python tokenizer_zh = RobertaTokenizer.from_pretrained('./models/RoBERTa_zh_Large_Pytorch/') ``` but I get : ``` OSErrorTraceback (most recent call last) <ipython-input-4-bdedc9589347> in <module> ----> 1 tokenizer_zh = RobertaTokenizer.from_pretrained('./models/RoBERTa_zh_Large_Pytorch/') /anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1647 ", ".join(s3_models), 1648 pretrained_model_name_or_path, -> 1649 list(cls.vocab_files_names.values()), 1650 ) 1651 ) OSError: Model name './models/RoBERTa_zh_Large_Pytorch/' was not found in tokenizers model name list (roberta-base, roberta-large, roberta-large-mnli, distilroberta-base, roberta-base-openai-detector, roberta-large-openai-detector). We assumed './models/RoBERTa_zh_Large_Pytorch/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8357/timeline
completed
null
null
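Since the directory in the issue above only ships a `vocab.txt` (the WordPiece layout used by BERT-style tokenizers) rather than the `vocab.json`/`merges.txt` pair a byte-level BPE expects, the resolution suggested in the comments is to load it with `BertTokenizer` instead. A minimal sketch, assuming the same local path:

```python
from transformers import BertTokenizer

# vocab.txt is the file layout of a WordPiece (BERT-style) tokenizer, which is
# what Chinese "RoBERTa" checkpoints of this kind typically ship with.
tokenizer_zh = BertTokenizer.from_pretrained('./models/RoBERTa_zh_Large_Pytorch/')
print(tokenizer_zh.tokenize("今天天气很好"))
```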
https://api.github.com/repos/huggingface/transformers/issues/8356
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8356/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8356/comments
https://api.github.com/repos/huggingface/transformers/issues/8356/events
https://github.com/huggingface/transformers/issues/8356
737,644,036
MDU6SXNzdWU3Mzc2NDQwMzY=
8,356
assert tgt_line, f"empty tgt line for index {index}" with t5
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patil-suraj ", "I am guessing maybe this issue is making it slower on tpus? on en-ro which it does not happen it is faster", "@rabeehkarimimahabadi did you solve the issue? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
Hi, I am trying to run finetune_trainer with the command below and I get an assert error. Thanks for your help.

```bash
export TPU_IB_ADDRESS=10.160.244.2
start=`date +%s`
python xla_spawn.py --num_cores 8 \
    finetune_trainer.py \
    --tokenizer_name t5-small --model_name_or_path t5-small \
    --data_dir wmt_en_de \
    --output_dir /home/rabeeh/outputs/verify --overwrite_output_dir \
    --learning_rate=3e-4 \
    --warmup_steps 500 \
    --per_device_train_batch_size=128 --per_device_eval_batch_size=128 \
    --num_train_epochs=1 \
    --save_steps 500 --eval_steps 500 \
    --logging_steps 200 \
    --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --test_max_target_length 128 \
    --task translation --label_smoothing 0.1 \
    --freeze_encoder --freeze_embeds \
    --num_train_epochs=1 \
    --logging_first_step --logging_steps 200 \
    --do_train --do_eval --evaluate_during_training \
    --prediction_loss_only \
    "$@"
end=`date +%s`
runtime=$((end-start))
echo running time $runtime
```

Logs:

```
Exception in thread Thread-4:
Traceback (most recent call last):
  File "/root/anaconda3/envs/pytorch/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/root/anaconda3/envs/pytorch/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py", line 141, in _loader_worker
    _, data = next(data_iter)
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/home/rabeeh/examples/seq2seq/utils.py", line 246, in __getitem__
    assert tgt_line, f"empty tgt line for index {index}"
AssertionError: empty tgt line for index 4549523
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8356/timeline
completed
null
null
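The assertion above is raised by the dataset loader in `examples/seq2seq/utils.py`, which refuses empty target lines. Below is a hedged, illustrative way to locate (and optionally drop) such lines in the data directory before training; the directory and file names are assumptions based on the `train.source`/`train.target` convention used by that example, so adjust them to your own `--data_dir`.

```python
from pathlib import Path

data_dir = Path("wmt_en_de")  # assumed to match the --data_dir passed to finetune_trainer.py
src_lines = (data_dir / "train.source").read_text(encoding="utf-8").splitlines()
tgt_lines = (data_dir / "train.target").read_text(encoding="utf-8").splitlines()

# Report indices whose target side is empty -- these are what trigger the assertion.
empty = [i for i, line in enumerate(tgt_lines) if not line.strip()]
print(f"{len(empty)} empty target lines; first few indices: {empty[:5]}")

# One possible remedy: keep only pairs with a non-empty target, written to new files
# so the originals stay untouched.
keep = [i for i in range(min(len(src_lines), len(tgt_lines))) if tgt_lines[i].strip()]
(data_dir / "train.source.filtered").write_text(
    "\n".join(src_lines[i] for i in keep) + "\n", encoding="utf-8"
)
(data_dir / "train.target.filtered").write_text(
    "\n".join(tgt_lines[i] for i in keep) + "\n", encoding="utf-8"
)
```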
https://api.github.com/repos/huggingface/transformers/issues/8355
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8355/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8355/comments
https://api.github.com/repos/huggingface/transformers/issues/8355/events
https://github.com/huggingface/transformers/issues/8355
737,597,274
MDU6SXNzdWU3Mzc1OTcyNzQ=
8,355
finetune_trainer segfault
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This seems like a memory error. Can you try reducing the batch size and see if that works?", "thanks worked\n\nOn Fri, Nov 6, 2020 at 5:28 PM Lysandre Debut <[email protected]>\nwrote:\n\n> This seems like a memory error. Can you try reducing the batch size and\n> see if that works?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8355#issuecomment-723172299>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHHYNQGEEYF4TXY4LNQLSOQP3XANCNFSM4TMNPQEQ>\n> .\n>\n" ]
1,604
1,608
1,608
NONE
null
Hi, I want to run finetune_trainer on TPU:

```bash
export TPU_IB_ADDRESS=10.160.244.2
start=`date +%s`
python xla_spawn.py --num_cores 8 \
    finetune_trainer.py \
    --tokenizer_name t5-base --model_name_or_path t5-base \
    --data_dir /home/rabeeh/ruse/seq2seq/seq2seq/data \
    --output_dir /home/rabeeh/outputs/verify --overwrite_output_dir \
    --learning_rate=3e-4 \
    --warmup_steps 500 \
    --per_device_train_batch_size=256 --per_device_eval_batch_size=256 \
    --num_train_epochs=6 \
    --save_steps 500 --eval_steps 500 \
    --logging_steps 200 \
    --max_source_length 128 --max_target_length 128 --val_max_target_length 128 --test_max_target_length 128 \
    --task translation --label_smoothing 0.1 \
    "$@"
end=`date +%s`
runtime=$((end-start))
echo running time $runtime
```

Here are the logs, thanks for your help.

```
2020-11-06 09:05:12.060270: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76] [[XRTExecute_G12]]
2020-11-06 09:05:12.060282: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
2020-11-06 09:05:12.060292: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76]
2020-11-06 09:05:12.060303: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76] (1) Resource exhausted: Attempting to reserve 7.10G at the bottom of memory. That was not possible. There are 6.88G free, 0B reserved, and 6.88G reservable.
2020-11-06 09:05:12.060314: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76] [[{{node XRTExecute}}]]
2020-11-06 09:05:12.060325: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
2020-11-06 09:05:12.060336: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76]
2020-11-06 09:05:12.060347: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76] 0 successful operations.
2020-11-06 09:05:12.060358: E 1335 tensorflow/compiler/xla/xla_client/xla_util.cc:76] 0 derived errors ignored.
```
###### Arguments (ModelArguments(model_name_or_path='t5-base', config_name=None, tokenizer_name='t5-base', cache_dir=None, freeze_encoder=True, freeze_embeds=True), DataTrainingArguments(data_dir='data/wmt_en_de', task='translation', max_source_length=128, max_target_length=128, val_max_target_length=128, test_max_target_length=128, n_train=-1, n_val=-1, n_test=-1, src_lang=None, tgt_lang=None, eval_beams=None, ignore_pad_token_for_loss=True), Seq2SeqTrainingArguments(output_dir='/home/rabeeh/outputs/verify', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=True, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.0003, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=6.0, max_steps=-1, warmup_steps=500, logging_dir='runs/Nov06_09-03-01_eee96b6103a2', logging_first_step=True, logging_steps=200, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=8, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='/home/rabeeh/outputs/verify', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=False, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear')) ######### data_file data/wmt_en_de/train.source ######### data_file data/wmt_en_de/val.source {'loss': 10624.0, 'learning_rate': 6e-07, 'epoch': 0.00011253657438667567} Exception in device=TPU:0: Resource exhausted: From /job:tpu_worker/replica:0/task:0: 2 root error(s) found. (0) Resource exhausted: Attempting to reserve 7.10G at the bottom of memory. That was not possible. There are 6.88G free, 0B reserved, and 6.88G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[XRTExecute_G12]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. (1) Resource exhausted: Attempting to reserve 7.10G at the bottom of memory. That was not possible. There are 6.88G free, 0B reserved, and 6.88G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 0 successful operations. 0 derived errors ignored. 
Traceback (most recent call last): File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/root/home/rabeeh/seq2seq/finetune_trainer.py", line 330, in _mp_fn app.run(main, flags_parser=parse_flags) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py", line 300, in run _run_main(main, args) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main sys.exit(main(argv)) File "/root/home/rabeeh/seq2seq/finetune_trainer.py", line 279, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 757, in train for step, inputs in enumerate(epoch_iterator): File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py", line 31, in __next__ return self.next() File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/parallel_loader.py", line 37, in next xm.mark_step() File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 716, in mark_step wait=xu.getenv_as('XLA_SYNC_WAIT', bool, False)) RuntimeError: Resource exhausted: From /job:tpu_worker/replica:0/task:0: 2 root error(s) found. (0) Resource exhausted: Attempting to reserve 7.10G at the bottom of memory. That was not possible. There are 6.88G free, 0B reserved, and 6.88G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. [[XRTExecute_G12]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. (1) Resource exhausted: Attempting to reserve 7.10G at the bottom of memory. That was not possible. There are 6.88G free, 0B reserved, and 6.88G reservable. [[{{node XRTExecute}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. 0 successful operations. 0 derived errors ignored. Traceback (most recent call last): File "xla_spawn.py", line 77, in <module> app.run(main, flags_parser=parse_args) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py", line 300, in run _run_main(main, args) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/absl/app.py", line 251, in _run_main sys.exit(main(argv)) File "xla_spawn.py", line 71, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores, start_method='fork') File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn start_method=start_method) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 157, in start_processes while not context.join(): File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 112, in join (error_index, exitcode) Exception: process 0 terminated with exit code 17 running time 153
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8355/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8354
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8354/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8354/comments
https://api.github.com/repos/huggingface/transformers/issues/8354/events
https://github.com/huggingface/transformers/issues/8354
737,559,327
MDU6SXNzdWU3Mzc1NTkzMjc=
8,354
update from v3.0.0 to v3.4.0 got an error
{ "login": "zyh3826", "id": 31238754, "node_id": "MDQ6VXNlcjMxMjM4NzU0", "avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zyh3826", "html_url": "https://github.com/zyh3826", "followers_url": "https://api.github.com/users/zyh3826/followers", "following_url": "https://api.github.com/users/zyh3826/following{/other_user}", "gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}", "starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions", "organizations_url": "https://api.github.com/users/zyh3826/orgs", "repos_url": "https://api.github.com/users/zyh3826/repos", "events_url": "https://api.github.com/users/zyh3826/events{/privacy}", "received_events_url": "https://api.github.com/users/zyh3826/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! Yes, you should use the `from_pretrained` method to ensure compatibility within versions. Have you tried using that method instead? It should point to a directory containing your model file named as `pytorch_model.bin` and the configuration file of the model named as `config.json`.", "> Hi! Yes, you should use the `from_pretrained` method to ensure compatibility within versions. Have you tried using that method instead? It should point to a directory containing your model file named as `pytorch_model.bin` and the configuration file of the model named as `config.json`.\r\n\r\nHi LysandreJik, I think you may misunderstand my problem. At the beginning of this project, I used `from_pretrained`, and when the training ended I use the `torch.save` saving the model to a .pth file(Under transformers v3.0.0). But under transformers v3.4.0, I init the model use `model = BertModel.from_pretrained(BertConfig.from_pretrained(configpath))`, then `model.load_state_dict(torch.load(modelpath)` and get this error. But under transformers v3.0.0, it's ok.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: v3.4.0 - Platform: redhat - Python version: 3.8.3 64bit - PyTorch version (GPU?): 1.5.1 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: yesterday i used transformers v3.0.0 trained my model and save it use `torch.save(model.state_dict(), config.save_path)`, and today i update transformers to v3.4.0, when i use `torch.load` i got this error ![image](https://user-images.githubusercontent.com/31238754/98341317-a2be1600-2049-11eb-9672-749896b9c2a4.png) could anybody help me!!! thx <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
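A minimal sketch of the two loading paths discussed in this thread, assuming the directory and file names below (they are placeholders, not taken from the report): saving with `save_pretrained`/`from_pretrained` keeps checkpoints portable across versions, while a raw `state_dict` saved with `torch.save` can usually still be loaded with `strict=False`, which tolerates buffers (such as `position_ids`) that newer versions register.

```python
import torch
from transformers import BertConfig, BertModel

save_dir = "./my_finetuned_bert"   # placeholder path

# Preferred: after training, save with save_pretrained so newer versions can reload it.
# model.save_pretrained(save_dir)   # writes pytorch_model.bin + config.json
model = BertModel.from_pretrained(save_dir)

# Workaround for an existing raw state_dict saved with torch.save under an older version:
config = BertConfig.from_pretrained(save_dir)
model = BertModel(config)
state_dict = torch.load("model.pth", map_location="cpu")  # placeholder file name
# strict=False skips keys that only exist on one side (e.g. newly registered buffers).
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)
```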
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8354/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8353
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8353/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8353/comments
https://api.github.com/repos/huggingface/transformers/issues/8353/events
https://github.com/huggingface/transformers/issues/8353
737,503,653
MDU6SXNzdWU3Mzc1MDM2NTM=
8,353
[s2s trainer examples] a tight quality regression test
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "(FYI, this seems less important than the OOM issue you've found separately.)\r\n\r\n1) definitely S3. Otherwise there will be lots of pressure to have small data which could hurt learning objectives. I would try to satisfy learning objectives then make data smaller. If it gets to 1MB you could check it in, but that sounds hard!\r\n\r\n2) Yes or `gigaword` \r\nNote that these two commands are roughly equivalent:\r\n\r\n```bash\r\ns3cmd get s3://datasets.huggingface.co/summarization/gigaword.tgz\r\nwget https://cdn-datasets.huggingface.co/summarization/gigaword.tgz\r\n```\r\n\r\n3) As fast as possible while learning stuff. I would expect it to need `@slow` and take 1-5min. But if you can think of a smart way (e.g. high learning rate) to make this faster, it would be cool.\r\n\r\n4) \r\n```bash \r\npython make_student.py facebook/bart-large-xsum student_xsum_1_1 -e 1 -d 1`\r\n```\r\nand upload that to S3 \r\n+ I didn't do it for you so that you can get comfortable, this should add tokenizer files for you\r\n\r\n5) I would lean towards running it once, manually introducing regressions locally to see if they are caught.\r\n\r\n\r\n", "Thank you for the awesome answers\r\n\r\n> 5. I would lean towards running it once, manually introducing regressions locally to see if they are caught.\r\n\r\nOK, I guess we could make it as part of the debug code where on demand it'd pre-run below-threshold settings and ensures that the test fails and then moves on to what it really tests.\r\n", "As we won't have native amp working correctly until pytorch-1.8 is going to be out I'm closing this feature for now, as it'd be a pointless effort at this point." ]
1,604
1,607
1,607
CONTRIBUTOR
null
Continuing from https://github.com/huggingface/transformers/issues/6049 and https://github.com/huggingface/transformers/issues/8154, we are trying to build a test that will be good at regression detection in the finetune training code, quoting @sshleifer: > try to make a fast command line test/script for summarization that meets some reasonable learning requirements (using rouge score. Try to get rouge2 above 4?) > For example, if you set dropout=0.95 or freeze all parameters, or set the LR too low, or mess up the special tokens logic, the test should fail. I have some follow up questions: 1. should I make a subset dataset on s3? or should I attempt to build a tiny one and check it in git? 2. Speed-wise would xsum be the best candidate to derive the dataset from? 3. define fast? do you mean something that would not need `@slow` - but summarization is inherently a slow task so it doesn't go well with "fast" 4. if summarization it is, which model would you recommend that is light enough to fit with not `@slow` to be downloadable from CI 5. if you want a really tight test we will need to first test that the hparams in question do lead to a failure (which will be caught) when they are lower than whatever threshold we choose, which means that ideally we will need to re-run the training multiple times in the same test to make the test really good. Which again leads to the question of how fast is fast. Of course, one of the difficulties with tight params would be that different GPUs will produce different results, but I suppose we will tune it up as we go. Thank you! @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8353/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8352
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8352/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8352/comments
https://api.github.com/repos/huggingface/transformers/issues/8352/events
https://github.com/huggingface/transformers/issues/8352
737,425,470
MDU6SXNzdWU3Mzc0MjU0NzA=
8,352
I encountered a "'BertTokenizer' object is not callable" error when running code, but BertTokenizer doesn't exist in my code.
{ "login": "YeNiTing", "id": 50485311, "node_id": "MDQ6VXNlcjUwNDg1MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/50485311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YeNiTing", "html_url": "https://github.com/YeNiTing", "followers_url": "https://api.github.com/users/YeNiTing/followers", "following_url": "https://api.github.com/users/YeNiTing/following{/other_user}", "gists_url": "https://api.github.com/users/YeNiTing/gists{/gist_id}", "starred_url": "https://api.github.com/users/YeNiTing/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YeNiTing/subscriptions", "organizations_url": "https://api.github.com/users/YeNiTing/orgs", "repos_url": "https://api.github.com/users/YeNiTing/repos", "events_url": "https://api.github.com/users/YeNiTing/events{/privacy}", "received_events_url": "https://api.github.com/users/YeNiTing/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, this is probably due to a mismatch of your transformers version.\r\n\r\nPlease complete the template, otherwise we cannot help you.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
## Error details - `transformers` version:2.11.0 Traceback (most recent call last): File "main.py", line 108, in fire.Fire() File "D:\software\Anaconda3\envs\ynt2\lib\site-packages\fire\core.py", line 138, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "D:\software\Anaconda3\envs\ynt2\lib\site-packages\fire\core.py", line 468, in _Fire target=component.name) File "D:\software\Anaconda3\envs\ynt2\lib\site-packages\fire\core.py", line 672, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "main.py", line 69, in train output = model(x) File "D:\software\Anaconda3\envs\ynt2\lib\site-packages\torch\nn\modules\module.py", line 550, in call result = self.forward(*input, **kwargs) File "C:\Users\YNT\Desktop\重要文件夹\数据集\EMLo相关\pytorch_bert_elmo_example-master\model.py", line 40, in forward word_embs = self.get_bert(x) File "C:\Users\YNT\Desktop\重要文件夹\数据集\EMLo相关\pytorch_bert_elmo_example-master\model.py", line 90, in get_bert ids = self.tokenizer(sentence_lists, padding=True, return_tensors="pt") TypeError: 'BertTokenizer' object is not callable
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8352/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8351
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8351/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8351/comments
https://api.github.com/repos/huggingface/transformers/issues/8351/events
https://github.com/huggingface/transformers/pull/8351
737,410,638
MDExOlB1bGxSZXF1ZXN0NTE2NDU2NDI5
8,351
Fix typo
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8351/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8351", "html_url": "https://github.com/huggingface/transformers/pull/8351", "diff_url": "https://github.com/huggingface/transformers/pull/8351.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8351.patch", "merged_at": 1604679582000 }
https://api.github.com/repos/huggingface/transformers/issues/8350
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8350/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8350/comments
https://api.github.com/repos/huggingface/transformers/issues/8350/events
https://github.com/huggingface/transformers/issues/8350
737,398,748
MDU6SXNzdWU3MzczOTg3NDg=
8,350
torch 1.4.0 transformers segment fault
{ "login": "namenotexist", "id": 40298904, "node_id": "MDQ6VXNlcjQwMjk4OTA0", "avatar_url": "https://avatars.githubusercontent.com/u/40298904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/namenotexist", "html_url": "https://github.com/namenotexist", "followers_url": "https://api.github.com/users/namenotexist/followers", "following_url": "https://api.github.com/users/namenotexist/following{/other_user}", "gists_url": "https://api.github.com/users/namenotexist/gists{/gist_id}", "starred_url": "https://api.github.com/users/namenotexist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namenotexist/subscriptions", "organizations_url": "https://api.github.com/users/namenotexist/orgs", "repos_url": "https://api.github.com/users/namenotexist/repos", "events_url": "https://api.github.com/users/namenotexist/events{/privacy}", "received_events_url": "https://api.github.com/users/namenotexist/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, please complete the template, otherwise we cannot help you." ]
1,604
1,604
1,604
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> torch 1.4.0 transformers segment fault <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8350/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8349
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8349/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8349/comments
https://api.github.com/repos/huggingface/transformers/issues/8349/events
https://github.com/huggingface/transformers/issues/8349
737,390,210
MDU6SXNzdWU3MzczOTAyMTA=
8,349
apply_chunking_to_forward should only require inputs to match in the chunking dimension
{ "login": "pedrocolon93", "id": 5157240, "node_id": "MDQ6VXNlcjUxNTcyNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/5157240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pedrocolon93", "html_url": "https://github.com/pedrocolon93", "followers_url": "https://api.github.com/users/pedrocolon93/followers", "following_url": "https://api.github.com/users/pedrocolon93/following{/other_user}", "gists_url": "https://api.github.com/users/pedrocolon93/gists{/gist_id}", "starred_url": "https://api.github.com/users/pedrocolon93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pedrocolon93/subscriptions", "organizations_url": "https://api.github.com/users/pedrocolon93/orgs", "repos_url": "https://api.github.com/users/pedrocolon93/repos", "events_url": "https://api.github.com/users/pedrocolon93/events{/privacy}", "received_events_url": "https://api.github.com/users/pedrocolon93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great catch @pedrocolon93 ! Do you feel like opening a PR to fix it? :-) Feel free to tag me and I'll help you!", "Sounds good! I'll get it in and link it later today!\r\n" ]
1,604
1,605
1,605
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: All - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): XLNet The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Simply send in 2 tensor to the apply_chunking_to_forward that have the same batch length, same batch size, but different dimensionality and it will pop up with an exception <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Should only chunk if they are the same in the chunk dimension ``` assert len(input_tensors) > 0, "{} has to be a tuple/list of tensors".format(input_tensors) tensor_shape = input_tensors[0].shape assert all( input_tensor.shape == tensor_shape for input_tensor in input_tensors ), "All input tenors have to be of the same shape" ``` Should be: ``` tensor_shape = input_tensors[0].shape[chunk_dim] assert all( input_tensor.shape[chunk_dim] == tensor_shape for input_tensor in input_tensors ), "All input tenors have to be of the same shape" ``` In here if there are 2 input tensors with the shapes: [512,2,768] and [512,2,300] the method throws an exception when it should only chunk based on the chunk dimension (in this case 2).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8349/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8348
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8348/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8348/comments
https://api.github.com/repos/huggingface/transformers/issues/8348/events
https://github.com/huggingface/transformers/issues/8348
737,366,502
MDU6SXNzdWU3MzczNjY1MDI=
8,348
PEGASUS generation/decoding VERY Slow
{ "login": "muggin", "id": 4559861, "node_id": "MDQ6VXNlcjQ1NTk4NjE=", "avatar_url": "https://avatars.githubusercontent.com/u/4559861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muggin", "html_url": "https://github.com/muggin", "followers_url": "https://api.github.com/users/muggin/followers", "following_url": "https://api.github.com/users/muggin/following{/other_user}", "gists_url": "https://api.github.com/users/muggin/gists{/gist_id}", "starred_url": "https://api.github.com/users/muggin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muggin/subscriptions", "organizations_url": "https://api.github.com/users/muggin/orgs", "repos_url": "https://api.github.com/users/muggin/repos", "events_url": "https://api.github.com/users/muggin/events{/privacy}", "received_events_url": "https://api.github.com/users/muggin/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1841528858, "node_id": "MDU6TGFiZWwxODQxNTI4ODU4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Summarization", "name": "Summarization", "color": "b6f97f", "default": false, "description": "" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "this is expected behavior. pegasus has more layers and higher num_beams than BART.\ntry using num_beams=2 for both models, and or using a distilled model for faster inference. BART also works in fp16.", "I used greedy decoding when generating outputs, so the number of beams should not matter in this case.\r\nAlso, PEGASUS is 30% larger than BART, so why does it translate to at least 5x performance drop at inference?", "I can't investigate deeply without a code snippet, but a 5X difference in greedy decoding time for roughly the same number of output characters is larger than I would have expected.\r\n\r\nI'd try to verify if the finetuning is, in fact, a part of this problem or whether you're generation parameters/data can demonstrate the slowdown using public models, e.g.\r\n\r\nI just ran:\r\n\r\n```python\r\nfrom transformers import *\r\nimport time \r\ntorch_device = 'cuda'\r\nfor mname in ['facebook/bart-large-xsum', 'google/pegasus-xsum']:\r\n model = AutoModelForSeq2SeqLM.from_pretrained(mname).to(torch_device)\r\n tok = AutoTokenizer.from_pretrained(mname)\r\n batch = tok(['I am a small frog'], return_tensors='pt').to(torch_device)\r\n t0 = time.time()\r\n model.generate(**batch, min_length=0, max_length=10, num_beams=1)\r\n runtime = time.time() - t0\r\n print(f'{mname}: {runtime:.3f}')\r\n```\r\n\r\nand got\r\n```\r\nfacebook/bart-large-xsum: 0.089\r\ngoogle/pegasus-xsum: 0.116\r\n```\r\nwhich is a < 30% difference.", "In our case, the model is part of a larger system, so can't share the code I am using. \r\nHowever, I will create a minimal example that allows me to reproduce this issue (most likely by Monday).\r\n\r\nOne difference I already see is that in our code we work with long sequences that max out the model's limits both on the input and output side. I just started another batch of generation and on the exact same input data BART-large is projected to finish in 45 minutes, PEGASUS-large in 14 hours, both models running on A100 GPUs (one per model).\r\n\r\n\r\nEdit:\r\n@sshleifer running the script you provided gives the following (avg. across 5 runs):\r\n```\r\nfacebook/bart-large-xsum: 0.570\r\ngoogle/pegasus-xsum: 0.952\r\n```\r\n", "Cool.\r\n+ If you want smaller models you can checkout the distilbart/distill-pegasus variants [here](https://huggingface.co/models?search=sshleifer%2Fdistil) [Paper w Details](https://arxiv.org/abs/2010.13002)\r\n+ `bart-large/pegasus-large` probably can't generate very well (unless you've fine-tuned them)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
## Environment info - `transformers` version: 3.4.0 - Platform: Linux-4.19.112+-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.7.0+cu110 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @sshleifer ## Information Model I am using: *PEGASUS-large*, *PEGASUS-cnn_dailymail*, *PEGASUS-xsum* The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I compared the generation/decoding performance of BART and PEGASUS, both loaded via AutoModelForSeq2SeqLM and fine-tuned on a custom dataset for the same amount of time. Generation was done using basic greedy decoding. PEGASUS models are anywhere between 5-15x slower than BART. Fine-tuning speed was on-par for both models. ## To reproduce Steps to reproduce the behavior: 1. Load PEGASUS and BART from any of the mentioned checkpoints (using AutoModelForSeq2SeqLM) 2. Fine-tune the models 3. Decode using greedy decoding 4. Compare decoding performance with the other Seq2Seq model (BART-large) ## Expected behavior Decoding performance on-par with BART-large?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8348/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8347/comments
https://api.github.com/repos/huggingface/transformers/issues/8347/events
https://github.com/huggingface/transformers/issues/8347
737,342,833
MDU6SXNzdWU3MzczNDI4MzM=
8,347
TFTrainer stuck in evaluation
{ "login": "soufianeelalami", "id": 16280778, "node_id": "MDQ6VXNlcjE2MjgwNzc4", "avatar_url": "https://avatars.githubusercontent.com/u/16280778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/soufianeelalami", "html_url": "https://github.com/soufianeelalami", "followers_url": "https://api.github.com/users/soufianeelalami/followers", "following_url": "https://api.github.com/users/soufianeelalami/following{/other_user}", "gists_url": "https://api.github.com/users/soufianeelalami/gists{/gist_id}", "starred_url": "https://api.github.com/users/soufianeelalami/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/soufianeelalami/subscriptions", "organizations_url": "https://api.github.com/users/soufianeelalami/orgs", "repos_url": "https://api.github.com/users/soufianeelalami/repos", "events_url": "https://api.github.com/users/soufianeelalami/events{/privacy}", "received_events_url": "https://api.github.com/users/soufianeelalami/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Tagging @jplu as I'm not familiar with the code of TFTrainer.", "Hello!\r\n\r\nPlease provide a piece of code in order for us to be able to reproduce your issue.", "Since i am using a custom dataset, here are some stats about the input:\r\n\r\ntrain_dataset = tf.data.Dataset.from_tensor_slices((\r\n {'input_ids': masked_train_texts, 'attention_mask': tokenized_train_texts['attention_mask']},\r\n train_labels\r\n))\r\n\r\ntrain_dataset contains 16268 instance.\r\n\r\nval_dataset = tf.data.Dataset.from_tensor_slices((\r\n {'input_ids': masked_val_texts, 'attention_mask': tokenized_val_texts['attention_mask']},\r\n val_labels\r\n)) \r\n\r\nval_dataset contains 4067 instance.\r\n\r\nhere is only the part where i launch the trainer: \r\n\r\n```python\r\ntraining_args = TFTrainingArguments(\r\n output_dir='./results', # output directory\r\n num_train_epochs=4, # total number of training epochs\r\n per_device_train_batch_size=64, #64 batch size per device during training\r\n per_device_eval_batch_size=128, #128 batch size for evaluation\r\n learning_rate=1e-5,\r\n evaluate_during_training=True,\r\n prediction_loss_only=False,\r\n logging_dir='./logs', # directory for storing logs\r\n logging_steps=1,\r\n logging_first_step=True,\r\n eval_steps=110,\r\n save_steps=30,\r\n save_total_limit=2,\r\n)\r\n\r\n\r\nwith training_args.strategy.scope():\r\n model = TFCamembertForMaskedLM.from_pretrained(CAMEMBERT_MODEL)\r\n\r\ntrainer = TFTrainer(\r\n model=model, # the instantiated 🤗 Transformers model to be trained\r\n args=training_args, # training arguments, defined above\r\n train_dataset=train_dataset, # training dataset\r\n eval_dataset=val_dataset # evaluation dataset\r\n)\r\n\r\ntrainer.train()\r\nprint('******')\r\n```\r\n\r\nAfter debug the TFTrainer class, i see it enters an infinite loop because if ```prediction_loss_only=True``` the condition that stops the loop ```step==steps```", "Unfortunately without a colab or a piece of code that I can copy/paste to properly reproduce your issue I cannot really help you. Nevertheless few remarks:\r\n\r\n- Try the master version of the lib or at least the last released version (v3.4) in order to check if your problem has not been already solved.\r\n- Which Tensorflow version you are using?\r\n- I see in your example that you are using a masked LM model, the TF Trainer is not compliant with these models for now. The available tasks are only: Token Classification, Sequence Classification, Multiple Choice and Question Answering. The possibility to train a LM model from scratch will arrive in a future release.", "I will try to reproduce this issue with some synthetic data since i cannot share my own. The problem here comes from the code. I was able to fix the issue by setting the ```prediction_loss_only``` parameter to false or by moving the ```step==steps``` condition out of the big if condition. I dont understand why it was there in the first place.\r\n\r\n- I have checked the 3.4 version of the code and the issue is still there. It is purely an error in the code as i pointed out. I would be surprised if the code works for anyone using the same configuration as me.\r\n- I am using Tensorflow 2.3.\r\n- Yes i built my tools to transform the data for a MLM tasks by following the methods implemented in Torch (with data collators). \r\n\r\nGlobally, the TFTrainer lags very far behind the torch version functionality and quality of code wise. I am struggling with it since the beginning as it is now, whereas the torch version is much more solid and complete. 
\r\n\r\nThank for all the great work you are doing with the lib :)", "Ok, thanks, I will keep this issue open and further check this once I have a bit of time for this.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform: Darwin-17.7.0-x86_64-i386-64bit - Python version: 3.6.10 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: NO - Using distributed or parallel set-up in script?: YES ### Who can help Trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): TFCamembertModel and TFTrainer to fine tune model on MLM The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce Behavior of prediction_loop function causes evaluation to go in infinite loop if prediction_loss_only is set to True. The step==steps condition is never reached Steps to reproduce the behavior: Read code from TFTrainer below ```python if not prediction_loss_only: if isinstance(logits, tuple): logits = logits[0] if isinstance(labels, tuple): labels = labels[0] if self.args.n_replicas > 1: for val in logits.values: if preds is None: preds = val.numpy() else: preds = np.append(preds, val.numpy(), axis=0) for val in labels.values: if label_ids is None: label_ids = val.numpy() else: label_ids = np.append(label_ids, val.numpy(), axis=0) else: if preds is None: preds = logits.numpy() else: preds = np.append(preds, logits.numpy(), axis=0) if label_ids is None: label_ids = labels.numpy() else: label_ids = np.append(label_ids, labels.numpy(), axis=0) if step == steps: break ``` ## Expected behavior Once the number of steps reached, we should be able to get out of the for loop that draws batches
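A minimal sketch of the control-flow fix this report implies, assuming the rest of `prediction_loop` stays as quoted (`evaluation_step` below is a stand-in name for the real per-batch evaluation step, not an actual method): moving the `step == steps` check out of the `if not prediction_loss_only` block lets the loop terminate even when only the loss is tracked.

```python
for step, batch in enumerate(dataset, start=1):
    logits, labels = evaluation_step(batch)  # stand-in for the real distributed eval step

    if not prediction_loss_only:
        ...  # accumulate preds / label_ids exactly as in the snippet above

    # Now outside the prediction_loss_only branch, so evaluation stops after `steps`
    # batches regardless of whether predictions are being collected.
    if step == steps:
        break
```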
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8347/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8346/comments
https://api.github.com/repos/huggingface/transformers/issues/8346/events
https://github.com/huggingface/transformers/issues/8346
737,323,161
MDU6SXNzdWU3MzczMjMxNjE=
8,346
Adding kNN language modeling and Machine Translation
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "are you interested @stas00 ? this will be a tough one but cool!", "This definitely sounds interested, thank you for thinking of me, @sshleifer!\r\n\r\nBut I also want to make sure that since I will be shortly starting on a big project which could take time, if someone wants to work on this one by all means do that. \r\n\r\nSo I propose to assign me tentatively but if another developer/contributor wants to do it then they are welcome to assign themselves. Thank you!\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@LysandreJik, can we have some kind of way to tell the stale bot off for feature requests?\r\n\r\nSurely if an issue is a bug and it has become old and everybody lost interest in it, it's probably ok to close it automatically, but setting such a short life span for feature-request type of issue just calls for a wasted time to get the bot off one's back and then one is too busy to ping the issue and the feature request disappears. \r\n\r\nThank you!", "Looks like I never had the time to start on this one. I have unassigned myself for now as I honestly won't have time to work on it any time soon. And now it's open to others to work on.", "I think using this more efficient version of the KNN-LM might make a future KNNDecoder class more practical: https://github.com/jxhe/efficient-knnlm. Maybe it can be a flag, but anyways it does dimensionality reduction on the embeddings that are generated to reduce the size of the datastore.\r\n\r\nI'm seeing the following task to implementing this in the library:\r\n- [ ] The construction of the faiss index using HF `datasets` built in version and running the model over the examples with some given context length and grabbing the final hidden state for the embedding\r\n- [ ] Optionally reducing the embedding using PCA or UMAP\r\n- [ ] Get the embeddings of the input and performing the nearest neighbor look up, also using the HF `datasets` built in version\r\n- [ ] Interpolating the softmax predictions as done in the paper and using the new softmax to sample from with whatever other strategies are enabled.\r\n\r\nIs there any other steps anyone can think for implementing this?", "Hi @yjernite @sshleifer @stas00 @ncoop57 !\r\n\r\nI just released an implementation of [kNN-LM](https://arxiv.org/pdf/1911.00172.pdf) and [our new RetoMaton model (ICML'2022)](https://arxiv.org/pdf/2201.12431.pdf) based on the `transformers` library at [https://github.com/neulab/knn-transformers](https://github.com/neulab/knn-transformers) .\r\n\r\n**All** previous implementations that I know about of kNN-based models are:\r\n* Implemented in `fairseq`.\r\n* Forking/duplicating the entire library (`fairseq`) to implement their modification. This makes it very hard to work with, fetch new updates to the library, and incorporate into other projects.\r\n\r\nWhat I really like about my implementation is that I used hooks (using `layer.register_forward_hook`) to avoid forking the entire library, making the `KNNWrapper` standalone, and it can simply wrap any language model by:\r\n```python\r\nknn_wrapper = KNNWrapper(...) # or: RetomatonWrapper(...)\r\nknn_wrapper.break_into(model)\r\n```\r\nThat's it! 
The model now internally uses kNN-LM or RetoMaton.\r\nIn theory, this implementation should also make it very easy to reproduce the kNN-**MT** experiments, but I haven't tried yet.\r\n\r\nLet me know if anyone has questions or wants to implement this inside `transformers`.\r\n\r\nBest,\r\nUri" ]
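As a rough illustration of the hook mechanism described in that comment (a sketch of the idea only, not the knn-transformers code), the hidden state that feeds the LM head can be captured with PyTorch's `register_forward_hook` and used as a datastore key, without modifying or forking the model class:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

captured = {}

def save_hidden_state(module, inputs, output):
    # For GPT-2, the final LayerNorm's output is the representation fed to the LM head,
    # which is roughly what a kNN-LM datastore would key on.
    captured["hidden"] = output.detach()

hook = model.transformer.ln_f.register_forward_hook(save_hidden_state)

batch = tok("I am a small frog", return_tensors="pt")
with torch.no_grad():
    model(**batch)

print(captured["hidden"].shape)  # (1, seq_len, hidden_size) — candidate datastore keys
hook.remove()
```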
1,604
1,657
null
MEMBER
null
# 🌟 Adding the kNN-LM and kNN-MT models ## Model description The kNN [Language Model](https://arxiv.org/pdf/1911.00172.pdf) and [Machine Translation](https://arxiv.org/pdf/2010.00710.pdf) leverage additional data at test time by looking up examples in the data store that are similar to the test example and using them to inform the prediction. This leads to decreased perplexity for language modeling and increased BLEU for MT. Ideally, we'd have a general KNNDecoder class which can use any model in the library along with a [🤗datasets](https://github.com/huggingface/datasets) data store, similarly to what RAG does currently. ## Open source status * [x] the model implementation is available: for kNN-LM, an implementation using Fairseq and FAISS can be found [here](https://github.com/urvashik/knnlm) * [x] the model weights are available: the methods use pre-trained models which are already in the library, [GPT-2](https://huggingface.co/gpt2) and the [Facebook WMT'19 models](https://huggingface.co/facebook/wmt19-de-en) * [x] who are the authors: first author @urvashik
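As a sketch of what one step of such a `KNNDecoder` could look like with a 🤗datasets data store (the `key`/`next_token` column names and the helper below are assumptions for illustration, not an existing API; an L2 FAISS index is assumed), the retrieved neighbors are turned into a distribution and interpolated with the model's own softmax, following p = λ·p_kNN + (1−λ)·p_LM from the kNN-LM paper:

```python
import numpy as np
import torch

def knn_interpolated_probs(hidden_state, lm_log_probs, datastore, vocab_size,
                           k=16, lam=0.25, temperature=1.0):
    # hidden_state: 1-D tensor for the current decoding position.
    # datastore: a datasets.Dataset with a FAISS index added via
    # datastore.add_faiss_index(column="key"); each row stores the token that followed.
    scores, neighbors = datastore.get_nearest_examples(
        "key", hidden_state.cpu().numpy().astype(np.float32), k=k
    )
    # Turn (negative) L2 distances into a distribution over the retrieved next tokens.
    weights = torch.softmax(torch.tensor(-scores) / temperature, dim=-1)
    p_knn = torch.zeros(vocab_size)
    for w, tok_id in zip(weights, neighbors["next_token"]):
        p_knn[tok_id] += w
    # Interpolate with the model's own softmax.
    p_lm = lm_log_probs.exp()
    return lam * p_knn + (1.0 - lam) * p_lm
```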
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8346/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 7, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8346/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/8345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8345/comments
https://api.github.com/repos/huggingface/transformers/issues/8345/events
https://github.com/huggingface/transformers/issues/8345
737,315,240
MDU6SXNzdWU3MzczMTUyNDA=
8,345
Error in RAG finetuning script
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "It is related with the optimizer initialization in the finetune.py script. Seems like even in the lightning_base.py there is no initialization for the optimizer.", "@lhoestq \r\n\r\nany idea on this?\r\n\r\nI managed to work around by calling the optimizer initialization inside the train_dataloader function in finetune.py. ", "Well the optimizer/scheduler is already defined in examples/lightningbase.py in `BaseTransoformer.configure_optimizers`. Not sure why the train_dataloader() function in finetune.py tries to define the scheduler. This must have been a bad copy paste...\r\n\r\nI think we should remove those lines\r\nhttps://github.com/huggingface/transformers/blob/17b1fd804f2ade052e40505a695ec7c9996178a9/examples/rag/finetune.py#L326-L336\r\n\r\nI just tried to remove them and now I'm getting this other issue #7816 \r\nI'll fix this one as well and make a PR", "That what I was thinking since there is a specific def in lightningbase.py." ]
1,604
1,605
1,605
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Linux-4.18.0-147.5.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core - Python version: 3.6.8 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @patrickvonplaten, @lhoestq ## Information I am using the RAG fine-tuning script. During the fine-tuning process, it says **torch.nn.modules.module.ModuleAttributeError: 'GenerativeQAModule' object has no attribute 'opt'** The bug appears exactly at [line 332 of finetune.py](https://github.com/huggingface/transformers/blob/master/examples/rag/finetune.py#L332) ## To reproduce I have installed the transformers library from source, not from pip.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8345/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8344/comments
https://api.github.com/repos/huggingface/transformers/issues/8344/events
https://github.com/huggingface/transformers/issues/8344
737,296,485
MDU6SXNzdWU3MzcyOTY0ODU=
8,344
model parallelism for BART
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2627272588, "node_id": "MDU6TGFiZWwyNjI3MjcyNTg4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel", "name": "Model Parallel", "color": "8B66A5", "default": false, "description": "Model Parallelilsm Implementations" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "That sounds exciting - please assign it to me.\r\n\r\nOnce I finish with the tests I will study what you shared and ask follow up questions!", "Almost there: https://github.com/huggingface/transformers/pull/9384", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hi, \r\n@stas00 Is model parallelism already implemented for BART and/or BlenderBot models? I do not see any 'parallelize' method in the modeling_bart or modeling_blenderbot. \r\n\r\nThanks for comments.", "This line of work has been abandoned as it's highly inefficient. Please use DeeepSpeed which works with any model https://huggingface.co/docs/transformers/main/main_classes/deepspeed", "Thanks @stas00 for the pointer. Is there any example on using DeepSpeed with our own Trainer? I see [this](https://huggingface.co/docs/transformers/main/main_classes/deepspeed#deepspeed-non-trainer-integration) Sec. in the link you shared but that simply is not really helpful. Is it not clear to me what else I need to take care of besides instantiating `HfDeepSpeedConfig` before instantiating the model.", "When you don't use HF Trainer, you're on your own, as you're outside of the domain of HF Transformers. For non-HF-Trainer use we only provide a way for `from_pretrained` to load the model directly to multiple gpu via `zero.Init` and that's what the link you added points to.\r\n\r\nBasically you have to study the deepspeed documentation https://www.deepspeed.ai/ - and follow their documentation. ", "> When you don't use HF Trainer, you're on your own, as you're outside of the domain of HF Transformers. For non-HF-Trainer use we only provide a way for `from_pretrained` to load the model directly to multiple gpu via `zero.Init` and that's what the link you added points to.\r\n> \r\n> Basically you have to study the deepspeed documentation https://www.deepspeed.ai/ - and follow their documentation.\r\n\r\nThanks @stas00 \r\nSo, if I subclass and overrides the Trainer methods like `compute_loss` `training_step`, `evaluate` and etc. and follow the steps you take regarding deepspeed in those methods, should I be fine?", "as long as your subclass methods don't involve methods that do deepspeed integration it'd work out of the box, yes.\r\n\r\nif they do, make sure that you either copy the deepspeed integration code, or simply adjust the methods to do what you want. \r\n\r\nit should be easy to see if deepspeed integration code is involved by simply grep'ing for the string: `deepspeed`." ]
1,604
1,654
1,619
CONTRIBUTOR
null
For @stas00: High Level Goal: allow large Seq2Seq transformers, (many of which inherit from BART) to be run on/accross multiple GPUs with model-parallelism. + This is a prerequisite for [adding m2m100](https://github.com/huggingface/transformers/issues/8054), and can be done in the same or a separate PR.I prefer separate given all the boilerplate associated with new model additions. + This has been attempted for GPT-2, and is days-weeks away from merging: #7772 . + fairseq has a different scheme, as shown by the last 4 clargs in [this command](https://github.com/pytorch/fairseq/tree/master/examples/m2m_100#generation-for-the-12b-model) + That model requires more hardware than you have locally, which is another good reason to try a 2 GPU case first, then try to As such, the test you should try to get passing is roughly: ```python n = 20 # set so that model.cuda() OOMs on 1 GPU (in a few lines) cfg = BartConfig(encoder_layers=n, decoder_layers=n) model = BartForConditonalGeneration(cfg) # this device_map is taken from #7772, feel free to make your own signature, like model.cuda() model.generate(**batch) # should OOM here or the line before device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8], 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} model.parallelize(device_map) # i might call this model.split batch = tokenizer(['I am a small frog']) model.save_pretrained('parallelized_model') model.from_pretrained('parallelized_model') # should be parallelized model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory ``` ### Requirements - user can save_pretrained/load_pretrained without losing the partitioning of layers -> devices. - some forward/generate call that would `OOM` on a single GPU does not `OOM` after calling `.parallelize` - user can repartition by loading full model to cpu, calling `model.parallelize(device_map)` ### Brain Dump - You should read the whole document https://github.com/pytorch/fairseq/tree/master/examples/m2m_100#beyond-english-centric-multilingual-machine-translation before starting before starting - I would also read discussion, code for https://github.com/huggingface/transformers/pull/7772 - you have a lot of flexibility in naming/API and should feel empowered to make choices as you see fit. - To the extent possible, keep the PR small and avoid interfering existing single-gpu functionality. - You could add a fairscale dependency during your experiments/local dev, but it would be a battle to get `fairscale` added as a dependency. If that is a worthwhile battle, however, you should argue for it. - I suspect that this will take nearly as long as `FSMT`, but be much less code. What do you think?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8344/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8344/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8343/comments
https://api.github.com/repos/huggingface/transformers/issues/8343/events
https://github.com/huggingface/transformers/pull/8343
737,291,456
MDExOlB1bGxSZXF1ZXN0NTE2MzU5NDg1
8,343
[model_cards] Update Italian BERT models and introduce new Italian XX…
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,605
1,604
COLLABORATOR
null
Hi, this PR updates some of our @dbmdz model cards for Italian BERT models: * Add details about issues with vocab for BERT models * Introduce new repository for downstream evaluations (NER and PoS tagging) It also adds new model cards for our newly trained Italian XXL ELECTRA model (discriminator and generator part).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8343/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8343", "html_url": "https://github.com/huggingface/transformers/pull/8343", "diff_url": "https://github.com/huggingface/transformers/pull/8343.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8343.patch", "merged_at": 1604650624000 }
https://api.github.com/repos/huggingface/transformers/issues/8342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8342/comments
https://api.github.com/repos/huggingface/transformers/issues/8342/events
https://github.com/huggingface/transformers/issues/8342
737,274,047
MDU6SXNzdWU3MzcyNzQwNDc=
8,342
got an unexpected keyword argument 'early_stop_callback'
{ "login": "yxu1168", "id": 50936877, "node_id": "MDQ6VXNlcjUwOTM2ODc3", "avatar_url": "https://avatars.githubusercontent.com/u/50936877?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yxu1168", "html_url": "https://github.com/yxu1168", "followers_url": "https://api.github.com/users/yxu1168/followers", "following_url": "https://api.github.com/users/yxu1168/following{/other_user}", "gists_url": "https://api.github.com/users/yxu1168/gists{/gist_id}", "starred_url": "https://api.github.com/users/yxu1168/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yxu1168/subscriptions", "organizations_url": "https://api.github.com/users/yxu1168/orgs", "repos_url": "https://api.github.com/users/yxu1168/repos", "events_url": "https://api.github.com/users/yxu1168/events{/privacy}", "received_events_url": "https://api.github.com/users/yxu1168/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, we would need you to fill the issue template to have an idea of what's going on and how we can help.", "if you have `pytorch-lightning=1.0.4` and the code on `master` this shouldn't happen.", "Thank you Sam, it works!", "I have 1.8.6 and it still happens. " ]
1,604
1,674
1,604
NONE
null
__init__() got an unexpected keyword argument 'early_stop_callback' ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSTM: @stas00 examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8342/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8341/comments
https://api.github.com/repos/huggingface/transformers/issues/8341/events
https://github.com/huggingface/transformers/pull/8341
737,271,719
MDExOlB1bGxSZXF1ZXN0NTE2MzQzMDMw
8,341
[github CI] add a multi-gpu job for all example tests
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ah, thank you for clarifying the state of things, @LysandreJik \r\n\r\nSome possible solutions I can think of:\r\n\r\n1. fix the situation quickly by adding `@require_torch_non_multigpu` to any such failing tests and make a plan to port those.\r\n2. rename all good tests to include `_multigpu` and run the test suite with `-k multigpu` - but we already have one test that can run on either 1 gpu or many so it'd exclude it despite it being just fine (which makes it is the least favorite suggestion)\r\n3. create a file with a list of passing tests and feed it to `pytest $(cat examples/passing_multigpu_tests.txt)`, which we maintain as tests get ported and eventually remove it.\r\n\r\nI think solution (3) is the most sensible to me. (1) will potentially sweep things under the carpet. (2) isn't great because it doesn't cover all bases.\r\n\r\nI went ahead and created `examples/ported-multigpu-tests.txt` and changed the new job to run:\r\n```\r\npython -m pytest -n 1 --dist=loadfile -s --make-reports=examples_torch_multiple_gpu $(tr '\\n' ' ' < examples/ported-multigpu-tests.txt)\r\n```\r\nI used a little `tr` to make `ported-multigpu-tests.txt` easy to maintain: \r\n* one entry per line - or multiple if desired\r\n* it can take whole test files or specific tests\r\n\r\nWe also have one distributed test under `tests` - but it automatically works with either number of gpus.\r\n\r\nMoreover, we can actually run tests from both tests suites at once - i.e. in the same `pytest` session.\r\n", "> Regarding your question concerning the `source .env/bin/activate` statement, it is actually necessary to do it at all steps. It got me by surprise in the slow TF test suite where I had forgotten it and it used the system interpreter instead.\r\n\r\nThank you for clarifying that! I took the liberty to add a note in `.github/workflows/self-scheduled.yml` to why this is done, to save time in the future others asking the same question.\r\n\r\nPerhaps there is a way to make it persistent across steps. A quick look landed: https://github.com/actions/virtual-environments - that is you need to use their venv and then it's persistent across steps I think.\r\n\r\nI wish the CIs were using the same API, as currently each one re-invents the wheel. We need huggingface `ci` interface like `transformers` that supports any major CI using the same API ;)", "Another small nit I'd like to suggest: the ci yml files use `multiple_gpu` and `multi_gpu`, tests use `multigpu` in one word - let's pick one of them and use it consistently?\r\n", "I would definitely be in favor of adding the `require_torch_non_multigpu` decorator to these tests, as much as I would be in favor of adding a `require_torch` to all current examples that require torch (which is all of them, I believe). Right now we can't run the TF CI on examples because it fails on all examples, and while we have no immediate need to do so as there are no examples to test in TF, this is bound to change.\r\n\r\nI think the absolute best approach would be to make those tests robust to multi-gpu, as most scripts can handle it but the tests are not designed that way. However, that would be a considerable time investiment for limited return, so I'm fine with the decorator option. I don't feel like that would be to sweep things under the carpet, so I may have misunderstood what you meant, could you elaborate on that?\r\n\r\nAh, I didn't know that you could leverage the GA virtual environment to keep it persistent. 
I currently don't have the bandwidth to look into it, but if you do, please do! Otherwise we can do this once things settle down.\r\n\r\nI have no opinion regarding `multigpu` vs `multi_gpu`, but consistency would be most welcome. Maybe `multigpu` is a bit harder to read, so let's go the `multi_gpu` route if you agree on that?", "> I would definitely be in favor of adding the `require_torch_non_multigpu` decorator to these tests, as much as I would be in favor of adding a `require_torch` to all current examples that require torch (which is all of them, I believe). \r\n\r\nExcellent! \r\n\r\nA possible next action:\r\n1. add `require_torch` to all current examples tests\r\n2. add `require_torch_non_multigpu` to unported/untested under multi-gpu examples tests\r\n3. make the multigpu CI job run on `examples` w/o specifying test names\r\n\r\n> I think the absolute best approach would be to make those tests robust to multi-gpu, as most scripts can handle it but the tests are not designed that way. However, that would be a considerable time investiment for limited return, so I'm fine with the decorator option. I don't feel like that would be to sweep things under the carpet, so I may have misunderstood what you meant, could you elaborate on that?\r\n\r\nWhen I used the idiom \"sweeping under the carpet\", my concern with `require_torch_non_multigpu` was that it was actually added to decorate tests that by design shouldn't run on multi-gpu, but here we use it as a band-aid, and then we won't be able to quickly tell which tests are flagged so because they are in line to be upgraded, and which are meant to not run on multigpu (can't be upgraded or if for whatever reason it needs to run on a single gpu). I hope this clarifies my concern.\r\n\r\nHow about we add a new decorator `require_torch_non_multigpu_but_fix_me` or something like that, so it's then clear this one needs to be upgraded. If I remember correctly we have a couple `require_torch_non_multigpu` that are meant to stay so. It'd just be defined as:\r\n```\r\nrequire_torch_non_multigpu_but_fix_me = require_torch_non_multigpu\r\n```\r\nso it's just a visual clue for us.\r\n\r\nOr it's possible that I'm overreacting and we somehow will keep track of what was ported and what not.\r\n\r\n> Ah, I didn't know that you could leverage the GA virtual environment to keep it persistent. I currently don't have the bandwidth to look into it, but if you do, please do! Otherwise we can do this once things settle down.\r\n\r\nI'm not 100% sure it is persistent as I haven't used it on GA yet. But one would think so. The current solution works for me.\r\n\r\n> I have no opinion regarding `multigpu` vs `multi_gpu`, but consistency would be most welcome. Maybe `multigpu` is a bit harder to read, so let's go the `multi_gpu` route if you agree on that?\r\n\r\n`multi_gpu` is perfect with me. Thank you!\r\n", "I went ahead and added `@require_torch_non_multigpu` to unported/untested under multi-gpu examples tests.\r\n\r\n```\r\nfind examples -name \"test_*\" -exec perl -pi -e 's|^ def test_| \\@require_torch_non_multigpu_but_fix_me\\n def test_|' {} \\;\r\nfind examples -name \"test_*\" -exec perl -pi -e 's|from transformers.testing_utils import |from transformers.testing_utils import require_torch_non_multigpu_but_fix_me, |' {} \\;\r\nmake fixup\r\n```\r\n\r\nI cleaned up one test file that doesn't really use gpu and removed the new decorator from there. 
Plus removed it from all the tests that have been ported.\r\n\r\nWhen the porting is done we can remove this decorator altogether.\r\n\r\nOn a multi-gpu machine we now have:\r\n```\r\npytest examples\r\n====================================================================== test session starts ======================================================================\r\nplatform linux -- Python 3.8.5, pytest-6.1.2, py-1.9.0, pluggy-0.13.1\r\nrootdir: /mnt/nvme1/code/huggingface/transformers-examples-multigpu-ci\r\nplugins: hydra-core-1.0.3, forked-1.3.0, xdist-2.1.0, instafail-0.4.2\r\ncollected 82 items\r\n\r\nexamples/test_examples.py ssssss [ 7%]\r\nexamples/bert-loses-patience/test_run_glue_with_pabee.py s [ 8%]\r\nexamples/deebert/test_glue_deebert.py s [ 9%]\r\nexamples/rag/test_distributed_retriever.py sss [ 13%]\r\nexamples/seq2seq/test_bash_script.py sss [ 17%]\r\nexamples/seq2seq/test_calculate_rouge.py ...... [ 24%]\r\nexamples/seq2seq/test_datasets.py ssssssssssssssss [ 43%]\r\nexamples/seq2seq/test_finetune_trainer.py s.s [ 47%]\r\nexamples/seq2seq/test_fsmt_bleu_score.py ssss [ 52%]\r\nexamples/seq2seq/test_make_student.py sssss [ 58%]\r\nexamples/seq2seq/test_seq2seq_examples.py ssssssssssssssssss [ 80%]\r\nexamples/seq2seq/test_seq2seq_examples_multi_gpu.py s. [ 82%]\r\nexamples/seq2seq/test_tatoeba_conversion.py ss [ 85%]\r\nexamples/seq2seq/bertabs/test_utils_summarization.py .......... [ 97%]\r\nexamples/token-classification/test_ner_examples.py ss [100%]\r\n\r\n========================================================== 18 passed, 64 skipped, 2 warnings in 32.04s =============\r\n```\r\n\r\nWe can add a new task to go over tests with `@require_torch_non_multigpu` - review those and either port them or remove the decorator, since no porting is needed there. And we can now run examples on multigpu.\r\n", "So let's discuss how we designate the various single/multiple/flexible gpu decorators:\r\n\r\nLet's focus just on pytorch tests for now\r\n\r\nWe have tests that require:\r\n- `>= 0` gpus: `@require_torch`\r\n- `>= 1` gpus: `@require_torch_gpu`\r\n- `>= 2` gpus: `@require_torch_multigpu`\r\n- `< 2` gpu: `@require_torch_non_multigpu`\r\n- `== 1` gpu only: we don't have such decorator yet, so perhaps we need to add `require_torch_single_gpu` as you suggested?\r\n- `== 0 ` gpu only: we don't have such decorator yet, so perhaps we need to add `require_torch_non_gpu`?\r\n\r\nPlease check that I didn't miss any edge cases. The last one I'm not sure if we have such tests - probably not.\r\n\r\nThe second to last case (`== 1` gpu only) is slightly ambiguous - should we not run the test if there is more than 1 gpu? or should the test be smart and set the env so that only 1 gpu is seen? it's easy to do if it's a `n_gpu` arg, so it can just force it to `n_gpu=1` - it'd be trickier if the tested system derives this from the env - then we have to tweak `CUDA_VISIBLE_DEVICES`. we have such test already - it just sets `n_gpu=1` if there is `>= 1` - so should such test run on multi-gpu machine or not?\r\n\r\nplus we want a distributed test that runs on a single gpu as well - @sshleifer wanted to see that it gives better results than non-distributed - so distributed shouldn't automatically mean multigpu either.\r\n\r\nAny other nuances?\r\n", "I'm not sure we have a need for the last two tests. If a test isn't working on a GPU even though it works on CPU, this is a bug. If a test isn't working on CPU even though it works on a GPU, that is a bug as well. 
Both of these bugs must be solved ASAP.\r\n\r\nI think the other four decorators are enough, and cover the whole test suite. What do you think?", "> If a test isn't working on CPU even though it works on a GPU, that is a bug as well. Both of these bugs must be solved ASAP.\r\n\r\nYour comment then addresses @sshleifer question:\r\n\r\n> should the decorator be called require_torch_single_gpu?\r\n\r\nSo you currently can't think of such situation, since it'd have to work on cpu too.\r\n\r\n---------------------\r\n\r\nI was thinking that perhaps there will be a test checking some feature that is only available on gpu or perhaps it's too slow to run on cpu, yet doesn't require `slow` on gpu...\r\n\r\nI guess if we encounter such situation we can add a new decorator.", "So the next stage could be: \r\n1) to make an issue that tracks a list of tests that need to be ported to multigpu\r\n2) invite those in charge of the different sections of `examples` to review the tests that they maintain, remove this fix_me decorator if it's not needed and otherwise add it to the issue in #1 (or perhaps list all the available tests right away and check off what still needs to be worked on)\r\n3) port the tests that need to be ported \r\n - mechanically it is just a few lines of code to add and update the issue - but it's the best done by those who wrote/maintain the tests since they know the nuances and will see that the tests are still valid \r\n - the main issue I noticed with porting to distributed is that the tests then need either more data or more iterations or a different learning rate. And to support 3+ gpus it really has to have some kind of multiplier so as n_gpus grows the number of data needs to grow proportionally. \r\n\r\nor we can just leave it as is and port as devs choose to/find need to..." ]
1,604
1,604
1,604
CONTRIBUTOR
null
As discussed in https://github.com/huggingface/transformers/pull/8315#issuecomment-722398166 we now have a growing set of tests that do distributed testing and require a multi-gpu setup, and which aren't being tested at the moment by any CI. This PR adds a new job: * [x] run all example tests on multi-gpu Since these are slow, I added it only to the scheduled job. Question: Is this really needed for every step? ``` source .env/bin/activate ``` It's in all test steps, even when there is multiple test suites run in sequence it's re-run. Won't it be enough to do it once in the env prep step for all steps in the same job? @sshleifer, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8341/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8341", "html_url": "https://github.com/huggingface/transformers/pull/8341", "diff_url": "https://github.com/huggingface/transformers/pull/8341.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8341.patch", "merged_at": 1604954859000 }
https://api.github.com/repos/huggingface/transformers/issues/8340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8340/comments
https://api.github.com/repos/huggingface/transformers/issues/8340/events
https://github.com/huggingface/transformers/pull/8340
737,270,966
MDExOlB1bGxSZXF1ZXN0NTE2MzQyNDI4
8,340
Add new token classification example
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Hi @sgugger , thanks for these new features :hugs: \r\n\r\nI have a some comments:\r\n\r\n* Only training and validation files can be passed via command line argument. The `do_predict` method is already supported, so a new argument for specifiying the test file would be great to have!\r\n* A new version of the `run.sh` script would also be a great feature to start fine-tuning just be executing a shell script. The current `run.sh` script should also get an \"_old\" suffix then :)" ]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do? This PR adds a new example script leveraging the dataset library, the offsets from the fast tokenizers of the tokenizers library and the Trainer API.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8340/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8340", "html_url": "https://github.com/huggingface/transformers/pull/8340", "diff_url": "https://github.com/huggingface/transformers/pull/8340.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8340.patch", "merged_at": 1604939996000 }
https://api.github.com/repos/huggingface/transformers/issues/8339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8339/comments
https://api.github.com/repos/huggingface/transformers/issues/8339/events
https://github.com/huggingface/transformers/issues/8339
737,244,516
MDU6SXNzdWU3MzcyNDQ1MTY=
8,339
finetune_trainer being really slow on TPU
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "could you give me some numbers how much time it could take for training WMT on TPUs so I could compare? I appreciate letting me know how to do the setting on TPUs @patil-suraj ", "AFAIK, there is a recent slowdown on TPU that @patil-suraj is investigating. \r\n\r\nI simple speedup would be to use `--model_name_or_path Helsinki-NLP/opus-mt-en-ro`\r\n", "Hi Sam. I need to use T5 and I really got a very slow performance. could you tell me also the batch_size_per_host for v2-8 machine? since I also got SEGFAULT when using batch_size of 32 even per host. I really need to make this work for a project and I really need help. I appreciate assisting me with setting things up on TPU. this is really very slow and I do not know what I can do about it. thanks ", "Hi Sam, \r\nIs there any chance to set a 10 minutes time with me, if I could ask some questions on setting up things on TPUs? I really do not have access to ask anyone questions on this and I need help to set things up. I greatly appreciate it thanks ", "could you tell me the type of instance I need to create? maybe I am making mistake with using cloud? thanks", "Sorry for interrupting.\r\n\r\nI'm sorry if I'm wrong, but I'm thinking that how about using `--tpu_num_cores` instead of `--num_cores`.\r\n\r\nI'm a complete newbie to TPU and I'm currently seeking a way to use of TPU in the Transformer Trainer and I found this issue.\r\nCurrently, the `TrainingArguments` class seems to use `tpu_num_cores` as an argument. You may be able to use `tpu_num_cores` instead of `num_cores`.\r\nI'm trying to figure out what `tpu_num_cores` has to do with `num_cores`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/training_args.py#L290\r\n\r\nI'm sorry if I'm misguided.", "Excuse me, you are using `xla_spawn.py`, and it passes its `--num_cores` as `--tpu_num_cores` to `finetune_trainer.py`.\r\nI apologize for commenting without fully understanding the situation.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Hi I need to run finetune_trainer for a large scale project on TPU, here is the command I use `python xla_spawn.py --num_cores 8 finetune_trainer.py --learning_rate=3e-5 --n_val 1000 --model_name_or_path t5-small --data_dir data/wmt_en_ro/ --output_dir /home/rabeeh/temp/ --overwrite_output_dir --tpu_num_cores=8 --max_source_length=64 --max_target_length=64 --per_device_train_batch_size=32 --per_device_eval_batch_size=32 --label_smoothing=0.1 --task="translation" --logging_steps=200 --eval_steps=500 --num_train_epochs=1 --save_steps=500 --max_source_length=128 --max_target_length=128 --val_max_target_length=128 --test_max_target_length=128 ` It is running slow for me, and I was wondering if I can make it faster, can I make it run distributedly on multiple host machines? I really appreciate providing me with command to run this code efficiently on TPUs on cluster. Is there any setting I am missing in my command? thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8339/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8338/comments
https://api.github.com/repos/huggingface/transformers/issues/8338/events
https://github.com/huggingface/transformers/pull/8338
737,229,703
MDExOlB1bGxSZXF1ZXN0NTE2MzA3OTIx
8,338
Update README.md
{ "login": "hassoudi", "id": 6810258, "node_id": "MDQ6VXNlcjY4MTAyNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6810258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hassoudi", "html_url": "https://github.com/hassoudi", "followers_url": "https://api.github.com/users/hassoudi/followers", "following_url": "https://api.github.com/users/hassoudi/following{/other_user}", "gists_url": "https://api.github.com/users/hassoudi/gists{/gist_id}", "starred_url": "https://api.github.com/users/hassoudi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hassoudi/subscriptions", "organizations_url": "https://api.github.com/users/hassoudi/orgs", "repos_url": "https://api.github.com/users/hassoudi/repos", "events_url": "https://api.github.com/users/hassoudi/events{/privacy}", "received_events_url": "https://api.github.com/users/hassoudi/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Plz merge", "Plz metge" ]
1,604
1,604
1,604
CONTRIBUTOR
null
fixes # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8338/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8338", "html_url": "https://github.com/huggingface/transformers/pull/8338", "diff_url": "https://github.com/huggingface/transformers/pull/8338.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8338.patch", "merged_at": 1604661658000 }
https://api.github.com/repos/huggingface/transformers/issues/8337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8337/comments
https://api.github.com/repos/huggingface/transformers/issues/8337/events
https://github.com/huggingface/transformers/issues/8337
737,222,352
MDU6SXNzdWU3MzcyMjIzNTI=
8,337
pip cannot install transformers with python version 3.X version on Ubuntu 18.04
{ "login": "varshachawan", "id": 25875832, "node_id": "MDQ6VXNlcjI1ODc1ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/25875832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/varshachawan", "html_url": "https://github.com/varshachawan", "followers_url": "https://api.github.com/users/varshachawan/followers", "following_url": "https://api.github.com/users/varshachawan/following{/other_user}", "gists_url": "https://api.github.com/users/varshachawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/varshachawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varshachawan/subscriptions", "organizations_url": "https://api.github.com/users/varshachawan/orgs", "repos_url": "https://api.github.com/users/varshachawan/repos", "events_url": "https://api.github.com/users/varshachawan/events{/privacy}", "received_events_url": "https://api.github.com/users/varshachawan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "since transformers 3.4.0 or earlier version tries to install senetencepiece 0.1.94 ( latest) and it fails on this platform\r\nHere is fix I tried \r\n>installation of `sentencepiece==0.1.5` and then transformers installations", "The issue seems to be with SentencePiece then, rather than with `transformers`.\r\n\r\nThis be should be solved by https://github.com/huggingface/transformers/pull/8073 as it will remove SentencePiece from the dependencies", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: ``` NAME="Ubuntu" VERSION="18.04.5 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.5 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic ``` - Python version: python3.X - PyTorch version (GPU?): no - Tensorflow version (GPU?): na - Using GPU in script?: na - Using distributed or parallel set-up in script?:na ### Who can help tokenizers: @mfuntowicz ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] pip install transformers ## To reproduce Steps to reproduce the behavior: 1. Create a VM with Ubuntu 18.04 2. Install any python3.X version 3. pip3 install transformers <!-- Requirement already satisfied: tokenizers==0.9.2 in /usr/local/lib/python3.6/dist-packages (from transformers) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) Requirement already satisfied: click in /usr/lib/python3/dist-packages (from sacremoses->transformers) Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf->transformers) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->transformers) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) Building wheels for collected packages: sentencepiece Running setup.py bdist_wheel for sentencepiece ... error Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-baldc7a3/sentencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmp0rhc8tr_pip-wheel- --python-tag cp36: running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece running build_ext /bin/sh: 1: pkg-config: not found Cloning into 'sentencepiece'... Note: checking out '8336bbd0c1cfba02a879afe625bf1ddaf7cd93c5'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b <new-branch-name> ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found make: *** No targets specified and no makefile found. Stop. make: *** No rule to make target 'install'. Stop. env: ‘pkg-config’: No such file or directory Failed to find sentencepiece pkg-config ---------------------------------------- Failed building wheel for sentencepiece Running setup.py clean for sentencepiece Failed to build sentencepiece Installing collected packages: sentencepiece, transformers Running setup.py install for sentencepiece ... error Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-baldc7a3/sentencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-cfvofwnm-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/__init__.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_model_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece copying src/sentencepiece/sentencepiece_pb2.py -> build/lib.linux-x86_64-3.6/sentencepiece running build_ext /bin/sh: 1: pkg-config: not found mkdir: cannot create directory ‘bundled’: File exists fatal: destination path 'sentencepiece' already exists and is not an empty directory. fatal: destination path 'sentencepiece' already exists and is not an empty directory. mkdir: cannot create directory ‘build’: File exists ./build_bundled.sh: 15: ./build_bundled.sh: cmake: not found make: *** No targets specified and no makefile found. Stop. make: *** No rule to make target 'install'. Stop. env: ‘pkg-config’: No such file or directory Failed to find sentencepiece pkg-config ---------------------------------------- Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-baldc7a3/sentencepiece/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-cfvofwnm-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-baldc7a3/sentencepiece/ -> ## Expected behavior Successful installation of transformers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8337/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8336/comments
https://api.github.com/repos/huggingface/transformers/issues/8336/events
https://github.com/huggingface/transformers/pull/8336
737,209,501
MDExOlB1bGxSZXF1ZXN0NTE2MjkxNjI5
8,336
Make Trainer evaluation handle dynamic seq_length
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,604
1,604
COLLABORATOR
null
# What does this PR do? In token classification problems or language modeling predictions, logits and labels are usually of shape `[batch_size, seq_length, ...]` and if one uses dynamic padding, the `seq_length` might change from batch to batch. This made the evaluate and predict methods of the `Trainer` fail, because the Trainer wants to concatenate all predictions in one array. This PR fixes this, by potentially padding up to the maximum seq length when needed, using the -100 pad index (since it's the marker for ignored labels in PyTorch). It add tests that were failing before the PR, to check the fix actually solves the issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8336/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8336/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8336", "html_url": "https://github.com/huggingface/transformers/pull/8336", "diff_url": "https://github.com/huggingface/transformers/pull/8336.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8336.patch", "merged_at": 1604607232000 }
https://api.github.com/repos/huggingface/transformers/issues/8335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8335/comments
https://api.github.com/repos/huggingface/transformers/issues/8335/events
https://github.com/huggingface/transformers/issues/8335
737,147,070
MDU6SXNzdWU3MzcxNDcwNzA=
8,335
Language Model to get only the score of predefined tokens
{ "login": "agombert", "id": 17645711, "node_id": "MDQ6VXNlcjE3NjQ1NzEx", "avatar_url": "https://avatars.githubusercontent.com/u/17645711?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agombert", "html_url": "https://github.com/agombert", "followers_url": "https://api.github.com/users/agombert/followers", "following_url": "https://api.github.com/users/agombert/following{/other_user}", "gists_url": "https://api.github.com/users/agombert/gists{/gist_id}", "starred_url": "https://api.github.com/users/agombert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agombert/subscriptions", "organizations_url": "https://api.github.com/users/agombert/orgs", "repos_url": "https://api.github.com/users/agombert/repos", "events_url": "https://api.github.com/users/agombert/events{/privacy}", "received_events_url": "https://api.github.com/users/agombert/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,613
1,610
NONE
null
# 🚀 Feature request Hey, In Masked Language Model such as [TFDistilBertForMaskedLM](https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertformaskedlm). The example explains well how to get the scores for [MASK] token. Nevertheless I did not find anything related to scale it up to 50k texts and especially get the results for a list of predefined tokens (let's say 10 tokens like in [MNLI examples for ZSL](https://huggingface.co/joeddav/xlm-roberta-large-xnli)). The feature would be to add a _only_ids_ parameter in the `call` function of `TFDistilBertForMaskedLM` to get at the end for each sentence only the scores of the predefines tokens we are interested in. Maybe I did not search at the right place, however I managed to find something by digging in source code of HuggingFace. <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation Well my motivation came from two things: I was trying to implement my own pipeline of the [PET Algorithm from Schik et al.](https://arxiv.org/pdf/2001.07676.pdf). And you need to perform weak labeling on a dataset of ~10k texts a few times (as much time as you have PVP couples). So first, I tried to code with the [TFDistilBertForMaskedLM](https://huggingface.co/transformers/model_doc/distilbert.html#tfdistilbertformaskedlm) class in order to do label the data. But came the second motivation: ```python ResourceExhaustedError: OOM when allocating tensor ``` This errors occured on colab and also in [Lambda GPU](https://lambdalabs.com) (gpu.1x.rtx6000) whenever I tried to do it for more than 1k sentences. I dug a bit and the errors seemed to come from the `call` function of the class `TFDistilBertLMHead`, because it manipulates vectors of shape `[bs, dim, config.voc_size]`. As I did not find anything on the issues or StackOverflow, I decided to get into the code of `modeling_tf_distilbert.py` to try to change a bit the `call` functions and try my code on [Lambda GPU](https://lambdalabs.com) (gpu.1x.rtx6000) to look for improvements (I used the test set of yahoo question dataset). ## Your contribution I changed a couple of things (not complicated things) in the classes and compared it (results below). First change (easy) - I would call it CS for cutting out the sequence: What we do is before applying the `vocab_projector` method we reduce the focus only on the mask token. It enables to manipulates tensors of shape (bs, 1, voc_size) instead of tensors of shape (bs, max_length, voc_size). ```python class CB_TFDistilBertForMaskedLM(TFDistilBertForMaskedLM): def call( self, inputs=None, attention_mask=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, mask_only=False #If you just focus on the mask token id change the False by it (103 here) ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the masked language modeling loss. 
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` """ return_dict = return_dict if return_dict is not None else self.distilbert.return_dict if isinstance(inputs, (tuple, list)): labels = inputs[7] if len(inputs) > 7 else labels if len(inputs) > 7: inputs = inputs[:7] elif isinstance(inputs, (dict, BatchEncoding)): labels = inputs.pop("labels", labels) distilbert_output = self.distilbert( inputs, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) hidden_states = distilbert_output[0] # (bs, seq_length, dim) prediction_logits = self.vocab_transform(hidden_states) # (bs, seq_length, dim) prediction_logits = self.act(prediction_logits) # (bs, seq_length, dim) prediction_logits = self.vocab_layer_norm(prediction_logits) # (bs, seq_length, dim) #transform from (bs, seq_length, dim) to (bs, 1, dim) where 1 is the MASK if mask_only: indexes = tf.where(inputs['input_ids'] == mask_only).numpy() prediction_logits = get_mask_rep(prediction_logits, indexes) prediction_logits = self.vocab_projector(prediction_logits, ids_only=ids_only) loss = None if labels is None else self.compute_loss(labels, prediction_logits) if not return_dict: output = (prediction_logits,) + distilbert_output[1:] return ((loss,) + output) if loss is not None else output return TFMaskedLMOutput( loss=loss, logits=prediction_logits, hidden_states=distilbert_output.hidden_states, attentions=distilbert_output.attentions ) def get_mask_rep(score, indexes): """ Objective: get the score only for the indexes of interest. Inputs: - score, np.array or tf.tensor: the scores for all vocabulary - indexes, np.array: the array indicating [MASK] positionning Outputs: - final_scores, np.array: the final scores but only for the tokens of interest """ n = score.shape[0] embedded_mask = dict(zip(indexes[:, 0], indexes[:, 1])) final_scores = [] for index in range(score.shape[0]): final_scores.append(score[index, embedded_mask.get(index), :].numpy().reshape(1, -1)) return np.array(final_scores) ``` Second change - I would call it RVCS for reducing the vocabulary tensor and cuting the sequence: The idea is to reduce the tensor of `_linear` method in the _TFEmbeddings_ to manipulates tensors of size (dim, n_tokens_of_interests) instead of tensors of (dim, vocab_size). ```python class CB_TFEmbeddings(TFEmbeddings): def call(self, input_ids=None, position_ids=None, inputs_embeds=None, mode="embedding", training=False, ids_only=False): """Get token embeddings of inputs. Args: inputs: list of two int64 tensors with shape [batch_size, length]: (input_ids, position_ids) mode: string, a valid value is one of "embedding" and "linear". Returns: outputs: (1) If mode == "embedding", output embedding tensor, float32 with shape [batch_size, length, embedding_size]; (2) mode == "linear", output linear tensor, float32 with shape [batch_size, length, vocab_size]. Raises: ValueError: if mode is not valid. 
Shared weights logic adapted from https://github.com/tensorflow/models/blob/a009f4fb9d2fc4949e32192a944688925ef78659/official/transformer/v2/embedding_layer.py#L24 """ if mode == "embedding": return self._embedding(input_ids, position_ids, inputs_embeds, training=training) elif mode == "linear": return self._linear(input_ids, ids_only) else: raise ValueError("mode {} is not valid.".format(mode)) def _linear(self, inputs, ids_only): """Computes logits by running inputs through a linear layer. Args: inputs: A float32 tensor with shape [batch_size, length, hidden_size] Returns: float32 tensor with shape [batch_size, length, vocab_size]. """ batch_size = shape_list(inputs)[0] length = shape_list(inputs)[1] x = tf.reshape(inputs, [-1, self.dim]) if ids_only:#reduce the word_embeddings vector to the token ids of interest vocab_size = len(ids_only) word_embeddings = tf.convert_to_tensor(self.word_embeddings.numpy()[ids_only, :]) else: word_embeddings = self.word_embeddings vocab_size = self.vocab_size logits = tf.matmul(x, word_embeddings, transpose_b=True) return tf.reshape(logits, [batch_size, length, vocab_size]) class CB_TFDistilBertMainLayer(TFDistilBertMainLayer): config_class = DistilBertConfig def __init__(self, config, **kwargs): super().__init__(config, **kwargs) self.num_hidden_layers = config.num_hidden_layers self.output_attentions = config.output_attentions self.output_hidden_states = config.output_hidden_states self.return_dict = config.use_return_dict self.embeddings = CB_TFEmbeddings(config, name="embeddings") # Embeddings self.transformer = TFTransformer(config, name="transformer") # Encoder class CB_TFDistilBertLMHead(TFDistilBertLMHead): def call(self, hidden_states, ids_only): hidden_states = self.input_embeddings(hidden_states, mode="linear", ids_only=ids_only) if ids_only: #if we focus only on some tokens, reduce the bias to those dimensions other_bias = tf.convert_to_tensor(self.bias.numpy()[ids_only]) bias = other_bias else: bias = self.bias hidden_states = hidden_states + bias return hidden_states class CB_TFDistilBertForMaskedLM(TFDistilBertForMaskedLM): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.vocab_size = config.vocab_size self.distilbert = CB_TFDistilBertMainLayer(config, name="distilbert") self.vocab_transform = tf.keras.layers.Dense( config.dim, kernel_initializer=get_initializer(config.initializer_range), name="vocab_transform" ) self.act = get_tf_activation("gelu") self.vocab_layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-12, name="vocab_layer_norm") self.vocab_projector = CB_TFDistilBertLMHead(config, self.distilbert.embeddings, name="vocab_projector") def call( self, inputs=None, attention_mask=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, mask_only=False #If you just focus on the mask token id change the False by it (103 here) only_ids=False #Otherwise the tokens of interests as a list ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the masked language modeling loss. 
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` """ return_dict = return_dict if return_dict is not None else self.distilbert.return_dict if isinstance(inputs, (tuple, list)): labels = inputs[7] if len(inputs) > 7 else labels if len(inputs) > 7: inputs = inputs[:7] elif isinstance(inputs, (dict, BatchEncoding)): labels = inputs.pop("labels", labels) distilbert_output = self.distilbert( inputs, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, ) hidden_states = distilbert_output[0] # (bs, seq_length, dim) prediction_logits = self.vocab_transform(hidden_states) # (bs, seq_length, dim) prediction_logits = self.act(prediction_logits) # (bs, seq_length, dim) prediction_logits = self.vocab_layer_norm(prediction_logits) # (bs, seq_length, dim) #transform from (bs, seq_length, dim) to (bs, 1, dim) where 1 is the MASK if mask_only: indexes = tf.where(inputs['input_ids'] == mask_only).numpy() prediction_logits = get_mask_rep(prediction_logits, indexes) prediction_logits = self.vocab_projector(prediction_logits, ids_only=only_ids) loss = None if labels is None else self.compute_loss(labels, prediction_logits) if not return_dict: output = (prediction_logits,) + distilbert_output[1:] return ((loss,) + output) if loss is not None else output return TFMaskedLMOutput( loss=loss, logits=prediction_logits, hidden_states=distilbert_output.hidden_states, attentions=distilbert_output.attentions ) def get_mask_rep(score, indexes): """ Objective: get the score only for the indexes of interest. Inputs: - score, np.array or tf.tensor: the scores for all vocabulary - indexes, np.array: the array indicating [MASK] positioning Outputs: - final_scores, np.array: the final scores but only for the tokens of interest """ n = score.shape[0] embedded_mask = dict(zip(indexes[:, 0], indexes[:, 1])) final_scores = [] for index in range(score.shape[0]): final_scores.append(score[index, embedded_mask.get(index), :].numpy().reshape(1, -1)) return np.array(final_scores) ``` Thus, after that I uploaded a notebook on Lambda and looked at the speed of each version of the code and especially at how many sentences I could process with the same GPU (gpu.1x.rtx6000): the first line is the approximate maximum number of sentences I could process at once. The next lines are the times to process _n_ sentences in batches of 64. If there is a cross, it is because we got the OOM error before reaching the total number of sentences. | Method | Normal | CS | RVCS | |:--------:|:--------:|:--------:|:--------:| |max batch| 450 | 4500 | 4500 | |900 stcs| 1.21 | 1.20 | 4.5 | |30k stcs| x | 42.2 | 134.4 | |40k stcs| x | 53.5 | 180.7 | |50k stcs| x | x | 228.4 | |60k stcs| x | x | 272.7 | Well, it is not much, but at least we can use the GPU for 40k sentences with a little change in the tensor shapes and with decent performance. If we add the change of reducing the tensors in the `_linear` method, we reduce the speed by a factor of 3 but we can run the process on more sentences with the same GPU. 
Code for the experiment: ```python import time patterns = create_patterns('{} [SEP] it talks about [MASK]') verbalizers = ['Society', 'Science', 'Health', 'Education', 'Computer', 'Sports', 'Business', 'Entertainment', 'Relationship', 'Politics'] verb_ids = get_token_id(verbalizers, tokenizer) model = TFDistilBertForMaskedLM.from_pretrained("distilbert-base-multilingual-cased") #model = CB_TFDistilBertForMaskedLM.from_pretrained("distilbert-base-multilingual-cased") step=64 st = time.time() for i in range(0, n, step): texts = apply_pattern(sentences[i:i + step], patterns[1]) inputs = tokenizer.batch_encode_plus(texts, return_tensors='tf', max_length=256, truncation=True, padding=True) score = model(inputs, return_dict=True)#, mask_only=103, ids_only=verb_ids) print(time.time() - st) ```
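For reference, the per-token scores can also be obtained without modifying the model internals, by running the stock `TFDistilBertForMaskedLM` and gathering only the vocabulary columns of interest from the logits at the [MASK] positions. This is only a minimal sketch (the example sentence and the verbalizer tokens are placeholders); it still materializes the full-vocabulary logits for a batch, so it does not by itself avoid the OOM issue and small batch sizes may still be needed:

```python
import tensorflow as tf
from transformers import DistilBertTokenizer, TFDistilBertForMaskedLM

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = TFDistilBertForMaskedLM.from_pretrained("distilbert-base-multilingual-cased")

texts = ["Paris is the [MASK] of France."]                         # one patterned sentence per row
verb_ids = tokenizer.convert_tokens_to_ids(["capital", "center"])  # tokens of interest (placeholders)

inputs = tokenizer(texts, return_tensors="tf", padding=True, truncation=True)
logits = model(inputs, return_dict=True).logits                    # (bs, seq_len, vocab_size)

# assuming one [MASK] per sentence: gather its logits, then keep only the verb_ids columns
mask_positions = tf.where(inputs["input_ids"] == tokenizer.mask_token_id)
mask_logits = tf.gather_nd(logits, mask_positions)                 # (bs, vocab_size)
scores = tf.gather(mask_logits, verb_ids, axis=-1)                 # (bs, len(verb_ids))
```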
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8335/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8334/comments
https://api.github.com/repos/huggingface/transformers/issues/8334/events
https://github.com/huggingface/transformers/pull/8334
737,125,895
MDExOlB1bGxSZXF1ZXN0NTE2MjIxNDQz
8,334
Model card: T5-base fine-tuned on QuaRel
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Nice! => https://huggingface.co/datasets/quarel" ]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8334/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8334", "html_url": "https://github.com/huggingface/transformers/pull/8334", "diff_url": "https://github.com/huggingface/transformers/pull/8334.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8334.patch", "merged_at": 1604650195000 }
https://api.github.com/repos/huggingface/transformers/issues/8333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8333/comments
https://api.github.com/repos/huggingface/transformers/issues/8333/events
https://github.com/huggingface/transformers/issues/8333
737,123,937
MDU6SXNzdWU3MzcxMjM5Mzc=
8,333
FRCNN in the LXMERT demo outputs different features when using a local image vs. an image from a URL
{ "login": "ecekt", "id": 16474496, "node_id": "MDQ6VXNlcjE2NDc0NDk2", "avatar_url": "https://avatars.githubusercontent.com/u/16474496?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ecekt", "html_url": "https://github.com/ecekt", "followers_url": "https://api.github.com/users/ecekt/followers", "following_url": "https://api.github.com/users/ecekt/following{/other_user}", "gists_url": "https://api.github.com/users/ecekt/gists{/gist_id}", "starred_url": "https://api.github.com/users/ecekt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ecekt/subscriptions", "organizations_url": "https://api.github.com/users/ecekt/orgs", "repos_url": "https://api.github.com/users/ecekt/repos", "events_url": "https://api.github.com/users/ecekt/events{/privacy}", "received_events_url": "https://api.github.com/users/ecekt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Pinging @eltoto1219 ", "Hi @eltoto1219 I see that my issue and proposed solution have been mentioned in a more recent issue (#8769), should I follow there for the confirmation?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,604
1,619
1,619
NONE
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-5.4.0-52-generic-x86_64-with-debian-buster-sid - Python version: 3.7.4 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @eltoto1219 ## Information Model I am using: LXMERT, more specifically FRCNN. The problem arises when using the demo.ipynb file given here: https://github.com/huggingface/transformers/blob/master/examples/lxmert/demo.ipynb ## To reproduce 1. Change the URL to the filename of a local image. Both the visualization and the extracted features are affected by this. ## Expected behavior I noticed that the order of Red, Green, Blue is changed on this line in the utils file: https://github.com/huggingface/transformers/blob/7abc1d96d114873d9c3c2f1bc81343fb1407cec4/examples/lxmert/utils.py#L552 When I comment out this line, the visualization and the features output by the model for a local image are the same as when we use its URL without commenting this line out. I would appreciate it if you could confirm that commenting out that line does not cause other issues in LXMERT so that I can use local files. Best, Ece
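For anyone hitting the same mismatch, one workaround that does not require editing `utils.py` is to reorder the channels of the locally loaded image yourself so that both code paths feed the model the same channel order. This is only a rough sketch (the file name is a placeholder, and whether you need RGB→BGR or the reverse depends on which branch of `utils.py` handles your input):

```python
import numpy as np
from PIL import Image

img = np.array(Image.open("local_image.jpg").convert("RGB"))  # H x W x 3 array in RGB order
img_bgr = img[:, :, ::-1].copy()  # reverse the channel axis (RGB <-> BGR), as utils.py does on L552
```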
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8333/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8333/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8332/comments
https://api.github.com/repos/huggingface/transformers/issues/8332/events
https://github.com/huggingface/transformers/issues/8332
737,115,868
MDU6SXNzdWU3MzcxMTU4Njg=
8,332
Measuring time when using xla_spawn on multiple cores
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, I think you should take a look at the [troubleshooting section over at pytorch/xla](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md), it has a lot of information on how to debug on TPU.\r\n\r\nThe metrics report they mention in the first paragraph can be output by the trainer with the `--debug` flag.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Hi, I would like to measure the time of the program when it is launched in a distributed way on multiple TPU cores with xla_spawn. I tried to write a callback like the one below, which returns the same results with num_cores = 1 and 8, and I am not sure how to handle it when the run is distributed. I want to make sure PyTorch XLA is indeed using all cores; thank you for your help. ``` class PrinterTimeCallback(TrainerCallback): #A bare :class:`~transformers.TrainerCallback` that just prints the logs. def on_epoch_begin(self, args, state, control, **kwargs): """ Event called at the beginning of an epoch. """ if state.is_local_process_zero: self.start_time = time.time() def on_epoch_end(self, args, state, control, **kwargs): """ Event called at the end of an epoch. """ if state.is_local_process_zero: self.end_time = time.time() total_time = self.end_time - self.start_time print("total_time ", total_time) ```
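One way to check that all cores are actually running (a rough sketch, assuming `torch_xla` is installed; this is not an official Trainer API) is to have each process tag its timing with its XLA ordinal, so an 8-core run should print eight lines per epoch:

```python
import time

import torch_xla.core.xla_model as xm
from transformers import TrainerCallback


class XlaEpochTimerCallback(TrainerCallback):
    def on_epoch_begin(self, args, state, control, **kwargs):
        self.start_time = time.time()

    def on_epoch_end(self, args, state, control, **kwargs):
        elapsed = time.time() - self.start_time
        # every spawned process prints its own line, tagged with its ordinal and the world size
        print(f"[xla:{xm.get_ordinal()}/{xm.xrt_world_size()}] epoch time: {elapsed:.1f}s")
```

Since the cores run in lockstep, the per-epoch wall time should be roughly the same on every ordinal; a stronger signal that the workload is really distributed is that the time per epoch drops compared to a single-core run.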
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8332/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8331/comments
https://api.github.com/repos/huggingface/transformers/issues/8331/events
https://github.com/huggingface/transformers/pull/8331
737,092,680
MDExOlB1bGxSZXF1ZXN0NTE2MTk0ODAz
8,331
Flax/Jax documentation
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,604
1,605
1,605
MEMBER
null
Integrating Flax & JAX into transformers documentation
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8331/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8331/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8331", "html_url": "https://github.com/huggingface/transformers/pull/8331", "diff_url": "https://github.com/huggingface/transformers/pull/8331.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8331.patch", "merged_at": 1605124417000 }
https://api.github.com/repos/huggingface/transformers/issues/8330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8330/comments
https://api.github.com/repos/huggingface/transformers/issues/8330/events
https://github.com/huggingface/transformers/pull/8330
737,063,704
MDExOlB1bGxSZXF1ZXN0NTE2MTcxNjc3
8,330
Docs bart training ref
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the contribution!\r\ncc @LysandreJik " ]
1,604
1,604
1,604
MEMBER
null
# What does this PR do? Following the discussion on #7828 with @sshleifer and others I created a minimal example on the forum and added a reference to the Bart documentation. Let me know if this works for you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8330/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8330", "html_url": "https://github.com/huggingface/transformers/pull/8330", "diff_url": "https://github.com/huggingface/transformers/pull/8330.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8330.patch", "merged_at": 1604614858000 }
https://api.github.com/repos/huggingface/transformers/issues/8329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8329/comments
https://api.github.com/repos/huggingface/transformers/issues/8329/events
https://github.com/huggingface/transformers/pull/8329
737,019,962
MDExOlB1bGxSZXF1ZXN0NTE2MTM0NDUx
8,329
[Seq2SeqDataCollator] dont pass add_prefix_space=False to all tokenizers
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We should at some point test whether, for BART, this impacts performance." ]
1,604
1,604
1,604
CONTRIBUTOR
null
T5Tokenizer will warn about an unrecognized kwarg; this PR fixes that warning. cc @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8329/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8329", "html_url": "https://github.com/huggingface/transformers/pull/8329", "diff_url": "https://github.com/huggingface/transformers/pull/8329.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8329.patch", "merged_at": 1604594545000 }
https://api.github.com/repos/huggingface/transformers/issues/8328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8328/comments
https://api.github.com/repos/huggingface/transformers/issues/8328/events
https://github.com/huggingface/transformers/pull/8328
737,004,037
MDExOlB1bGxSZXF1ZXN0NTE2MTIxMjEx
8,328
Return raw outputs in TextClassificationPipeline
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Will that flag be configurable on a model-level for the hosted Inference API, and if yes, how?", "Through the configuration, as specified in the original issue would be the easiest way to control it through the API. Haven't thought about it this far however.\r\n\r\nThe other option would be to disable the `sigmoid` over the output and let the user handle that instead, but this would break existing models on the inference API such as the DialogRPT model.", "Here's a proposal @julien-c (see latest commit).\r\n\r\nThe current `TextClassificationPipeline` has a task set to `\"\"` as it is not initialized from `pipeline(task)`.\r\nThis proposal puts the default `task` as `\"text-classification\"` for the `TextClassificationPipeline`. Since the pipeline will fetch the `task_specific_params` according to the given task, it can therefore fetch the following configuration option (which can of course already be set in the configuration on S3):\r\n\r\n```py\r\nconfig = GPT2Config.from_pretrained(\"microsoft/DialogRPT-updown\", task_specific_params={\"text-classification\": {\"return_raw_outputs\": True}})\r\nmodel = GPT2ForSequenceClassification.from_pretrained(\"microsoft/DialogRPT-updown\", config=config)\r\n```\r\n\r\nThis will be used as the default value for the pipeline, and allow users to specify pipeline-specific arguments directly in their model configuration.\r\n\r\nThe proposal was only made for the `TextClassificationPipeline`, but it would need to be made to all other pipelines to stay coherent. Let me know if this is an approach you would be interested in.\r\n", "pinging @Narsil and @mfuntowicz for their thoughts on this (we have ways to get configurable params on the API, just wanted to make sure we could use them for this)", "I've also run into this issue recently when uploading some multi-label text classification models to the HuggingFace hub, where it seems like the default activation for multiple classes is a softmax.\r\n\r\nHaving it return raw outputs is definitely useful, however, it still wouldn't show the scores I would expect in the inference api - in our case using a sigmoid over the results to allow more than one positive label?\r\n\r\nWould it be useful to also have an argument like `output_nonlinearity` for the desired activation function e.g. sigmoid/softmax?\r\n", "Hi @laurahanu, thanks for your proposal. I'll try and see how to best integrate that in this PR.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hello, was just wondering if there are any updates on allowing the user to choose which function to run on the outputs? It seems like in the current documentation, the pipeline would still only run a sigmoid over the result if there is one label.\r\n\r\n> If multiple classification labels are available (model.config.num_labels >= 2), the pipeline will run a softmax over the results. If there is a single label, the pipeline will run a sigmoid over the result.\r\n", "Sorry for taking a long time to merge this; I wonder if we can't re-use the recently introduced `problem_type` instead, wdyt @sgugger @abhi1thakur?", "Yes, this flag is there for this reason specifically :-) ", "Thanks for reopening this and having another look at it! 
Do you have a timeline in mind for when this would be merged/available to use with the model hub?", "Hey @laurahanu, the PR should now be ready for merging. I'm pinging two team members for review; this should be merged in the coming days and deployed on the API shortly after.", "Great, thank you @LysandreJik!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@LysandreJik is this ready to be merged?" ]
1,604
1,628
1,628
MEMBER
null
Currently the `TextClassificationPipeline` does a softmax over the output values when `num_labels > 1`, and a sigmoid over the output when `num_labels == 1`. As seen in https://github.com/huggingface/transformers/issues/8259, this may be problematic when systems depend on a different output range. This PR adds a flag `return_raw_outputs` that skips the sigmoid or softmax in that case. closes #8259
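Until such a flag is available, one way to get unscaled scores today (a small sketch using the existing public API, with the DialogRPT checkpoint discussed in the linked issue) is to skip the pipeline and read the logits from the model directly:

```python
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

tokenizer = GPT2Tokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")

inputs = tokenizer("I love this!", return_tensors="pt")
logits = model(**inputs, return_dict=True).logits  # raw value(s), no sigmoid or softmax applied
```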
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8328/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8328", "html_url": "https://github.com/huggingface/transformers/pull/8328", "diff_url": "https://github.com/huggingface/transformers/pull/8328.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8328.patch", "merged_at": 1628080967000 }
https://api.github.com/repos/huggingface/transformers/issues/8327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8327/comments
https://api.github.com/repos/huggingface/transformers/issues/8327/events
https://github.com/huggingface/transformers/pull/8327
736,975,470
MDExOlB1bGxSZXF1ZXN0NTE2MDk2ODQ0
8,327
Create README.md
{ "login": "yfpeng", "id": 2766437, "node_id": "MDQ6VXNlcjI3NjY0Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/2766437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yfpeng", "html_url": "https://github.com/yfpeng", "followers_url": "https://api.github.com/users/yfpeng/followers", "following_url": "https://api.github.com/users/yfpeng/following{/other_user}", "gists_url": "https://api.github.com/users/yfpeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/yfpeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yfpeng/subscriptions", "organizations_url": "https://api.github.com/users/yfpeng/orgs", "repos_url": "https://api.github.com/users/yfpeng/repos", "events_url": "https://api.github.com/users/yfpeng/events{/privacy}", "received_events_url": "https://api.github.com/users/yfpeng/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,604
1,604
1,604
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSTM: @stas00 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8327/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8327", "html_url": "https://github.com/huggingface/transformers/pull/8327", "diff_url": "https://github.com/huggingface/transformers/pull/8327.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8327.patch", "merged_at": 1604650973000 }
https://api.github.com/repos/huggingface/transformers/issues/8326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8326/comments
https://api.github.com/repos/huggingface/transformers/issues/8326/events
https://github.com/huggingface/transformers/issues/8326
736,966,716
MDU6SXNzdWU3MzY5NjY3MTY=
8,326
Multi GPU training with torch==1.7.0 not working
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I haven't noticed anything strange on my side, but didn't play much with 1.7 yet. Could you post the code you're using?", "> Could you post the code you're using?\r\n\r\nThat is not possible - it is some company code. If you have some \"toy\" code maybe you could try it.\r\n\r\nI am using an electra base model. Pretokenize everything. Create a torch Dataset for classification (binary).\r\nThen I am using the normal Trainer to train...\r\n\r\n", "On AWS Sagemaker it works with Torch 1.7.0.\r\nI think this might be an NVIDIA Driver probelm on my side...\r\nI am closing this for now. Will reopen if needed.\r\n... sorry ..." ]
1,604
1,604
1,604
CONTRIBUTOR
null
Hi, multi GPU training with torch==1.7.0 is not working. I have the following configuration: - CUDA 11.0 - V100 GPU (1 or 4) - Torch 1.7.0 (1.6.0) - transformers 3.4.0 - Linux When I train with one GPU everything works. When I run it with 4 GPUs, the CPU load goes up to 100% on one core, about 1.5 MB of memory is used on each GPU, but the training does not start. When I downgrade torch to version 1.6.0 everything works. It might be an upstream problem, but you might want to warn users against using torch 1.7.0. PS: Do you think I should report this at the torch repo?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8326/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8325/comments
https://api.github.com/repos/huggingface/transformers/issues/8325/events
https://github.com/huggingface/transformers/issues/8325
736,951,015
MDU6SXNzdWU3MzY5NTEwMTU=
8,325
Adding call back to measure time of each step
{ "login": "cameronalonso2", "id": 73615224, "node_id": "MDQ6VXNlcjczNjE1MjI0", "avatar_url": "https://avatars.githubusercontent.com/u/73615224?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cameronalonso2", "html_url": "https://github.com/cameronalonso2", "followers_url": "https://api.github.com/users/cameronalonso2/followers", "following_url": "https://api.github.com/users/cameronalonso2/following{/other_user}", "gists_url": "https://api.github.com/users/cameronalonso2/gists{/gist_id}", "starred_url": "https://api.github.com/users/cameronalonso2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cameronalonso2/subscriptions", "organizations_url": "https://api.github.com/users/cameronalonso2/orgs", "repos_url": "https://api.github.com/users/cameronalonso2/repos", "events_url": "https://api.github.com/users/cameronalonso2/events{/privacy}", "received_events_url": "https://api.github.com/users/cameronalonso2/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,604
1,610
1,610
NONE
null
Hi, I need to measure the time of each training step with the distributed version of seq2seq_trainer on TPUs. Could you provide me with some hints on how to measure the time? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8325/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8324/comments
https://api.github.com/repos/huggingface/transformers/issues/8324/events
https://github.com/huggingface/transformers/pull/8324
736,919,789
MDExOlB1bGxSZXF1ZXN0NTE2MDQ5ODUw
8,324
Model versioning
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "id": 1802861720, "node_id": "MDU6TGFiZWwxODAyODYxNzIw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20CLI", "name": "Core: CLI", "color": "FF6426", "default": false, "description": "" }, { "id": 1834067346, "node_id": "MDU6TGFiZWwxODM0MDY3MzQ2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation", "name": "Documentation", "color": "77cc3b", "default": false, "description": "" }, { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "LGTM!", "When this PR is merged, how do I modify:\r\n```\r\n data_cached = cached_path(\r\n \"https://cdn-datasets.huggingface.co/translation/wmt_en_ro-tr40k-va0.5k-te0.5k.tar.gz\",\r\n extract_compressed_file=True,\r\n )\r\n```\r\nto take advantage of this new API? https://github.com/huggingface/transformers/blob/master/examples/seq2seq/test_bash_script.py#L27\r\nThis code was just merged yesterday." ]
1,604
1,605
1,605
MEMBER
null
**Write-up about the context for this change, and the enabled features is at https://discuss.huggingface.co/t/announcement-model-versioning-upcoming-changes-to-the-model-hub/1914** In short: 1. changes to the file downloading code used in `from_pretrained()` methods to use the new file URLs backed by huggingface-hosted git repos. 2. changes to the model upload CLI to create a model repo then be able to git clone and git push to it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8324/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 7, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8324/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8324", "html_url": "https://github.com/huggingface/transformers/pull/8324", "diff_url": "https://github.com/huggingface/transformers/pull/8324.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8324.patch", "merged_at": 1605010263000 }
https://api.github.com/repos/huggingface/transformers/issues/8323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8323/comments
https://api.github.com/repos/huggingface/transformers/issues/8323/events
https://github.com/huggingface/transformers/issues/8323
736,911,935
MDU6SXNzdWU3MzY5MTE5MzU=
8,323
AlbertTransformer head_mask not subscriptable error when not passed
{ "login": "baeseongsu", "id": 32122993, "node_id": "MDQ6VXNlcjMyMTIyOTkz", "avatar_url": "https://avatars.githubusercontent.com/u/32122993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/baeseongsu", "html_url": "https://github.com/baeseongsu", "followers_url": "https://api.github.com/users/baeseongsu/followers", "following_url": "https://api.github.com/users/baeseongsu/following{/other_user}", "gists_url": "https://api.github.com/users/baeseongsu/gists{/gist_id}", "starred_url": "https://api.github.com/users/baeseongsu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/baeseongsu/subscriptions", "organizations_url": "https://api.github.com/users/baeseongsu/orgs", "repos_url": "https://api.github.com/users/baeseongsu/repos", "events_url": "https://api.github.com/users/baeseongsu/events{/privacy}", "received_events_url": "https://api.github.com/users/baeseongsu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik Could I put this issue on PR?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I am facing the same issue while using `transformers 4.5.1`. Is there any workaround for this issue ?", "Hi, @harikc456 \r\nYou can try to modify the source code. https://github.com/huggingface/transformers/blob/master/src/transformers/models/albert/modeling_albert.py#L463\r\nThere are no indices in None object (`head_mask` ). \r\nSo you can try like the followings:\r\n```python\r\nhead_mask = [None] * self.config.num_hidden_layers if head_mask is None else head_mask\r\n```", "@baeseongsu, sorry to have missed this. If you still feel like opening a PR, we would welcome it! It seems many models have this issue, as using the encoder call separately isn't something that tends to happen often.", "Hello, @LysandreJik. Thank you for checking 😄 \r\nExactly, right. In most of the cases, we don't need to call the internal classes.\r\nAnyway, I will try asap !!", "For now, I have able to bypass this issue with the following workaround\r\n```\r\nhead_mask = [None] * 12\r\nout = model.encoder(z, head_mask = head_mask)\r\n```" ]
1,604
1,620
1,620
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.4.0 - Platform: Linux version 4.4.0-193-generic / ubuntu - Python version: 3.6.5 - PyTorch version (GPU?): 1.4.0 (cuda 10.0) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.\ --> @LysandreJik ## Information AlbertTransformer has `head_mask` arguments with default value of None. [(modeling_albert.py#L416)](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L416) But in forward, `head_mask` is indexed by group_idx without checking if head_mask is None. [(modeling_albert.py#L436)](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L436) So I got the error as below. (it seems like a similar issue [#1188](https://github.com/huggingface/transformers/issues/1188)) The problem arises when using: * [-] the official example scripts: (give details below) * [O] my own modified scripts: (give details below) The tasks I am working on is: * [-] an official GLUE/SQUaD task: (give the name) * [O] my own task or dataset: (give details below) ## To reproduce ```python from transformers import AlbertTokenizer, AlbertModel tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') model = AlbertModel.from_pretrained('albert-base-v2', return_dict=True) inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") inputs2 = model.embeddings(input_ids=inputs.input_ids, token_type_ids=inputs.token_type_ids, position_ids=None, inputs_embeds=None) model.encoder(inputs2) ``` error message when i execute the above script. ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-1-4fc3cd45df51> in <module>() 9 position_ids=None, 10 inputs_embeds=None) ---> 11 model.encoder(inputs2) ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 530 result = self._slow_forward(*input, **kwargs) 531 else: --> 532 result = self.forward(*input, **kwargs) 533 for hook in self._forward_hooks.values(): 534 hook_result = hook(self, input, result) ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/modeling_albert.py in forward(self, hidden_states, attention_mask, head_mask, output_attentions, output_hidden_states, return_dict) 434 hidden_states, 435 attention_mask, --> 436 head_mask[group_idx * layers_per_group : (group_idx + 1) * layers_per_group], 437 output_attentions, 438 output_hidden_states, TypeError: 'NoneType' object is not subscriptable ``` ## Expected behavior `model.encoder(inputs2)` encoder will be output of tensor. Espeically, shape of the tensor will be (1, 8, 768) Personally, I think we could add one more line of code before `for loop` ```python head_mask = [None] * self.config.num_hidden_layers if head_mask is None else head_mask ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8323/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8323/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/8322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8322/comments
https://api.github.com/repos/huggingface/transformers/issues/8322/events
https://github.com/huggingface/transformers/issues/8322
736,911,799
MDU6SXNzdWU3MzY5MTE3OTk=
8,322
Keyword arguments {'add_prefix_space': False} not recognized.
{ "login": "rabeehkarimimahabadi", "id": 73364383, "node_id": "MDQ6VXNlcjczMzY0Mzgz", "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehkarimimahabadi", "html_url": "https://github.com/rabeehkarimimahabadi", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @rabeehkarimimahabadi. Could you please respect the issue template when opening issues? It is necessary for us to help you, otherwise we'll have to ask again and again for you to provide more information.\r\n\r\nPlease complete the issue template with the information related to your environment, to the command you launched, and the full error with the stack trace. Thanks.", "Hi\r\nI am sorry\r\nhere is the full command I use , it is running finetune_trainer with T5\r\nmodel. There is not an error, only it gives this warning. thanks\r\n\r\n\r\npython xla_spawn.py --num_cores 8 finetune_trainer.py\r\n--learning_rate=3e-5 --n_val 1000 --model_name_or_path t5-small\r\n--data_dir data/wmt_en_de/ --output_dir /home/rabeeh/temp/\r\n--overwrite_output_dir --tpu_num_cores=8 --max_source_length=64\r\n--max_target_length=64 --per_device_train_batch_size=32\r\n--per_device_eval_batch_size=32 --label_smoothing=0.1 --task=\"translation\"\r\n--logging_steps=200 --eval_steps=500 --num_train_epochs=6 --save_steps=500\r\n--max_source_length=128 --max_target_length=128\r\n--val_max_target_length=128 --test_max_target_length=128\r\n\r\n", "Since you're using the T5 model I believe you can safely ignore that warning, which is useful only for the byte-level BPE tokenizers (think GPT-2, RoBERTa).\r\n\r\nMaybe @patil-suraj or @sshleifer know why such a warning is thrown when using T5 with the `Seq2SeqTrainer`", "Yeah it's happening in seq2seq dataset, you can safely ignore. Fixed in #8329 ", "Hi,\nthank you. Is this applied to the repo? thanks\n\nOn Thu, Nov 5, 2020 at 4:15 PM Sam Shleifer <[email protected]>\nwrote:\n\n> Yeah it's happening in seq2seq dataset, you can safely ignore. Fixed in\n> #8329 <https://github.com/huggingface/transformers/pull/8329>\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/8322#issuecomment-722441777>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ARPXHH2WLTMU6NJ7B2K2Y33SOK6SZANCNFSM4TLKSTWQ>\n> .\n>\n", "Yes, the fix has just been merged. ", "Hi there, this is not fixed for me. and still I am getting it with running T5 using finetune_trainer. ", "Have you pulled from `master` to get the updated version?\r\n\r\nIn any case, as I've said before, this is a warning and not an error. You can safely ignore it." ]
1,604
1,604
1,604
NONE
null
Hi, I am getting this message when running finetune_trainer.py on TPUs. Any idea how to remove it? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8322/timeline
completed
null
null