url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/11338 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11338/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11338/comments | https://api.github.com/repos/huggingface/transformers/issues/11338/events | https://github.com/huggingface/transformers/issues/11338 | 862,842,133 | MDU6SXNzdWU4NjI4NDIxMzM= | 11,338 | tf generate compatible with tf.function | {
"login": "kev1876",
"id": 11463123,
"node_id": "MDQ6VXNlcjExNDYzMTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/11463123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kev1876",
"html_url": "https://github.com/kev1876",
"followers_url": "https://api.github.com/users/kev1876/followers",
"following_url": "https://api.github.com/users/kev1876/following{/other_user}",
"gists_url": "https://api.github.com/users/kev1876/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kev1876/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kev1876/subscriptions",
"organizations_url": "https://api.github.com/users/kev1876/orgs",
"repos_url": "https://api.github.com/users/kev1876/repos",
"events_url": "https://api.github.com/users/kev1876/events{/privacy}",
"received_events_url": "https://api.github.com/users/kev1876/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | Because the TF `generate` function is not compatible with `tf.function`, I cannot use this function with TF Serving.
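For context, a hedged sketch of the kind of export the report refers to: tracing `generate` into a `tf.function` signature so the model can be deployed behind TensorFlow Serving. The checkpoint, signature and `max_length` below are illustrative assumptions, and whether this traces cleanly depends on the `transformers` version (per this issue, it did not at the time).
```python
# Hypothetical sketch (not from the issue): wrap generate() in a tf.function
# signature for TensorFlow Serving. generate() relied on Python-side control
# flow at the time, so this kind of tracing is what failed.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
def serve(input_ids):
    return model.generate(input_ids, max_length=32)

tf.saved_model.save(model, "saved_model/gpt2/1", signatures={"serving_default": serve})
```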
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11338/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11338/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11337 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11337/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11337/comments | https://api.github.com/repos/huggingface/transformers/issues/11337/events | https://github.com/huggingface/transformers/pull/11337 | 862,797,159 | MDExOlB1bGxSZXF1ZXN0NjE5MzA4MDQ1 | 11,337 | Adding `AutomaticSpeechRecognitionPipeline`. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Thanks for working on it!\r\n> \r\n> It is very specific to S2T and Wav2Vec2 but I don't think that's too much of an issue, we can adapt later.\r\n> \r\n> Could you add this pipeline to:\r\n> \r\n> * the documentation\r\n\r\nYes !\r\n> \r\n> * the main init\r\n\r\nYes\r\n\r\n> \r\n> * the `pipeline` factory method\r\n\r\nI wanted to defer this into a follow-up PR, because of the AutoModel quirckness and to avoid making super long PRs.\r\nIf you feel we can't have it as a separate PR, I'll start including the work directly here.\r\n> \r\n> \r\n> We will also probably need a new auto model.\r\n\r\n"
] | 1,618 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
- Because we added everything to enable this pipeline, we probably
should add it to `transformers`.
- This PR tries to limit the scope and focuses only on the pipeline part
(what should go in, and out).
- The tests are very specific for S2T and Wav2vec2 to make sure both
architectures are supported by the pipeline. We don't use the mixin for
tests right now, because that requires more work in the `pipeline`
function (will be done in a follow up PR).
- Unsure about the "helper" function `ffmpeg_read`. It makes a lot of
sense from a user perspective, it does not add any additional
dependencies (as in hard dependency, because users can always use their
own load mechanism). Meanwhile, it feels slightly clunky to have so much
optional preprocessing.
- The pipeline is not done to support streaming audio right now.
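A rough usage sketch of the pipeline this PR introduces, constructed by hand because wiring it into the `pipeline()` factory is deferred to a follow-up PR; the checkpoint and the audio file name are illustrative assumptions, not taken from the PR:
```python
# Rough sketch of the new pipeline in the non-streaming case. The exact
# constructor arguments may differ slightly between versions.
from transformers import (
    AutomaticSpeechRecognitionPipeline,
    AutoTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForCTC,
)

model_id = "facebook/wav2vec2-base-960h"  # assumed checkpoint
asr = AutomaticSpeechRecognitionPipeline(
    model=Wav2Vec2ForCTC.from_pretrained(model_id),
    feature_extractor=Wav2Vec2FeatureExtractor.from_pretrained(model_id),
    tokenizer=AutoTokenizer.from_pretrained(model_id),
)

# A filename goes through the optional ffmpeg_read helper mentioned above.
print(asr("sample.flac"))  # expected to return something like {"text": "..."}
```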
# Future work:
- Add `automatic-speech-recognition` as a `task`. And add the
FeatureExtractor.from_pretrained within `pipeline` function.
- Add small models within tests
- Add the Mixin to tests.
- Make the logic between ForCTC vs ForConditionalGeneration better.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11337/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11337",
"html_url": "https://github.com/huggingface/transformers/pull/11337",
"diff_url": "https://github.com/huggingface/transformers/pull/11337.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11337.patch",
"merged_at": 1619776448000
} |
https://api.github.com/repos/huggingface/transformers/issues/11336 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11336/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11336/comments | https://api.github.com/repos/huggingface/transformers/issues/11336/events | https://github.com/huggingface/transformers/issues/11336 | 862,746,183 | MDU6SXNzdWU4NjI3NDYxODM= | 11,336 | M2M-100 SentencePiece model produces tokens that are missing on the fixed dictionary | {
"login": "jofregit",
"id": 29544402,
"node_id": "MDQ6VXNlcjI5NTQ0NDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/29544402?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jofregit",
"html_url": "https://github.com/jofregit",
"followers_url": "https://api.github.com/users/jofregit/followers",
"following_url": "https://api.github.com/users/jofregit/following{/other_user}",
"gists_url": "https://api.github.com/users/jofregit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jofregit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jofregit/subscriptions",
"organizations_url": "https://api.github.com/users/jofregit/orgs",
"repos_url": "https://api.github.com/users/jofregit/repos",
"events_url": "https://api.github.com/users/jofregit/events{/privacy}",
"received_events_url": "https://api.github.com/users/jofregit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | ## 🐛 Bug
The SentencePiece model for [M2M-100](https://huggingface.co/transformers/model_doc/m2m_100.html) (https://dl.fbaipublicfiles.com/m2m_100/spm.128k.model) generates several tokens that are missing from the fixed dictionary (https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt).
### To Reproduce
Steps to reproduce the behavior:
1. Tokenize the following sentence with the SentencePiece model for M2M-100:
```
import os
import sentencepiece as spm

# model_path is assumed to point at the directory containing spm.128k.model
sentence = "My dog perhaps enjoyed music."
tokenizer = spm.SentencePieceProcessor(model_file=os.path.join(model_path, 'spm.128k.model'))
tokenizer.EncodeAsPieces(sentence)
```
2. See the tokens generated: ['▁My', '▁dog', '▁perhaps', '▁enjoyed', '▁music', '.']
3. If you check the fixed dictionary (data_dict.128k.txt) you will notice that '▁perhaps' and '▁enjoyed' are missing, and during the encoding process these tokens will be set to **3**, which corresponds to the "unknown" token (a quick way to confirm this is sketched below).
4. The translations are inaccurate for such cases: "My dog perhaps enjoyed noises." --> (fr) "Mon chien a appris les bruits." (with num_beams = 1)
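One possible way to confirm the unknown-token mapping from step 3, using the Hugging Face `M2M100Tokenizer` (whose vocabulary is built from the same fixed dictionary) rather than the raw fairseq files, so this is an adjacent check rather than the exact reproduction above:
```python
# Adjacent check with the HF tokenizer: pieces absent from the fixed dictionary
# should come back as the unknown id.
from transformers import M2M100Tokenizer

tok = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en")
ids = tok("My dog perhaps enjoyed music.").input_ids
print(tok.convert_ids_to_tokens(ids))
print(tok.convert_tokens_to_ids(["▁perhaps", "▁enjoyed"]), "unk id:", tok.unk_token_id)
```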
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
#### Other tokens that will be set to **3** ("unknown" token) after the encoding:
{"
", "̈", "ঞ", "ઞ", "ଙ", "ଞ", "ඈ", "ၡ", "ầ", "ậ", "ẵ", "↳", "啡", \
"圳", "圾", "垃", "础", "礎", "萄", "雰", "됩", "밝", "얻", "\|01f4fa", "\
\|01f924", "୍ଚ", "୍ଷ", "ຜນ", "င့", "ည့", "ቃሴ", "ይማ", "ដើ", "ឌ្", \
"ほと", "やは", "ろん", "イベ", "ッフ", "パソ", "来越", "特朗", "西班", "бәп", "лөш", \
"үек", "խմբ", "سرے", "یین", "इते", "मीण", "িৎস", "ିରଫ", "සාහ", "คโน", \
"จจุ", "ถาน", "ษัท", "ียญ", "เสร", "ຂວງ", "ງິນ", "ຖິງ", "ລ້ວ", "ວາມ", \
"ຫ່ງ", "ຶ້ນ", "່ວມ", "ໍລິ", "အတြ", "គរប", "ភិវ", "ាណិ", "ូមិ", "េតុ", \
"ំនង", "្ងៃ", "システ", " иҫә", " луѓ", " мот", " հաղ", " ճան", " تجه", \
" هیو", " ټکن", " ڊيس", " તરી", " ରିପ", " മേഖ", "зеге", "шкил", \
"шөөр", "ідэн", "әүге", "әүеш", "میشہ", "ंसिर", "म्मू", "समें", \
"ক্টো", "ামলা", "েস্ক", "ਜਵਾਨ", "ਤੂਬਰ", "ਮੇਟੀ", "ਿਆਰਥ", "ંટણી", \
"துகா", "ಪರ್ಕ", "ಬೈಲ್", "ಾಜಿಕ", "මෙයි", "ญี่ป", "ดาห์", "รกิจ", \
"ริ่ม", "ัพท์", "าศาส", "าะห์", "ูนย์", "ຈົ້າ", "ດນາມ", "ມືອງ", \
"ສບຸກ", "ັກໂນ", "ໍາເນ", "က်နှ", "იტომ", "ំព័រ", "្ញុំ", "្មែរ", \
"្លួន", "្លេស", "かもしれ", " күрһ", " эшмә", " مقای", " उन्ह", " कोशि", \
" नोटि", " मोबा", " নিরা", " દિલ્", " માહિ", " ଓଡ଼ି", " ପଟ୍ଟ", " \
ಅಭ್ಯ", " ಕ್ಷೇ", " ಪೊಲೀ", " ವಾಣಿ", " කිහි", " පැමි", " ტერი", "версі", \
"клопе", "сьәлә", "һынса", "աքանչ", "րաժար", "ונטאג", "ترنتی", \
"ورسٹی", "پیوتر", "یبانی", "ंत्री", "क्राउ", "म्मीद", "তিবার", \
"বাদিক", "ুধবার", "ਹਾਨੂੰ", "ଭିନ୍ନ", "ബരിമല", "ගමැති", "ุงเทพ", \
"้อมูล", "ທະວີຕ", "ໍາລັບ", "თიერთ", "უხედა", "ძლიათ", "ხედრო", \
"លរដ្ឋ", "ីដេអូ", "្បាប់", " հանդի", " אוטוב", " דאנאר", " کارشن", " \
इस्ते", " उत्पा", " प्राथ", " ગુજરા", " അദ്ദേ", " ຂ່າວວ", "न्त्री", \
"सन्धान", "্যান্য", "வடிக்க", "ಮಾರ್ಟ್", "วเตอร์", "ังหวัด", "ວຽດນາມ", \
"აშორის", "ាមេរិក", "័ត៌មាន", "្នំពេញ", " тарафы", " төхөөр", " \
Հայաստ", " الفلسط", " ٹیکنال", " განმავ", "тегория", "улланыу", \
"פטעמבער", "বিদ্যাল", "র্জাতিক", "വനന്തപു", "ເຂົ້າຫາ", " қамтама", " \
ສົ່ງໃຫ້", " ສໍາຫລັບ", " სხვადას", "স্পতিবার", "ີໂອເອລາວ", " \
વ્યાખ્યાઓ", "abaihan", "abogon", " achieve", "ahabog", "ahabogang", \
"ahlekel", " akawnt", "akuada", "alakahle", "almudug", "altachd", " \
amih", "aminosang", " anvä", "aphuno", "arangang", "aroaupenc", " \
artíc", "ashayeli", " Azərbay", "ịbụ", " beispi", " benfas", " \
benveng", " bharra", "bingkil", "ịbụl", "BND", " Bucure", " \
businesses", "cabka", " certainly", " Chatro", " citt", "èhófà", \
"eklase", "emmuz", " enjoyed", "erantany", "erzlech", "eshimi", \
"esterd", "esye", " ettev", "ewé", " eyisi", "faktirè", "fthiwe", " \
giin", " Goom", "haichean", "haps", "hathast", " hemib", \
"heqululini", "holoni", " htt", "ibeat", "ibuli", "iddene", \
"idmatan", "igawas", "igbahin", "Igual", "íklad", "ilangkan", \
"imutangan", "isemane", "iyembre", " iyisig", " Izray", " kabungtor", \
" KAHAPON", "ketho", " kinaug", " któr", " lớ", "laseklase", \
"latego", "Lietuv", " lling", "ləq", " mainta", " mmad", " mopak", " \
mümk", "naqi", " nearly", " nëm", "ởng", " nghiệ", "oblèm", "ófà", " \
okuday", " øn", "ópez", " owesifazana", "owever", " paggam", "Pagh", \
"Paghimo", "panid", " particularly", " perhaps", " Phetol", " \
przecie", " qualc", "qubom", "ərçiv", " reported", " rəhb", "ríguez", \
"ərrü", " sagols", " sebaga", "Sekelo", "selves", " Sga", "sgol", " \
społ", " Srednj", "Sulod", "tatge", "though", "tirè", "tụrụ", \
"ughout", "ugnawan", "ujourd", "ulagway", "upenc", "uregwu", "utube", \
"utubong", "uwega", " Uyas", " véh", " vreemdel", "vrier", "winan", " \
wła", " wouldn", "XÍA", " xüs", "yembre", "ynəl", "ynnag", "yoné", " \
Zagre", "zində", "zköz", "zonia", \
"\[Alpha]\[Rho]ί\[Omicron]\[Upsilon]", " \[CapitalDelta]\
\[CurlyEpsilon]\[Kappa]\[CurlyEpsilon]", " \[CurlyEpsilon]\[Pi]\
\[Alpha]\[Gamma]\[Gamma]\[CurlyEpsilon]\[Lambda]", " \[CapitalIota]\
\[Omicron]\[Upsilon]\[Nu]", \
"\[Mu]\[Beta]\[Rho]ί\[Omicron]\[Upsilon]", " \[CapitalNu]\[Omicron]\
\[CurlyEpsilon]", " \[CapitalOmicron]\[Kappa]\[Tau]\[Omega]\[Beta]", \
" \[CapitalSigma]\[CurlyEpsilon]\[Pi]\[Tau]\[CurlyEpsilon]", " \
\[CapitalSigma]\[CapitalUpsilon]\[CapitalRho]\[CapitalIota]\
\[CapitalZeta]", "\[Tau]\[Omega]\[Beta]"
### Expected behavior
I guess that one would expect the SentencePiece model to produce mainly tokens corresponding to the ones in the fixed dictionary (https://dl.fbaipublicfiles.com/m2m_100/data_dict.128k.txt)
PS: I initially reported this bug to FAIR team, but I haven't received an answer yet. (https://github.com/pytorch/fairseq/issues/3463) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11336/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11335 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11335/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11335/comments | https://api.github.com/repos/huggingface/transformers/issues/11335/events | https://github.com/huggingface/transformers/pull/11335 | 862,733,676 | MDExOlB1bGxSZXF1ZXN0NjE5MjU1NDYz | 11,335 | [GPTNeo] create local attention mask ones | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | MEMBER | null | # What does this PR do?
This PR refactors GPT Neo such that the causal attention mask for the local attention layer is only computed once per batch in the `GPTNeoModel` class and then shared between the layers instead of re-computing it in each layer.
I've verified that all slow tests are passing.
#### Benchmarks
This PR does not change memory/speed for the forward pass
On master
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 0.021
/home/suraj/projects/gpt-neo-c 2 128 0.059
/home/suraj/projects/gpt-neo-c 2 512 0.227
/home/suraj/projects/gpt-neo-c 2 1024 0.464
/home/suraj/projects/gpt-neo-c 4 32 0.033
/home/suraj/projects/gpt-neo-c 4 128 0.113
/home/suraj/projects/gpt-neo-c 4 512 0.449
/home/suraj/projects/gpt-neo-c 4 1024 N/A
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 6136
/home/suraj/projects/gpt-neo-c 2 128 6268
/home/suraj/projects/gpt-neo-c 2 512 6790
/home/suraj/projects/gpt-neo-c 2 1024 7472
/home/suraj/projects/gpt-neo-c 4 32 6204
/home/suraj/projects/gpt-neo-c 4 128 6428
/home/suraj/projects/gpt-neo-c 4 512 7456
/home/suraj/projects/gpt-neo-c 4 1024 N/A
--------------------------------------------------------------------------------
```
On this PR
```
=================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 0.021
/home/suraj/projects/gpt-neo-c 2 128 0.058
/home/suraj/projects/gpt-neo-c 2 512 0.222
/home/suraj/projects/gpt-neo-c 2 1024 0.453
/home/suraj/projects/gpt-neo-c 4 32 0.032
/home/suraj/projects/gpt-neo-c 4 128 0.11
/home/suraj/projects/gpt-neo-c 4 512 0.439
/home/suraj/projects/gpt-neo-c 4 1024 N/A
--------------------------------------------------------------------------------
==================== INFERENCE - MEMORY - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Memory in MB
--------------------------------------------------------------------------------
/home/suraj/projects/gpt-neo-c 2 32 6136
/home/suraj/projects/gpt-neo-c 2 128 6268
/home/suraj/projects/gpt-neo-c 2 512 6790
/home/suraj/projects/gpt-neo-c 2 1024 7476
/home/suraj/projects/gpt-neo-c 4 32 6204
/home/suraj/projects/gpt-neo-c 4 128 6428
/home/suraj/projects/gpt-neo-c 4 512 7460
/home/suraj/projects/gpt-neo-c 4 1024 N/A
--------------------------------------------------------------------------------
```
I did a micro-benchmark using the 125M model for generation and this PR does give a small speed-up when generating longer sequences.
On master
```
%timeit model.generate(**enc, do_sample=False, max_length=512, min_length=512)
4.63 s ± 25.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit -n 10 model.generate(**enc, do_sample=False, max_length=1024, min_length=1024)
9.63 s ± 549 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
On this PR
```
%timeit model.generate(**enc, do_sample=False, max_length=512, min_length=512)
4.25 s ± 189 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit -n 10 model.generate(**enc, do_sample=False, max_length=1024, min_length=1024)
9 s ± 437 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11335/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11335/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11335",
"html_url": "https://github.com/huggingface/transformers/pull/11335",
"diff_url": "https://github.com/huggingface/transformers/pull/11335.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11335.patch",
"merged_at": 1618924064000
} |
https://api.github.com/repos/huggingface/transformers/issues/11334 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11334/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11334/comments | https://api.github.com/repos/huggingface/transformers/issues/11334/events | https://github.com/huggingface/transformers/issues/11334 | 862,654,817 | MDU6SXNzdWU4NjI2NTQ4MTc= | 11,334 | Bug in GPT2ForSequenceClassification | {
"login": "abiolaTresor",
"id": 48957493,
"node_id": "MDQ6VXNlcjQ4OTU3NDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/48957493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abiolaTresor",
"html_url": "https://github.com/abiolaTresor",
"followers_url": "https://api.github.com/users/abiolaTresor/followers",
"following_url": "https://api.github.com/users/abiolaTresor/following{/other_user}",
"gists_url": "https://api.github.com/users/abiolaTresor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abiolaTresor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abiolaTresor/subscriptions",
"organizations_url": "https://api.github.com/users/abiolaTresor/orgs",
"repos_url": "https://api.github.com/users/abiolaTresor/repos",
"events_url": "https://api.github.com/users/abiolaTresor/events{/privacy}",
"received_events_url": "https://api.github.com/users/abiolaTresor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ah, indeed, I think the identifier is incorrect. It needs to be updated to `microsoft/DialogRPT-updown`. Do you want to open a PR to fix this?\r\n\r\nThe issue you mention regarding the random weights is because the `gpt2` checkpoint doesn't have a sequence classification head. This is not the case for the checkpoint mentioned above (`microsoft/DialogRPT-updown`) which does have a sequence classification head, so it will not be reinitialized every different run.",
"@LysandreJik Thanks a lot! Indeed, the identifier `microsoft/DialogRPT-updown` works! But, can I change the code example on hugging face site through a pull request? Since I won't really change some source code, is it a change that must be made in the Transformers repo? The (wrong) code example is as following\r\n\r\n\r\n",
"Yes, the example is created here: https://github.com/huggingface/transformers/blob/5e04d7086803ae4a3892f4082f2835a756592c2c/src/transformers/models/gpt2/modeling_gpt2.py#L1239\r\n\r\nIf you can update this, then it will update it in the docs as soon as we merge it.\r\n\r\nAll the docs visible on the website are in the repository :)",
"Ok, thanks! I'll deal with it then :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
Google Colab
## Problem
When I run the example code provided on the Hugging Face site for GPT2ForSequenceClassification, an error is raised saying that 'microsoft/dialogrpt' is not a checkpoint. See the bug below:

Then I replaced 'microsoft/dialogrpt' with 'gpt2', but when I run the code twice, the logits have different values on each run. After digging deeper, I've seen that the problem occurs when the top linear layer is being built. Its weights seem to be set randomly, so they differ from one run to another. Do you have any way to get rid of this problem? Thanks!
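Based on the resolution in the comments, a corrected version of the example would look roughly like the sketch below, which mirrors the documented usage but with the `microsoft/DialogRPT-updown` identifier (that checkpoint ships a trained sequence classification head, so the logits stay the same across runs):
```python
# Sketch of the corrected docs example; with a checkpoint that has a
# classification head, no layer is randomly re-initialized at load time.
import torch
from transformers import GPT2ForSequenceClassification, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("microsoft/DialogRPT-updown")
model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # identical on every run
```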
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11334/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11334/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11333 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11333/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11333/comments | https://api.github.com/repos/huggingface/transformers/issues/11333/events | https://github.com/huggingface/transformers/issues/11333 | 862,637,347 | MDU6SXNzdWU4NjI2MzczNDc= | 11,333 | Potential bug: Tokens with punctuation are re-tokenized although I've set `is_split_into_words=True` | {
"login": "kstathou",
"id": 9084998,
"node_id": "MDQ6VXNlcjkwODQ5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9084998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kstathou",
"html_url": "https://github.com/kstathou",
"followers_url": "https://api.github.com/users/kstathou/followers",
"following_url": "https://api.github.com/users/kstathou/following{/other_user}",
"gists_url": "https://api.github.com/users/kstathou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kstathou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kstathou/subscriptions",
"organizations_url": "https://api.github.com/users/kstathou/orgs",
"repos_url": "https://api.github.com/users/kstathou/repos",
"events_url": "https://api.github.com/users/kstathou/events{/privacy}",
"received_events_url": "https://api.github.com/users/kstathou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `is_split_into_words` should be set to skip pre-tokenization (splitting on whitespace), not tokenization. This flag should be set to `True` if you have split your text into individual words, and you're now looking to have each word split into tokens and converted to IDs.\r\n\r\nThis seems to be unclear from the documentation, we'll work on improving this.",
"Makes sense, thank you for the clarification! I'd be happy to work on this if needed.",
"Yes, we would welcome such a contribution! I guess we would need to find all occurrences of that `is_split_into_words` parameter and clarify that pre-tokenization isn't tokenization as one could expect. \r\n\r\nWe would gladly welcome a PR!",
"Great, I will work on this next week! Thanks again for the help!"
] | 1,618 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: Hi @LysandreJik!
## Information
I am working on a token classification task where my input is in the following format:
```
texts = [['Foo', 'bar', '.'], ['Hello', 'world', '.']]
tags = [['B-ENT', 'I-ENT', 'O'], ['O', 'O', 'O']]
```
- **Model:** `bert-large-cased`
- **Tokenizer:** `BertTokenizerFast`. I align tokens with their tags as in this [tutorial](https://huggingface.co/transformers/custom_datasets.html#token-classification-with-w-nut-emerging-entities).
## Problem
Although I've set `is_split_into_words=True` in the tokenizer, tokens containing punctuation are tokenized.
## To reproduce
I reproduced the issue in this [Google Colab notebook](https://colab.research.google.com/drive/1mNJ-T6kOaC5_a8C3T_qCXBNFP5WYC5oi?usp=sharing).
## Expected behavior
Since I've set `is_split_into_words=True`, I would expect the tokenizer to keep the tokens as they are and split them into subwords with `##`. For example, if a token is `'foo(bar'`, I would expect it to stay that way, instead of being split into `['foo', '(', 'bar']`.
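A minimal snippet isolating the behaviour (the checkpoint matches the environment above; per the maintainer reply in the comments, `is_split_into_words=True` only skips the whitespace pre-tokenization step, so punctuation is still separated during tokenization):
```python
# Minimal snippet: even with is_split_into_words=True, punctuation inside a
# pre-split word is still separated before WordPiece runs.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-large-cased")
encoding = tokenizer(["foo(bar", "."], is_split_into_words=True)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))
# the '(' is split out of 'foo(bar' even though the input was already pre-split
```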
Thanks a lot for reading the issue! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11333/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11333/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11332 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11332/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11332/comments | https://api.github.com/repos/huggingface/transformers/issues/11332/events | https://github.com/huggingface/transformers/issues/11332 | 862,557,425 | MDU6SXNzdWU4NjI1NTc0MjU= | 11,332 | batch_encode_plus set a sort parameter | {
"login": "yuys0602",
"id": 80952122,
"node_id": "MDQ6VXNlcjgwOTUyMTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/80952122?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuys0602",
"html_url": "https://github.com/yuys0602",
"followers_url": "https://api.github.com/users/yuys0602/followers",
"following_url": "https://api.github.com/users/yuys0602/following{/other_user}",
"gists_url": "https://api.github.com/users/yuys0602/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuys0602/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuys0602/subscriptions",
"organizations_url": "https://api.github.com/users/yuys0602/orgs",
"repos_url": "https://api.github.com/users/yuys0602/repos",
"events_url": "https://api.github.com/users/yuys0602/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuys0602/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorted by length?\r\nHow this helps you?",
"> Sorted by length?\r\n> How this helps you?\r\n\r\nfor next lstm layer",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
When using `tokenizer.batch_encode_plus`, I would like to get the results sorted (by length), so I hope a sort parameter can be added.
Such as: `tokenizer.batch_encode_plus(..., sorted=True)`
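In the meantime, a possible workaround (a sketch; the checkpoint is illustrative) is to sort the encoded batch by length manually and keep the permutation so the outputs of the downstream LSTM can be restored to the original order:
```python
# Possible workaround until a sort option exists: sort encodings by length
# yourself and remember the permutation to undo it later.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
texts = ["a short one", "a considerably longer example sentence", "tiny"]

enc = tokenizer.batch_encode_plus(texts)
order = sorted(range(len(texts)), key=lambda i: len(enc["input_ids"][i]), reverse=True)
sorted_input_ids = [enc["input_ids"][i] for i in order]
# order[k] is the original index of the k-th (longest-first) example,
# which can be used later to unsort the LSTM outputs.
```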
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11332/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11332/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11331 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11331/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11331/comments | https://api.github.com/repos/huggingface/transformers/issues/11331/events | https://github.com/huggingface/transformers/pull/11331 | 862,546,878 | MDExOlB1bGxSZXF1ZXN0NjE5MTAwMzM5 | 11,331 | [Generate] Remove outdated code | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging (cc @LysandreJik, @sgugger)"
] | 1,618 | 1,618 | 1,618 | MEMBER | null | # What does this PR do?
This PR refactors `greedy_search()` and `sample()` by removing old code and improving some comments.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11331/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11331/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11331",
"html_url": "https://github.com/huggingface/transformers/pull/11331",
"diff_url": "https://github.com/huggingface/transformers/pull/11331.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11331.patch",
"merged_at": 1618920962000
} |
https://api.github.com/repos/huggingface/transformers/issues/11330 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11330/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11330/comments | https://api.github.com/repos/huggingface/transformers/issues/11330/events | https://github.com/huggingface/transformers/pull/11330 | 862,476,688 | MDExOlB1bGxSZXF1ZXN0NjE5MDQyMTA5 | 11,330 | Correcting comments in T5Stack to reflect correct tuple order | {
"login": "talkhaldi",
"id": 13479672,
"node_id": "MDQ6VXNlcjEzNDc5Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/13479672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talkhaldi",
"html_url": "https://github.com/talkhaldi",
"followers_url": "https://api.github.com/users/talkhaldi/followers",
"following_url": "https://api.github.com/users/talkhaldi/following{/other_user}",
"gists_url": "https://api.github.com/users/talkhaldi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talkhaldi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talkhaldi/subscriptions",
"organizations_url": "https://api.github.com/users/talkhaldi/orgs",
"repos_url": "https://api.github.com/users/talkhaldi/repos",
"events_url": "https://api.github.com/users/talkhaldi/events{/privacy}",
"received_events_url": "https://api.github.com/users/talkhaldi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, @talkhaldi thanks for correcting the other comment!\r\n\r\nThere is one code quality check failing, could run `make style && make quality` and push the code again?",
"Hi!\r\n\r\nThanks for your comment! The command you mentioned was requiring a lot of dependencies, some of which conflicted with the venv I was using, so instead of creating a new venv to run it, I removed the extra trailing space which technically was the only extra thing, and hoped it would work, but seems not ^^' \r\n\r\nI created a new venv and running the command now...",
"Hi @patil-suraj,\r\n\r\nSo it seems make style works well, but I get this error with make quality:\r\n\r\n```\r\n> make quality\r\nblack --check� �amples tests src utils\r\n839 files would be left unchanged.\r\nisort --check-only examples tests src utils\r\npython utils/custom_init_isort.py --check_only\r\nflake8 examples tests src utils\r\nmake extra_quality_checks\r\nmake[1]: Entering directory '/mnt/berry/home/alkhaldi/contribcode/transformers'\r\npython utils/check_copies.py\r\npython utils/check_table.py\r\npython utils/check_dummies.py\r\npython utils/check_repo.py\r\nTraceback (most recent call last):\r\n File \"utils/check_repo.py\", line 22, in <module>\r\n from transformers.models.auto import get_values\r\nImportError: cannot import name 'get_values' from 'transformers.models.auto' (unknown location)\r\nmake[1]: *** [Makefile:33: extra_quality_checks] Error 1\r\nmake[1]: Leaving directory '/mnt/berry/home/alkhaldi/contribcode/transformers'\r\nmake: *** [Makefile:42: quality] Error 2\r\n```\r\n\r\nDo you have an idea of what's wrong?",
"hi @talkhaldi\r\n\r\nplease make sure you have all the dev deps before running make\r\n\r\nto do that you could run `pip install -e \".[dev]\"` from the root of the repo.",
"Hi @patil-suraj,\r\n\r\nI have applied that command successfully, but it seems the tests are still failing :o What could be the problem?\r\n\r\nNote: That command changed many files, but I only added modeling_t5.py to the commit. Should I include the changes of all files even though I haven't changed them manually?",
"Hi @talkhaldi \r\n\r\nI think this is because we upgraded the version of `black`. Could you rebase your branch with master and then push again ?",
"Hi @patil-suraj,\r\n\r\nThe check_code_quality test passes now, but another test fails. Can you please tell me what's wrong?"
] | 1,618 | 1,622 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
In order to match the actual order (line 513 and 516, and as accessed in 968), I've changed the order mentioned in comments L962 and L966-967.
## Before submitting
- [ x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. (Seems like @patrickvonplaten, @patil-suraj are suggested for T5)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11330/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11330/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11330",
"html_url": "https://github.com/huggingface/transformers/pull/11330",
"diff_url": "https://github.com/huggingface/transformers/pull/11330.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11330.patch",
"merged_at": 1622034443000
} |
https://api.github.com/repos/huggingface/transformers/issues/11329 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11329/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11329/comments | https://api.github.com/repos/huggingface/transformers/issues/11329/events | https://github.com/huggingface/transformers/pull/11329 | 862,358,721 | MDExOlB1bGxSZXF1ZXN0NjE4OTQ1NDgx | 11,329 | Honor contributors to models | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for this update!\r\nI guess my huggingface username is [`camembert`](https://huggingface.co/camembert) ...?",
"> # What does this PR do?\r\n> This PR mentions by HF username the person who added a given model in each doc file, and updates the template so this keeps being consistently done.\r\n> \r\n> A few persons are missing, tagging there below. If you are one of the persons tagged, it would be great if you could create a HF account and report your HF username here so we can properly attribute the model addition to you :-)\r\n> \r\n> * Bert japanese: @singletongue\r\n> * CamemBert: @louismartin\r\n> * DeBerta (v1 and v2): @BigBird01\r\n => Pengcheng He: @BigBird01\r\n> * ProphetNet and XLM-ProphetNet: @qiweizhen\r\n\r\nThanks I just replied my name and account inline.",
"Hi @BigBird01, there is no huggingface account with the user name BigBird01 (I'm not talking about the GitHub account but the account on [huggingface.co](https://huggingface.co/)).",
"Thanks! It’s DeBERTa😊, DeBERTa (Pengcheng He) (huggingface.co)<https://huggingface.co/DeBERTa>\r\n\r\nThanks!\r\nPengcheng\r\n\r\n",
"Thank you for the mentioning!\r\nThe HF account for `bert-japanese` models is [`cl-tohoku`](https://huggingface.co/cl-tohoku).",
"Failure is spurious, so merging. @qiweizhen if you create (or already have) a HF account, I can add you in a followup PR."
] | 1,618 | 1,619 | 1,619 | COLLABORATOR | null | # What does this PR do?
This PR mentions by HF username the person who added a given model in each doc file, and updates the template so this keeps being consistently done.
A few persons are missing; I'm tagging them below. If you are one of the persons tagged, it would be great if you could create a HF account and report your HF username here so we can properly attribute the model addition to you :-)
- Bert japanese: @singletongue
- CamemBert: @louismartin
- DeBerta (v1 and v2): @BigBird01
- ProphetNet and XLM-ProphetNet: @qiweizhen
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11329/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11329/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11329",
"html_url": "https://github.com/huggingface/transformers/pull/11329",
"diff_url": "https://github.com/huggingface/transformers/pull/11329.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11329.patch",
"merged_at": 1619012847000
} |
https://api.github.com/repos/huggingface/transformers/issues/11328 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11328/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11328/comments | https://api.github.com/repos/huggingface/transformers/issues/11328/events | https://github.com/huggingface/transformers/pull/11328 | 862,238,321 | MDExOlB1bGxSZXF1ZXN0NjE4ODMzNzIy | 11,328 | Trainer push to hub | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Side note, the torchhub test uses the master branch to check which dependency to install (see [here](https://github.com/huggingface/transformers/blob/c0328a6c263494fff527fac7288faa627e3267e0/.github/workflows/github-torch-hub.yml#L36)), which means it can't pass until this PR is merged. Unless I missed something!",
"This is awesome - very much looking forward to have this feature! \r\n\r\nOne thing I'd love to discuss are that they are 4 input arguments to to the `push_to_hub(...)` method - do we really need all of those? \r\n\r\n1.) I actually wouldn't put `save_directory` as an argument to `push_to_hub` because I think only the object on which `push_to_hub(...)` is called should be uploaded and not files that are unrelated to the object on which `push_to_hub()` is called. \r\n\r\n*E.g.*: If I call `model.save_pretrained(\"name/of/dir\")` the model and config is saved in this dir => I would therefore expect `model.push_to_hub(\"name_of_repo\")` to upload only model to the hub. IMO, this is more intuitive and more concise with how we design `save_pretrained(...)`. I don't really understand why a model object should be responsible to upload files to the hub that are not related to the model itself. Similar to how we call `model.save_pretrained(...)` and `tokenizer.save_pretrained(...)` to save everything we need, we should have to call `model.push_to_hub(...)` and `tokenizer.push_to_hub(...)` IMO.\r\n\r\n2.) Also, I think it would be a bit nicer if we could reduce the args: `repo_name`, `repo_url`, `organization` to just `repo_name` and `organization`. I would make `repo_name` mandatory (by default it will be pushed under one's namespace) and `organization` optionally. Do we really need three args to define the repo URL? What do you think?\r\n\r\n\r\nI feel quite strongly about 1.) to have consistency in the library, less strongly about 2.)"
] | 1,618 | 1,619 | 1,619 | COLLABORATOR | null | # What does this PR do?
This PR begins the work to completely integrate the `Trainer` API with the [model hub](https://huggingface.co/models). It introduces a new mixin `PushToHubMixin` that implements the `push_to_hub` method. That mixin is then subclassed in all objects have a `save_pretrained` method: config, tokenizers, models.
This enables the current API to create a new repo and push the model to it:
```
# Will create and push to https://huggingface.co/username/model_id/
model.save_pretrained(model_id, push_to_hub=True)
```
This requires the user to have a valid token, for instance generated by `transformers-cli login` (a useful error message is displayed if that is not the case).
If the repo already exists and the user just wants to update the weights, they can do:
```
model.save_pretrained(model_id, push_to_hub=True, repo_url=my_repo_url)
```
This will work as long as the git credentials of the user are stored locally. If not, a token may need to be passed either with `use_auth_token=True` or `use_auth_token=str_token`.
This also works to update the config or the tokenizer if there is a fix needed:
```
config = AutoConfig.from_pretrained(repo_id)
config.that_arg = fix
config.save_pretrained(local_folder, push_to_hub=True, repo_url=my_repo_url)
```
This PR also adds `Trainer.push_model_to_hub`, which can be called after training to push the underlying model to the hub. This is controlled by a new `--push_to_hub` training argument; the method is called in every example script using the Trainer, so people can start interacting with it.
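A hedged sketch of that Trainer-side flow; the method and argument names follow this PR's description and may differ in later releases, and the tiny model and dataset slice are only there to keep the example self-contained:
```python
# Sketch: fine-tune briefly with the Trainer, then push the result to the hub.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

dataset = load_dataset("glue", "mrpc", split="train[:32]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True, padding="max_length"),
    batched=True,
)

args = TrainingArguments(output_dir="mrpc-demo", num_train_epochs=1, push_to_hub=True)
trainer = Trainer(model=model, args=args, train_dataset=dataset)
trainer.train()
trainer.push_model_to_hub()  # uploads the trained model under the logged-in user's namespace
```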
Follow-up PRs scheduled are:
- have the Trainer automatically generate a sensible model card that can be pushed with the rest
- add the option to upload not only the final model to the hub but all checkpoints, using the versioning system to make it easy to navigate.
In terms of tests, this PR also adds a new environment variable watched by the Transformers library to decide which base URL to use in all things `from_pretrained`, which allows us to check that the things we push to the staging env actually work with the `from_pretrained` methods. The `tests_git_lfs` job in CircleCI is renamed `tests_hub` and activates that env variable, then runs all the tests marked with `is_staging_test` (which basically push things to the hub and check they can be used with `from_pretrained`). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11328/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11328/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11328",
"html_url": "https://github.com/huggingface/transformers/pull/11328",
"diff_url": "https://github.com/huggingface/transformers/pull/11328.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11328.patch",
"merged_at": 1619183858000
} |
https://api.github.com/repos/huggingface/transformers/issues/11327 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11327/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11327/comments | https://api.github.com/repos/huggingface/transformers/issues/11327/events | https://github.com/huggingface/transformers/issues/11327 | 861,988,941 | MDU6SXNzdWU4NjE5ODg5NDE= | 11,327 | run_ner.py example MobileBERT FP16 returns nan loss | {
"login": "tblattner",
"id": 10550807,
"node_id": "MDQ6VXNlcjEwNTUwODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/10550807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tblattner",
"html_url": "https://github.com/tblattner",
"followers_url": "https://api.github.com/users/tblattner/followers",
"following_url": "https://api.github.com/users/tblattner/following{/other_user}",
"gists_url": "https://api.github.com/users/tblattner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tblattner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tblattner/subscriptions",
"organizations_url": "https://api.github.com/users/tblattner/orgs",
"repos_url": "https://api.github.com/users/tblattner/repos",
"events_url": "https://api.github.com/users/tblattner/events{/privacy}",
"received_events_url": "https://api.github.com/users/tblattner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It looks like mobileBERT was pretrained on TPUs using bfloat16, which then often result in NaNs when using FP16 for further fine-tuning (see #11076 or #10956). You'll be best off training in FP32 or use another model compatible with FP16.",
"Makes sense! That's interesting that affects the training on GPUs! I will pass this info on to my colleague who deals with reproducibility! And for now I shall stick with FP32 when fine-tuning the MobileBERT model!\r\n\r\nMany thanks for the reply!",
"> You'll be best off training in FP32 or use another model compatible with FP16.\r\n\r\nAnd at some point we should also add `--bf16` mode to Trainer, for those who want to do finetuning and inference on hardware that supports it . e.g. high-end Ampere RTX-3090 and A100 should already support it, and of course TPU v2+. \r\n\r\nDoes it make sense?\r\n\r\nFYI, `bf16` AMP is being discussed here: https://github.com/pytorch/pytorch/issues/55374"
] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-5.8.0-44-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes (RTX 2080 Ti)
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @stas00 @patil-suraj
## Information
Model I am using: MobileBERT
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name) conll2003
* [ ] my own task or dataset: (give details below)
## To reproduce
Using the example: https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py
Steps to reproduce the behavior:
1. Add `training_args.fp16 = True` to `main()` after initializing `training_args` (a sketch is given at the end of this section)
2. Pass the following parameters to `run_ner.py`:
```
--model_name_or_path
google/mobilebert-uncased
--dataset_name
conll2003
--output_dir
/path/to/output
--do_eval
--do_train
--do_predict
```
3. The loss will return NaN.
I first observed NaNs popping up from the encoder within the forward call in the `MobileBertModel` class:
https://huggingface.co/transformers/_modules/transformers/modeling_mobilebert.html
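For reference, a minimal sketch of the modification from step 1 (the parsing line mirrors the example script; the exact placement inside `main()` is an assumption):
```
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
training_args.fp16 = True  # force mixed precision for this reproduction
```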
## Expected behavior
When running without FP16, the model trains as expected. Other models that I have tested did not have this issue and converged well with FP16 enabled: RoBERTa, BERT, and DistilBERT. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11327/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11326 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11326/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11326/comments | https://api.github.com/repos/huggingface/transformers/issues/11326/events | https://github.com/huggingface/transformers/issues/11326 | 861,954,468 | MDU6SXNzdWU4NjE5NTQ0Njg= | 11,326 | Parameter missing from state_dict of optimizer when loading from checkpoint | {
"login": "ameet-1997",
"id": 18645407,
"node_id": "MDQ6VXNlcjE4NjQ1NDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ameet-1997",
"html_url": "https://github.com/ameet-1997",
"followers_url": "https://api.github.com/users/ameet-1997/followers",
"following_url": "https://api.github.com/users/ameet-1997/following{/other_user}",
"gists_url": "https://api.github.com/users/ameet-1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ameet-1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ameet-1997/subscriptions",
"organizations_url": "https://api.github.com/users/ameet-1997/orgs",
"repos_url": "https://api.github.com/users/ameet-1997/repos",
"events_url": "https://api.github.com/users/ameet-1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/ameet-1997/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you upgrade to the latest version of Transformers and see if the problem persists? I have tried to reproduce but it all works fine on my side.",
"I have the same issue.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I'm facing the same problem with `4.10.0.dev0`. @ameet-1997 could you find a solution for this?",
"Looks like you already found the solution, thanks for that!\r\nI wasn't able to fix it earlier."
] | 1,618 | 1,628 | 1,622 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `'4.2.0dev0'`
- Platform: Debian
- Python version: `Python 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21)`
- PyTorch version (GPU?): `torch-xla-1.6`
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
- Using TPUs
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ X] the official example scripts: (give details below)
* [ X] my own modified scripts: (give details below)
The tasks I am working on is:
* MLM
## To reproduce
You need to load a model from a checkpoint saved on the TPU.
Steps to reproduce the behavior:
1. Run `run_mlm.py` on any dataset and store a checkpoint. Then load from that checkpoint using the following command.
2. `python transformers/examples/language-modeling/run_mlm.py --warmup_steps 10000 --learning_rate 1e-4 --save_steps 100000 --max_seq_length 512 --logging_steps 50 --overwrite_output_dir --model_name_or_path ../../bucket/model_outputs/en/inverted_order_500K/mlm/checkpoint-10000 --do_train --do_eval --max_steps 500000 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --train_file ../../bucket/pretrain_data/en/valid.txt --validation_file ../../bucket/pretrain_data/en/valid.txt --output_dir ../../bucket/model_outputs/en/inverted_order_500K/mlm`
3. OR, use this `nohup python transformers/examples/xla_spawn.py --num_cores 8 transformers/examples/language-modeling/run_mlm.py --warmup_steps 10000 --learning_rate 1e-4 --save_steps 100000 --max_seq_length 512 --logging_steps 50 --overwrite_output_dir --model_name_or_path ../../bucket/model_outputs/en/inverted_order_500K/mlm/checkpoint-10000 --do_train --do_eval --max_steps 500000 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --train_file ../../bucket/pretrain_data/en/valid.txt --validation_file ../../bucket/pretrain_data/en/valid.txt --output_dir ../../bucket/model_outputs/en/inverted_order_500K/mlm`
## Error trace
This error trace uses a modified `Trainer`, but the issue occurs with the original `Trainer` as well.
> Traceback (most recent call last):
> File "transformers/examples/xla_spawn.py", line 85, in <module>
> main()
> File "transformers/examples/xla_spawn.py", line 81, in main
> xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
> File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 292, in spawn
> _start_fn(0, pf_cfg, fn, args)
> File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 229, in _start_fn
> fn(gindex, *args)
> File "/home/asd/source_code/Multilingual/transformers/examples/language-modeling/run_mlm_synthetic.py", line 486, in _mp_fn
> main()
> File "/home/asd/source_code/Multilingual/transformers/examples/language-modeling/run_mlm_synthetic.py", line 460, in main
> trainer.train(model_path=model_path)
> File "/home/asd/source_code/Multilingual/transformers/src/transformers/trainer_word_modifications.py", line 666, in train
> self._load_optimizer_and_scheduler(model_path)
> File "/home/asd/source_code/Multilingual/transformers/src/transformers/trainer_word_modifications.py", line 1003, in _load_optimizer_and_scheduler
> self.optimizer.load_state_dict(optimizer_state)
> File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch/optim/optimizer.py", line 123, in load_state_dict
> raise ValueError("loaded state dict contains a parameter group "
> ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Where is the issue?
I've isolated the issue to a missing parameter in `optimizer_state['state']`: for some reason, index `136` is missing from `optimizer_state['state'].keys()`.
The following is the debugger output in the function `_load_optimizer_and_scheduler`, just before the line `self.optimizer.load_state_dict(optimizer_state)` in the `if is_torch_tpu_available()` block.
``` python
>>> optimizer_state['param_groups']
[{'weight_decay': 0.0, 'lr': 0.0001, 'betas': [0.9, 0.999], 'eps': 1e-08, 'correct_bias': True, 'initial_lr': 0.0001, 'params': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53]}, {'weight_decay': 0.0, 'lr': 0.0001, 'betas': [0.9, 0.999], 'eps': 1e-08, 'correct_bias': True, 'initial_lr': 0.0001, 'params': [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139]}]
>>> optimizer_state['state'].keys()
dict_keys([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 137, 138, 139])
```
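A quick way to reproduce this check directly from the saved checkpoint (sketch; the path is a placeholder based on the command above):
``` python
import torch

optimizer_state = torch.load("checkpoint-10000/optimizer.pt", map_location="cpu")
referenced = {p for group in optimizer_state["param_groups"] for p in group["params"]}
present = set(optimizer_state["state"].keys())
# prints the parameter indices listed in param_groups but absent from the state, e.g. [136]
print("missing from state:", sorted(referenced - present))
```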
## Expected behavior
Load the checkpoint correctly.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11326/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11325 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11325/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11325/comments | https://api.github.com/repos/huggingface/transformers/issues/11325/events | https://github.com/huggingface/transformers/pull/11325 | 861,899,431 | MDExOlB1bGxSZXF1ZXN0NjE4NTE3ODc4 | 11,325 | Enable added tokens | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger, please merge if you're happy with the updated PR :)"
] | 1,618 | 1,620 | 1,620 | MEMBER | null | Currently, the only way to manage adding `AddedToken`s to a tokenizer is via the `tokenizer.add_special_tokens` or `tokenizer.add_tokens` methods; it should also be enabled from the initialization.
Previously this was impossible:
```py
from transformers import GPT2Tokenizer
from transformers.tokenization_utils_base import AddedToken  # imports added so the snippet is self-contained

special_tokens = [AddedToken('<special>')]
GPT2Tokenizer.from_pretrained('gpt2', additional_special_tokens=special_tokens)
```
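(For comparison, the existing workaround goes through `add_special_tokens` after loading — a rough sketch:)
```py
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tokenizer.add_special_tokens({'additional_special_tokens': [AddedToken('<special>')]})
```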
This PR enables that functionality at initialization and adds a test. It also fixes all tokenizers that were ill-configured for that purpose. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11325/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11325",
"html_url": "https://github.com/huggingface/transformers/pull/11325",
"diff_url": "https://github.com/huggingface/transformers/pull/11325.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11325.patch",
"merged_at": 1620130438000
} |
https://api.github.com/repos/huggingface/transformers/issues/11324 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11324/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11324/comments | https://api.github.com/repos/huggingface/transformers/issues/11324/events | https://github.com/huggingface/transformers/pull/11324 | 861,867,132 | MDExOlB1bGxSZXF1ZXN0NjE4NDg4MTAy | 11,324 | [Trainer] Add a progress bar for batches skipped | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | COLLABORATOR | null | # What does this PR do?
As suggested in #11284, this PR adds a progress bar for the batches skipped when resuming training from a checkpoint as well as a comment telling the user how to deactivate that behavior if they find it too long. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11324/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11324/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11324",
"html_url": "https://github.com/huggingface/transformers/pull/11324",
"diff_url": "https://github.com/huggingface/transformers/pull/11324.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11324.patch",
"merged_at": 1618873492000
} |
https://api.github.com/repos/huggingface/transformers/issues/11323 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11323/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11323/comments | https://api.github.com/repos/huggingface/transformers/issues/11323/events | https://github.com/huggingface/transformers/issues/11323 | 861,744,931 | MDU6SXNzdWU4NjE3NDQ5MzE= | 11,323 | Bug in trainer: substantially different results from restarting from a checkpoint and without | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You will only have perfectly reproducible results using checkpointing if the only randomness comes from the shuffling in your data (this is enforced by the CI). The way this is programmed inside the Trainer is to go through each epoch before the current one (which triggers the random shuffling) and then each batch (which puts you in the same position as before the checkpoint).\r\n\r\nSince your results differ slightly, it looks like there are other random calls in your training code, which you did not share. There is no way to have the exact same results while resuming from a checkpoint if this is the case.",
"Hi @sgugger thanks for the reply, I do not have any other randomness in my codes, and I am using run_seq2seq.py codes to train t5 models on mrpc dataset, without modifications, I really appreciate your help on this issue as this is really crucial for me to have this working thanks a lot \r\n\r\nI initialize only the weights randomly, but I assume huggnigface well taking care of setting seeds, and there is really no other randomness ",
"@sgugger I confirm also training the vanilla t5 have the same issue exists:\r\nHere is the run for t5-base for 100 steps:\r\n\r\n```\r\n{'loss': 6.1045, 'learning_rate': 6e-07, 'epoch': 0.02} \r\n 0%| | 10/60000 [00:06<10:25:12, 1.60it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.44it/s]\r\n{'mrpc_en_eval_loss': 6.924696445465088, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.3137254901960786, 'mrpc_en_eval_runtime': 1.9287, 'mrpc_en_eval_samples_per_second': 105.771, 'epoch': 0.22} \r\n{'mrpc_en_eval_loss': 6.924696445465088, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.3137254901960786, 'mrpc_en_eval_runtime': 1.9287, 'mrpc_en_eval_samples_per_second': 105.771, 'epoch': 0.22, 'eval_average_metrics': 0.0} \r\n 0%| | 20/60000 [00:27<13:37:00, 1.22it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.49it/s]\r\n{'mrpc_en_eval_loss': 5.22016716003418, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.764705882352941, 'mrpc_en_eval_runtime': 1.8761, 'mrpc_en_eval_samples_per_second': 108.737, 'epoch': 0.43} \r\n{'mrpc_en_eval_loss': 5.22016716003418, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.764705882352941, 'mrpc_en_eval_runtime': 1.8761, 'mrpc_en_eval_samples_per_second': 108.737, 'epoch': 0.43, 'eval_average_metrics': 0.0} \r\n 0%| | 30/60000 [00:47<12:58:53, 1.28it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.37it/s]\r\n{'mrpc_en_eval_loss': 1.3517154455184937, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 18.137254901960784, 'mrpc_en_eval_gen_len': 3.2205882352941178, 'mrpc_en_eval_runtime': 1.9678, 'mrpc_en_eval_samples_per_second': 103.67, 'epoch': 0.65} \r\n{'mrpc_en_eval_loss': 1.3517154455184937, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 18.137254901960784, 'mrpc_en_eval_gen_len': 3.2205882352941178, 'mrpc_en_eval_runtime': 1.9678, 'mrpc_en_eval_samples_per_second': 103.67, 'epoch': 0.65, 'eval_average_metrics': 9.068627450980392} \r\n 0%| | 40/60000 [01:08<13:00:06, 1.28it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.62it/s]\r\n{'mrpc_en_eval_loss': 0.4487058222293854, 'mrpc_en_eval_f1': 81.3953488372093, 'mrpc_en_eval_accuracy': 68.62745098039215, 'mrpc_en_eval_gen_len': 2.0, 'mrpc_en_eval_runtime': 1.0261, 'mrpc_en_eval_samples_per_second': 198.811, 'epoch': 0.87} \r\n{'mrpc_en_eval_loss': 0.4487058222293854, 'mrpc_en_eval_f1': 81.3953488372093, 'mrpc_en_eval_accuracy': 68.62745098039215, 'mrpc_en_eval_gen_len': 2.0, 'mrpc_en_eval_runtime': 1.0261, 'mrpc_en_eval_samples_per_second': 198.811, 'epoch': 0.87, 'eval_average_metrics': 75.01139990880073} \r\n 0%| | 50/60000 [01:27<12:31:06, 
1.33it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.72it/s]\r\n{'mrpc_en_eval_loss': 0.25695744156837463, 'mrpc_en_eval_f1': 83.79204892966361, 'mrpc_en_eval_accuracy': 74.01960784313727, 'mrpc_en_eval_gen_len': 2.0833333333333335, 'mrpc_en_eval_runtime': 1.2653, 'mrpc_en_eval_samples_per_second': 161.228, 'epoch': 1.09} \r\n{'mrpc_en_eval_loss': 0.25695744156837463, 'mrpc_en_eval_f1': 83.79204892966361, 'mrpc_en_eval_accuracy': 74.01960784313727, 'mrpc_en_eval_gen_len': 2.0833333333333335, 'mrpc_en_eval_runtime': 1.2653, 'mrpc_en_eval_samples_per_second': 161.228, 'epoch': 1.09, 'eval_average_metrics': 78.90582838640043} \r\n 0%|▏ | 60/60000 [01:47<12:36:18, 1.32it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.29it/s]\r\n{'mrpc_en_eval_loss': 0.27573078870773315, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.1521, 'mrpc_en_eval_samples_per_second': 177.063, 'epoch': 1.3} \r\n{'mrpc_en_eval_loss': 0.27573078870773315, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.1521, 'mrpc_en_eval_samples_per_second': 177.063, 'epoch': 1.3, 'eval_average_metrics': 76.10473808291644} \r\n 0%|▏ | 70/60000 [02:09<13:15:00, 1.26it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.75it/s]\r\n{'mrpc_en_eval_loss': 0.16758881509304047, 'mrpc_en_eval_f1': 87.04318936877075, 'mrpc_en_eval_accuracy': 80.88235294117648, 'mrpc_en_eval_gen_len': 2.2107843137254903, 'mrpc_en_eval_runtime': 1.2665, 'mrpc_en_eval_samples_per_second': 161.075, 'epoch': 1.52} \r\n{'mrpc_en_eval_loss': 0.16758881509304047, 'mrpc_en_eval_f1': 87.04318936877075, 'mrpc_en_eval_accuracy': 80.88235294117648, 'mrpc_en_eval_gen_len': 2.2107843137254903, 'mrpc_en_eval_runtime': 1.2665, 'mrpc_en_eval_samples_per_second': 161.075, 'epoch': 1.52, 'eval_average_metrics': 83.96277115497361} \r\n 0%|▏ | 80/60000 [02:30<13:18:49, 1.25it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.64it/s]\r\n{'mrpc_en_eval_loss': 0.1627584546804428, 'mrpc_en_eval_f1': 89.86486486486486, 'mrpc_en_eval_accuracy': 85.29411764705883, 'mrpc_en_eval_gen_len': 2.235294117647059, 'mrpc_en_eval_runtime': 1.2734, 'mrpc_en_eval_samples_per_second': 160.198, 'epoch': 1.74} \r\n{'mrpc_en_eval_loss': 0.1627584546804428, 'mrpc_en_eval_f1': 89.86486486486486, 'mrpc_en_eval_accuracy': 85.29411764705883, 'mrpc_en_eval_gen_len': 2.235294117647059, 'mrpc_en_eval_runtime': 1.2734, 'mrpc_en_eval_samples_per_second': 160.198, 'epoch': 1.74, 'eval_average_metrics': 87.57949125596184} \r\n 0%|▏ | 90/60000 [02:50<12:35:38, 1.32it/s]***** Running 
Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.71it/s]\r\n{'mrpc_en_eval_loss': 0.178583025932312, 'mrpc_en_eval_f1': 90.78014184397163, 'mrpc_en_eval_accuracy': 87.25490196078431, 'mrpc_en_eval_gen_len': 2.303921568627451, 'mrpc_en_eval_runtime': 1.2507, 'mrpc_en_eval_samples_per_second': 163.108, 'epoch': 1.96} \r\n{'mrpc_en_eval_loss': 0.178583025932312, 'mrpc_en_eval_f1': 90.78014184397163, 'mrpc_en_eval_accuracy': 87.25490196078431, 'mrpc_en_eval_gen_len': 2.303921568627451, 'mrpc_en_eval_runtime': 1.2507, 'mrpc_en_eval_samples_per_second': 163.108, 'epoch': 1.96, 'eval_average_metrics': 89.01752190237798} \r\n 0%|▏ | 100/60000 [03:09<12:29:36, 1.33it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.70it/s]\r\n{'mrpc_en_eval_loss': 0.18296584486961365, 'mrpc_en_eval_f1': 88.72727272727272, 'mrpc_en_eval_accuracy': 84.80392156862744, 'mrpc_en_eval_gen_len': 2.338235294117647, 'mrpc_en_eval_runtime': 1.2762, 'mrpc_en_eval_samples_per_second': 159.845, 'epoch': 2.17} \r\n{'mrpc_en_eval_loss': 0.18296584486961365, 'mrpc_en_eval_f1': 88.72727272727272, 'mrpc_en_eval_accuracy': 84.80392156862744, 'mrpc_en_eval_gen_len': 2.338235294117647, 'mrpc_en_eval_runtime': 1.2762, 'mrpc_en_eval_samples_per_second': 159.845, 'epoch': 2.17, 'eval_average_metrics': 86.76559714795007} \r\n\r\n```\r\n\r\nNow lets see the results of t5-base after resuming from step = 60\r\n```\r\n 0%|▏ | 60/60000 [00:06<9:21:55, 1.78it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 4.00it/s]\r\n{'mrpc_en_eval_loss': 0.2794328033924103, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.2224, 'mrpc_en_eval_samples_per_second': 166.887, 'epoch': 1.3} \r\n{'mrpc_en_eval_loss': 0.2794328033924103, 'mrpc_en_eval_f1': 82.11143695014663, 'mrpc_en_eval_accuracy': 70.09803921568627, 'mrpc_en_eval_gen_len': 2.014705882352941, 'mrpc_en_eval_runtime': 1.2224, 'mrpc_en_eval_samples_per_second': 166.887, 'epoch': 1.3, 'eval_average_metrics': 76.10473808291644} \r\n 0%|▏ | 70/60000 [00:28<13:22:56, 1.24it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.59it/s]\r\n{'mrpc_en_eval_loss': 0.16057834029197693, 'mrpc_en_eval_f1': 88.43537414965986, 'mrpc_en_eval_accuracy': 83.33333333333334, 'mrpc_en_eval_gen_len': 2.2450980392156863, 'mrpc_en_eval_runtime': 1.3058, 'mrpc_en_eval_samples_per_second': 156.222, 'epoch': 1.52} \r\n{'mrpc_en_eval_loss': 0.16057834029197693, 'mrpc_en_eval_f1': 88.43537414965986, 'mrpc_en_eval_accuracy': 83.33333333333334, 'mrpc_en_eval_gen_len': 2.2450980392156863, 'mrpc_en_eval_runtime': 1.3058, 'mrpc_en_eval_samples_per_second': 156.222, 'epoch': 1.52, 'eval_average_metrics': 85.8843537414966} 
\r\n 0%|▏ | 80/60000 [00:48<12:55:04, 1.29it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.69it/s]\r\n{'mrpc_en_eval_loss': 0.15957750380039215, 'mrpc_en_eval_f1': 88.81118881118881, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.284313725490196, 'mrpc_en_eval_runtime': 1.291, 'mrpc_en_eval_samples_per_second': 158.021, 'epoch': 1.74} \r\n{'mrpc_en_eval_loss': 0.15957750380039215, 'mrpc_en_eval_f1': 88.81118881118881, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.284313725490196, 'mrpc_en_eval_runtime': 1.291, 'mrpc_en_eval_samples_per_second': 158.021, 'epoch': 1.74, 'eval_average_metrics': 86.56245715069244} \r\n 0%|▏ | 90/60000 [01:11<13:47:58, 1.21it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.67it/s]\r\n{'mrpc_en_eval_loss': 0.19618992507457733, 'mrpc_en_eval_f1': 87.17948717948718, 'mrpc_en_eval_accuracy': 82.84313725490196, 'mrpc_en_eval_gen_len': 2.3480392156862746, 'mrpc_en_eval_runtime': 1.2811, 'mrpc_en_eval_samples_per_second': 159.235, 'epoch': 1.96} \r\n{'mrpc_en_eval_loss': 0.19618992507457733, 'mrpc_en_eval_f1': 87.17948717948718, 'mrpc_en_eval_accuracy': 82.84313725490196, 'mrpc_en_eval_gen_len': 2.3480392156862746, 'mrpc_en_eval_runtime': 1.2811, 'mrpc_en_eval_samples_per_second': 159.235, 'epoch': 1.96, 'eval_average_metrics': 85.01131221719457} \r\n 0%|▏ | 100/60000 [01:33<12:55:11, 1.29it/s]***** Running Evaluation *****\r\n Num examples = 204\r\n Batch size = 80\r\n ### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.75it/s]\r\n{'mrpc_en_eval_loss': 0.21464459598064423, 'mrpc_en_eval_f1': 87.96992481203009, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.3823529411764706, 'mrpc_en_eval_runtime': 1.2654, 'mrpc_en_eval_samples_per_second': 161.214, 'epoch': 2.17} \r\n{'mrpc_en_eval_loss': 0.21464459598064423, 'mrpc_en_eval_f1': 87.96992481203009, 'mrpc_en_eval_accuracy': 84.31372549019608, 'mrpc_en_eval_gen_len': 2.3823529411764706, 'mrpc_en_eval_runtime': 1.2654, 'mrpc_en_eval_samples_per_second': 161.214, 'epoch': 2.17, 'eval_average_metrics': 86.14182515111308} \r\n 0%|▏ \r\n\r\n```\r\n",
"Dear @sgugger @patrickvonplaten @patil-suraj \r\nCould you kindly have a look into this issue, this is really important to have the checkpointing workings, as in many cases one cannot train the models for larger periods, thnaks ",
"Following up on @sgugger's suggestion, if I understand the methodology correctly it doesn't quite apply to the generic checkpointing method, but one could subclass the Trainer to save the RNG state at the moment of saving the checkpoint, and then restore the same RNG state on resume. You'd probably need to do that for at least python and pytorch (and numpy and other libraries if you use those).\r\n\r\n@dorooddorood606, look into:\r\n```\r\n# before saving\r\npy_rng_state = random.getstate()\r\npt_rng_state = torch.get_rng_state()\r\nnp_rng_state = numpy.random.get_state()\r\n\r\n# post resume\r\nrandom.setstate(py_rng_state)\r\ntorch.set_rng_state(pt_rng_state)\r\nnumpy.random.set_state(np_rng_state)\r\n```",
"Dear @stas00 \r\nThank you very much for following up on this, I implemented this suggestion, and I still see the discrepancies after resuming the checkpoints. I emphasize I tried with \"vanilla t5-base\" so no changes from huggingface codes. In my own codes, I have some initialization which is the only part with randomness, I would be grateful if you could tell me if there might be an issue with these lines:\r\n```\r\nnn.init.normal_(linear_layer.weight, std=std)\r\nnn.init.zeros_(linear_layer.bias)\r\n```\r\nbut still since vanillat t5-base also has this issue, I was wondering if you might think this might be relevant to the trainer code as a general issue? I greatly appreciate it if you could kindly consider this issue.\r\n\r\nthanks a lot in advance for the great work you do and your hard efforts.\r\n",
"> Thank you very much for following up on this, I implemented this suggestion, \r\n\r\nCould we first validate that this was done correctly?\r\n\r\nTo test you can debug print some random number generated **immediately after saving the checkpoint** and RNG state and doing the same right **after the checkpoint and RNG states were restored** when you run the program 2nd time with resume. If you get the same number generated then we know you restored the RNG state. You probably want to check one for torch and one for python.\r\n\r\n> I have some initialization which is the only part with randomness, I would be grateful if you could tell me if there might be an issue with these lines:\r\n> \r\n> ```\r\n> nn.init.normal_(linear_layer.weight, std=std)\r\n\r\nThis line would definitely impact the RNG state. If you're uncertain you can always debug and generate a random number with that line of code and w/o it and see if it's the same.\r\n \r\nSo for example one workaround you could do is to restore the RNG state after your custom code above.\r\n\r\nOr better don't re-run this line, but save the outcome with the checkpoint and then restore it on subsequent runs, rather the needing to fiddle with RNG states.\r\n",
"Dear @stas00 \r\nFirst, I would like to thank you very much for taking your precious time and answering to my question. \r\nI observe that between different runs my codes generate different results. I was assuming since HuggingFace run_glue.py codes set the seeds initially, then it is well taking care of randomness. All my code has is some initialization, like what I sent, coming all after the \"set_seed()\" function. Considering only one run, putting check-pointing aside, could you kindly tell me if one needs to set seeds before each initialization? shall I bring them all in init_weights function of BERT? I appreciate your response a lot.\r\nThank you. ",
"First a few requests, @dorooddorood606 \r\n- please don't re-post the same question on Issues and forums, once is plenty - honestly I'm lost at what we are trying to solve here.\r\n- we all appreciate your appreciations, you're clearly a very nice person, but it becomes overbearing when we get copious amounts of it in every post.\r\n- let's focus on the problem only so that the sound-to-noise ratio is manageable.\r\n\r\nThank you!\r\n\r\n------------\r\n\r\nNow, let's try to summarize what doesn't work.\r\n\r\n1. From what I understand you extended the library with your own modifications. And now you're experiencing inconsistent randomness issues when you resume the model, correct?\r\n\r\n Does the library produce the expected results if you remove your modifications?\r\n\r\n2. Is there an easy way to provide a reproducible example that shows how the main library works correctly and then it breaks when with your modification? Perhaps a simple google colab notebook? If you do that please make sure that it's very easy to quickly see what the problem is and where it comes from. So no production-level hundreds of lines of code, but toy examples if possible.\r\n\r\n",
"Dear @stas00 \r\nThank you for the remind, I will follow the points you mentioned. I was thinking there is also a bug in the trainer as I was also observing it for the Bert-base model unchanged, but the randomness issue resolved with upgrading to 4.6.0 version of transformers.\r\n\r\n\r\n",
"Dear @stas00 \r\n\r\nI appreciate your input on the issue of reproducibility from resuming from checkpoints a lot. I tried to follow your points to state it in a clearer way. \r\n\r\n### Problem statement \r\nIf a user train a model till some steps and then reload the model from a checkpoint, the results differs from the training the model without breaks. \r\n\r\n### How to reproduce the issue \r\nTransformer version: I am using 4.6.0dev version of transformer \r\n```\r\nhttps://github.com/huggingface/transformers/commit/04ab2ca639ee6fd1002ce0a498d892245f0c9093\r\n```\r\n\r\nPlease kindly clone this repository with a minimal example\r\n```\r\ngit clone [email protected]:dorooddorood606/reproducibility.git\r\n```\r\nTo run the codes, please kindly run this command, between the runs in every 50 steps after save of the model, kill the model for like 2-3 times. Please then compare the final results of running for the full iterations with resuming, with ```raining without any breaks```. The results would differ. \r\n```\r\nTASK_NAME=mrpc\r\npython run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 2 --output_dir /temp/$TASK_NAME/ --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_predict\r\n```\r\n\r\nPlease let me know if you need any further information on this. \r\n\r\n### Which modifications done on Trainer class to make it reproducible:\r\nI apply the following modifications to the trainer class:\r\n1) Following your suggestions. I save the random states and I reload them before reloading the checkpoint in the trainer class. Please see https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/trainer.py#L126 \r\nand https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/trainer.py#L200\r\n\r\n2) In each saving of checkpoints, I also save a copy of checkpoint in the output_dir, this is because I personally believe we need to also keep the last checkpoint to resume from in addition to keeping only checkpoint of the best model so far, to be able to continue training from the last state. Please see https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/trainer.py#L87\r\n\r\n3) I get the last checkpoint in run_glue.py based on the checkpoint saved in the main output_dir, please see https://github.com/dorooddorood606/reproducibility/blob/f5902af4669bba8aaee326efdb0cd459e25be675/run_glue.py#L46\r\n\r\n### Larger impact of this issue\r\nTo me this issue with resuming from checkpoint, can also help other users and would be beneficial to all users who need to use this option. I appreciate a lot if you could sparse me some time from your precious time and help on this issue.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Thank you for your detailed followup, @dorooddorood606. And sharing what experiments you have tried. \r\n\r\nI agree that it'd be awesome to be able to resume as if there was no stopping.\r\n\r\nPlease give us some time, we are going discuss whether it is feasible to make it happen as there are so many moving parts to consider and if so will build this ground up.\r\n\r\nWe will keep you posted.",
"Dear @stas00 \r\nthank you. Sure, meanwhile if you might have some ideas and suggestions for me to try, I greatly appreciate your help. I searched for this issue a lot, and apart from the things HuggingFace repo has already implemented I could not find more tricks to do to solve the issue. \r\nThanks a lot in advance for your time and assistance. ",
"@sgugger is working on it in https://github.com/huggingface/transformers/pull/11582",
"Hi \r\nI cannot really express how much I appreciate this. Thank you very much both for working on this. This would be wonderful to have resuming fixed in trainer. Thanks for your efforts. ",
"I totally agree!\r\n\r\nAll kudos go to @sgugger , who has a much better understanding of the nooks and crannies of the HF Trainer.\r\n\r\n",
"Dear @sgugger \r\n\r\nThanks for the hard work. I tested it but the issue is not resolved, specially for small datasets it can make large changes in final results, I appreciate if you could share with me some suggestions on how to resolve the issue:\r\n\r\nThe original one:\r\n```\r\ncheckpoint: 200\r\n{'eval_loss': 0.44332757592201233, 'eval_accuracy': 0.7941176470588235, 'eval_f1': 0.8521126760563381, 'eval_combined_score': 0.8231151615575808, 'eval_runtime': 1.5259, 'eval_samples_per_second': 133.692, 'eval_average_metrics': 0.8231151615575808, 'epoch': 1.74}\r\n```\r\n\r\nThe resumed one:\r\n\r\n```\r\ncheckpoint: 200\r\n{'eval_loss': 0.4352119266986847, 'eval_accuracy': 0.7941176470588235, 'eval_f1': 0.85, 'eval_combined_score': 0.8220588235294117, 'eval_runtime': 1.4451, 'eval_samples_per_second': 141.165, 'eval_average_metrics': 0.8220588235294117, 'epoch': 1.74} \r\n```\r\nThe differences accumulate a lot over time \r\n\r\n# To reproduce please run:\r\n```\r\nTASK_NAME=mrpc \r\npython run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /temp/results --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_test --save_total_limit 1 \r\n``` \r\n\r\nHere are the final results without drop:\r\n```\r\n[INFO|trainer_pt_utils.py:907] 2021-05-09 17:35:14,973 >> ***** eval metrics *****\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,973 >> epoch = 3.0\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,973 >> eval_accuracy = 0.701\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_average_metrics = 0.7605196946035051\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_combined_score = 0.7605\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_f1 = 0.8201\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_loss = 0.604\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_cpu_alloc_delta = 2MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_cpu_peaked_delta = 2MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_gpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_mem_gpu_peaked_delta = 33MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_runtime = 0:00:01.95\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_samples = 204\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:14,974 >> eval_samples_per_second = 104.502\r\n05/09/2021 17:35:14 - INFO - __main__ - *** Test ***\r\n[INFO|trainer.py:515] 2021-05-09 17:35:15,036 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence1, sentence2.\r\n[INFO|trainer.py:2089] 2021-05-09 17:35:15,040 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2091] 2021-05-09 17:35:15,041 >> Num examples = 204\r\n[INFO|trainer.py:2094] 2021-05-09 17:35:15,041 >> Batch size = 8\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26/26 [00:01<00:00, 13.77it/s]\r\n[INFO|trainer_pt_utils.py:907] 2021-05-09 17:35:17,070 >> ***** test metrics *****\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> epoch = 3.0\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_accuracy 
= 0.6863\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_average_metrics = 0.7490196078431373\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_combined_score = 0.749\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_f1 = 0.8118\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_loss = 0.6198\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_cpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_cpu_peaked_delta = 2MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_gpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_mem_gpu_peaked_delta = 33MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_runtime = 0:00:01.95\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> eval_samples_per_second = 104.281\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:35:17,070 >> test_samples = 204\r\n\r\n```\r\n\r\nwith breaking in between:\r\n```\r\n[INFO|trainer_pt_utils.py:907] 2021-05-09 17:41:22,953 >> ***** eval metrics *****\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> epoch = 3.0\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_accuracy = 0.6863\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_average_metrics = 0.7467517127332861\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_combined_score = 0.7468\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_f1 = 0.8072\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_loss = 0.6106\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_cpu_alloc_delta = 2MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_cpu_peaked_delta = 1MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_gpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_mem_gpu_peaked_delta = 33MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_runtime = 0:00:01.82\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,953 >> eval_samples = 204\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:22,954 >> eval_samples_per_second = 111.603\r\n05/09/2021 17:41:22 - INFO - __main__ - *** Test ***\r\n[INFO|trainer.py:515] 2021-05-09 17:41:23,014 >> The following columns in the evaluation set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: idx, sentence2, sentence1.\r\n[INFO|trainer.py:2089] 2021-05-09 17:41:23,018 >> ***** Running Evaluation *****\r\n[INFO|trainer.py:2091] 2021-05-09 17:41:23,019 >> Num examples = 204\r\n[INFO|trainer.py:2094] 2021-05-09 17:41:23,019 >> Batch size = 8\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26/26 [00:01<00:00, 14.71it/s]\r\n[INFO|trainer_pt_utils.py:907] 2021-05-09 17:41:24,916 >> ***** test metrics *****\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> epoch = 3.0\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_accuracy = 0.701\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_average_metrics = 0.7572180248246088\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_combined_score = 0.7572\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_f1 = 0.8135\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,916 >> eval_loss = 
0.6068\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_cpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_cpu_peaked_delta = 1MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_gpu_alloc_delta = 0MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_mem_gpu_peaked_delta = 33MB\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_runtime = 0:00:01.83\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> eval_samples_per_second = 111.455\r\n[INFO|trainer_pt_utils.py:912] 2021-05-09 17:41:24,917 >> test_samples = 204\r\n```\r\n\r\nThis is that different that still does not allow using checkpointing, I only have access to gpus which are interruptable and really appreciate your help \r\n\r\nI also have added `CUBLAS_WORKSPACE_CONFIG=:16:8` as described in `https://discuss.pytorch.org/t/random-seed-with-external-gpu/102260/3` to make torch deterministic, still does not work, ",
"Are you sure you are running on a source install of Transformers? The command produces the exact same results on my end.",
"Dear Sylvain,\r\nThanks for the response. Yes, I install transformers as pip install git+https://github.com/huggingface/transformers.git\r\n\r\nbut the results differs a lot. Please kindly run this command and break it after first checkpoint (iterations = 50)\r\n```\r\nTASK_NAME=mrpc\r\npython run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/ --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_test \r\n``` \r\n",
"This might be due to the FP16 parameter. Could you check if you get the same result without FP16?\r\nThe reason is due to the fact we don't save the state of the gradient scaler in mixed precision training, which is another thing to restore to its state. Can make a PR to fix that tomorrow.",
"Dear Sylvain\r\n\r\nThank you for taking your precious time and answering this issue. you are absolutely right. I checked it without fp16 and I confirm this works fine without fp16, it would be wonderful to have the fp16 mode also working when you have time.\r\n\r\nThank you for your hard work and great job you do :) ",
"Problem was fixed on my side with the PR above. Let me know if this is not the case for you.",
"Dear @sgugger \r\n\r\nThank you for the PR, I checked it with the last version of transformers now, and the issue still exists, please kindly run this command and break this after first 50 steps:\r\n```\r\nTASK_NAME=mrpc\r\npython run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --eval_steps 50 --evaluation_strategy steps --load_best_model_at_end --fp16 --do_test \r\n```\r\nHere are the results:\r\nIf you do not break:\r\n```\r\nAfter 50 steps:\r\n\r\n{'eval_loss': 0.6383711695671082, 'eval_accuracy': 0.6764705882352942, 'eval_f1': 0.8070175438596491, 'eval_combined_score': 0.7417440660474717, 'eval_runtime': 2.1914, 'eval_samples_per_second': 93.091, 'eval_average_metrics': 0.7417440660474717, 'epoch': 0.43}\r\n\r\nAfter 100 steps:\r\n{'eval_loss': 0.6184656023979187, 'eval_accuracy': 0.6862745098039216, 'eval_f1': 0.813953488372093, 'eval_combined_score': 0.7501139990880072, 'eval_runtime': 2.1089, 'eval_samples_per_second': 96.731, 'eval_average_metrics': 0.7501139990880072, 'epoch': 0.87}\r\n ```\r\n\r\nif you break after 50 steps:\r\n\r\n```\r\nAfter 100 steps\r\n{'eval_loss': 0.6308265328407288, 'eval_accuracy': 0.6862745098039216, 'eval_f1': 0.813953488372093, 'eval_combined_score': 0.7501139990880072, 'eval_runtime': 2.1549, 'eval_samples_per_second': 94.668, 'eval_average_metrics': 0.7501139990880072, 'epoch': 0.87} \r\n```\r\n\r\nThe differences accumulates and the results at the end varies a lot that resumed results are not usable.\r\nI really appreciate if you could kindly have another look. Could you kindly reopen this issue as well?\r\n\r\nthanks. \r\n",
"I sadly cannot reproduce (get the exact same results with the command you indicated using a source install on current master) so this comes from something in your particular setup at this stage."
] | 1,618 | 1,620 | 1,620 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.5
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
@sgugger @patrickvonplaten, @patil-suraj
## Information
- I am training a T5 model and resuming training from a checkpoint
- I have fixed the issue here https://github.com/huggingface/transformers/issues/11294 by freezing the parameters back right after the model is loaded from the checkpoint
- I am using "evaluation_strategy": "steps", evaluating the model every 10 steps, with "save_total_limit": 1
- I modified the `_save_checkpoint` method as below to save the last copy of the model in `output_dir`, since one needs to resume from the point the model last reached in training, not from the checkpoint with the best evaluation:
```
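# NOTE: overridden method inside a Trainer subclass; the original snippet assumes module-level
# imports of os, shutil, warnings, torch, pathlib.Path and a reissue_pt_warnings helper.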
def _save_checkpoint(self, model, trial, metrics=None):
super()._save_checkpoint(model, trial, metrics)
# Saves the models checkpoints in the main folder.
if self.is_world_process_zero():
# remove the older global_steps.
global_steps = [str(x) for x in Path(self.args.output_dir).glob("global_step*")]
for global_step in global_steps:
shutil.rmtree(global_step)
self.save_model(self.args.output_dir)
if self.deepspeed:
self.deepspeed.save_checkpoint(self.args.output_dir)
else:
# deepspeed.save_checkpoint above saves model/optim/sched
torch.save(self.optimizer.state_dict(), os.path.join(self.args.output_dir, "optimizer.pt"))
with warnings.catch_warnings(record=True) as caught_warnings:
torch.save(self.lr_scheduler.state_dict(), os.path.join(self.args.output_dir, "scheduler.pt"))
reissue_pt_warnings(caught_warnings)
self.state.save_to_json(os.path.join(self.args.output_dir, "trainer_state.json"))
```
Then I find the last checkpoint to resume from, using the copy saved in the output directory, as below:
```
def get_last_checkpoint(output_dir):
if os.path.exists(os.path.join(output_dir, 'pytorch_model.bin')):
return output_dir
return None
```
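For context, a minimal sketch of how this helper is then used to resume (the `training_args` / `trainer` names are illustrative, not taken from the actual script):

```python
# resume from the last saved copy in output_dir, if one exists
last_checkpoint = get_last_checkpoint(training_args.output_dir)
train_result = trainer.train(resume_from_checkpoint=last_checkpoint)
```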
Here is the results without resume for 10 times evaluation:
```
{'loss': 5.0483, 'learning_rate': 6e-07, 'epoch': 0.02}
0%| | 10/60000 [00:07<11:11:04, 1.49it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 5.382528305053711, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.741, 'epoch': 0.22}
{'mrpc_en_eval_loss': 5.382528305053711, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.741, 'epoch': 0.22, 'eval_average_metrics': 0.0}
0%| | 20/60000 [00:20<11:57:29, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.56it/s]
{'mrpc_en_eval_loss': 5.180729389190674, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8179, 'mrpc_en_eval_samples_per_second': 112.218, 'epoch': 0.43}
{'mrpc_en_eval_loss': 5.180729389190674, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8179, 'mrpc_en_eval_samples_per_second': 112.218, 'epoch': 0.43, 'eval_average_metrics': 0.0}
0%| | 30/60000 [00:33<12:01:13, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.52it/s]
{'mrpc_en_eval_loss': 4.810805320739746, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.743, 'epoch': 0.65}
{'mrpc_en_eval_loss': 4.810805320739746, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0, 'mrpc_en_eval_runtime': 1.8421, 'mrpc_en_eval_samples_per_second': 110.743, 'epoch': 0.65, 'eval_average_metrics': 0.0}
0%| | 40/60000 [00:45<11:17:50, 1.47it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 4.203256607055664, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.031, 'mrpc_en_eval_samples_per_second': 100.441, 'epoch': 0.87}
{'mrpc_en_eval_loss': 4.203256607055664, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.031, 'mrpc_en_eval_samples_per_second': 100.441, 'epoch': 0.87, 'eval_average_metrics': 0.0}
0%| | 50/60000 [00:58<11:42:57, 1.42it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.39it/s]
{'mrpc_en_eval_loss': 3.262455463409424, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.1069, 'mrpc_en_eval_samples_per_second': 96.825, 'epoch': 1.09}
{'mrpc_en_eval_loss': 3.262455463409424, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.1069, 'mrpc_en_eval_samples_per_second': 96.825, 'epoch': 1.09, 'eval_average_metrics': 0.0}
0%|▏ | 60/60000 [01:13<11:57:15, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 1.78it/s]
{'mrpc_en_eval_loss': 1.9655567407608032, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.053921568627451, 'mrpc_en_eval_runtime': 2.8657, 'mrpc_en_eval_samples_per_second': 71.186, 'epoch': 1.3}
{'mrpc_en_eval_loss': 1.9655567407608032, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.053921568627451, 'mrpc_en_eval_runtime': 2.8657, 'mrpc_en_eval_samples_per_second': 71.186, 'epoch': 1.3, 'eval_average_metrics': 0.24509803921568626}
0%|▏ | 70/60000 [01:27<12:14:11, 1.36it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.08it/s]
{'mrpc_en_eval_loss': 0.7519775032997131, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.9411764705882355, 'mrpc_en_eval_runtime': 2.6193, 'mrpc_en_eval_samples_per_second': 77.884, 'epoch': 1.52}
{'mrpc_en_eval_loss': 0.7519775032997131, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.9411764705882355, 'mrpc_en_eval_runtime': 2.6193, 'mrpc_en_eval_samples_per_second': 77.884, 'epoch': 1.52, 'eval_average_metrics': 26.60441477204379}
0%|▏ | 80/60000 [01:41<12:02:22, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.60it/s]
{'mrpc_en_eval_loss': 0.4142318665981293, 'mrpc_en_eval_f1': 75.62500000000001, 'mrpc_en_eval_accuracy': 61.76470588235294, 'mrpc_en_eval_gen_len': 2.1176470588235294, 'mrpc_en_eval_runtime': 1.7878, 'mrpc_en_eval_samples_per_second': 114.109, 'epoch': 1.74}
{'mrpc_en_eval_loss': 0.4142318665981293, 'mrpc_en_eval_f1': 75.62500000000001, 'mrpc_en_eval_accuracy': 61.76470588235294, 'mrpc_en_eval_gen_len': 2.1176470588235294, 'mrpc_en_eval_runtime': 1.7878, 'mrpc_en_eval_samples_per_second': 114.109, 'epoch': 1.74, 'eval_average_metrics': 68.69485294117648}
0%|▏ | 90/60000 [01:54<11:41:23, 1.42it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 0.3786551058292389, 'mrpc_en_eval_f1': 51.18483412322274, 'mrpc_en_eval_accuracy': 49.50980392156863, 'mrpc_en_eval_gen_len': 2.6519607843137254, 'mrpc_en_eval_runtime': 1.8265, 'mrpc_en_eval_samples_per_second': 111.69, 'epoch': 1.96}
{'mrpc_en_eval_loss': 0.3786551058292389, 'mrpc_en_eval_f1': 51.18483412322274, 'mrpc_en_eval_accuracy': 49.50980392156863, 'mrpc_en_eval_gen_len': 2.6519607843137254, 'mrpc_en_eval_runtime': 1.8265, 'mrpc_en_eval_samples_per_second': 111.69, 'epoch': 1.96, 'eval_average_metrics': 50.34731902239569}
0%|▏ | 100/60000 [02:07<12:01:27, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.58it/s]
{'mrpc_en_eval_loss': 0.29472649097442627, 'mrpc_en_eval_f1': 71.01449275362319, 'mrpc_en_eval_accuracy': 60.78431372549019, 'mrpc_en_eval_gen_len': 2.3333333333333335, 'mrpc_en_eval_runtime': 1.812, 'mrpc_en_eval_samples_per_second': 112.581, 'epoch': 2.17}
{'mrpc_en_eval_loss': 0.29472649097442627, 'mrpc_en_eval_f1': 71.01449275362319, 'mrpc_en_eval_accuracy': 60.78431372549019, 'mrpc_en_eval_gen_len': 2.3333333333333335, 'mrpc_en_eval_runtime': 1.812, 'mrpc_en_eval_samples_per_second': 112.581, 'epoch': 2.17, 'eval_average_metrics': 65.89940323955669}
```
Now let's resume from step 40. While the first 40 steps give the same results, after resuming the results differ a lot:
```
0%| | 40/60000 [00:07<9:49:41, 1.69it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.62it/s]
{'mrpc_en_eval_loss': 4.203643321990967, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.0033, 'mrpc_en_eval_samples_per_second': 101.834, 'epoch': 0.87}
{'mrpc_en_eval_loss': 4.203643321990967, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.0033, 'mrpc_en_eval_samples_per_second': 101.834, 'epoch': 0.87, 'eval_average_metrics': 0.0}
0%| | 50/60000 [00:21<12:09:50, 1.37it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.30it/s]
{'mrpc_en_eval_loss': 3.2706634998321533, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.2048, 'mrpc_en_eval_samples_per_second': 92.524, 'epoch': 1.09}
{'mrpc_en_eval_loss': 3.2706634998321533, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.0, 'mrpc_en_eval_gen_len': 3.0098039215686274, 'mrpc_en_eval_runtime': 2.2048, 'mrpc_en_eval_samples_per_second': 92.524, 'epoch': 1.09, 'eval_average_metrics': 0.0}
0%|▏ | 60/60000 [00:35<12:27:28, 1.34it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.54it/s]
{'mrpc_en_eval_loss': 1.9863247871398926, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.019607843137255, 'mrpc_en_eval_runtime': 2.4126, 'mrpc_en_eval_samples_per_second': 84.557, 'epoch': 1.3}
{'mrpc_en_eval_loss': 1.9863247871398926, 'mrpc_en_eval_f1': 0.0, 'mrpc_en_eval_accuracy': 0.49019607843137253, 'mrpc_en_eval_gen_len': 3.019607843137255, 'mrpc_en_eval_runtime': 2.4126, 'mrpc_en_eval_samples_per_second': 84.557, 'epoch': 1.3, 'eval_average_metrics': 0.24509803921568626}
0%|▏ | 70/60000 [00:49<12:02:36, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.07it/s]
{'mrpc_en_eval_loss': 0.7721647620201111, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.946078431372549, 'mrpc_en_eval_runtime': 2.5655, 'mrpc_en_eval_samples_per_second': 79.518, 'epoch': 1.52}
{'mrpc_en_eval_loss': 0.7721647620201111, 'mrpc_en_eval_f1': 18.404907975460123, 'mrpc_en_eval_accuracy': 34.80392156862745, 'mrpc_en_eval_gen_len': 2.946078431372549, 'mrpc_en_eval_runtime': 2.5655, 'mrpc_en_eval_samples_per_second': 79.518, 'epoch': 1.52, 'eval_average_metrics': 26.60441477204379}
0%|▏ | 80/60000 [01:02<12:08:06, 1.37it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.55it/s]
{'mrpc_en_eval_loss': 0.42692506313323975, 'mrpc_en_eval_f1': 74.28571428571428, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.142156862745098, 'mrpc_en_eval_runtime': 1.8243, 'mrpc_en_eval_samples_per_second': 111.824, 'epoch': 1.74}
{'mrpc_en_eval_loss': 0.42692506313323975, 'mrpc_en_eval_f1': 74.28571428571428, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.142156862745098, 'mrpc_en_eval_runtime': 1.8243, 'mrpc_en_eval_samples_per_second': 111.824, 'epoch': 1.74, 'eval_average_metrics': 67.28991596638654}
0%|▏ | 90/60000 [01:16<12:00:53, 1.39it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.50it/s]
{'mrpc_en_eval_loss': 0.39015302062034607, 'mrpc_en_eval_f1': 45.685279187817265, 'mrpc_en_eval_accuracy': 47.549019607843135, 'mrpc_en_eval_gen_len': 2.7205882352941178, 'mrpc_en_eval_runtime': 1.856, 'mrpc_en_eval_samples_per_second': 109.915, 'epoch': 1.96}
{'mrpc_en_eval_loss': 0.39015302062034607, 'mrpc_en_eval_f1': 45.685279187817265, 'mrpc_en_eval_accuracy': 47.549019607843135, 'mrpc_en_eval_gen_len': 2.7205882352941178, 'mrpc_en_eval_runtime': 1.856, 'mrpc_en_eval_samples_per_second': 109.915, 'epoch': 1.96, 'eval_average_metrics': 46.617149397830204}
0%|▏ | 100/60000 [01:31<12:02:17, 1.38it/s]***** Running Evaluation *****
Num examples = 204
Batch size = 80
### n_samples 204███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:01<00:00, 2.55it/s]
{'mrpc_en_eval_loss': 0.30966323614120483, 'mrpc_en_eval_f1': 68.48249027237354, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.426470588235294, 'mrpc_en_eval_runtime': 1.8275, 'mrpc_en_eval_samples_per_second': 111.625, 'epoch': 2.17}
{'mrpc_en_eval_loss': 0.30966323614120483, 'mrpc_en_eval_f1': 68.48249027237354, 'mrpc_en_eval_accuracy': 60.29411764705882, 'mrpc_en_eval_gen_len': 2.426470588235294, 'mrpc_en_eval_runtime': 1.8275, 'mrpc_en_eval_samples_per_second': 111.625, 'epoch': 2.17, 'eval_average_metrics': 64.38830395971618}
```
## Expected behavior
Resuming from a checkpoint should produce the same results as an uninterrupted run.
Thank you for your help @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11323/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11322 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11322/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11322/comments | https://api.github.com/repos/huggingface/transformers/issues/11322/events | https://github.com/huggingface/transformers/pull/11322 | 861,598,638 | MDExOlB1bGxSZXF1ZXN0NjE4MjQxOTQ5 | 11,322 | [Trainer] fix the placement on device with fp16_full_eval | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We probably need to re-think the \"placement on device\" logic. And to do it explicitly in each stage and in `__init__` only in those special cases where it's absolutely required."
] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | * `do_train` isn't a reliable arg - it is not required to run `train()`, so for now add a workaround for the `fp16_full_eval` case to place the model on device if `train()` was called w/o `do_train=True` being passed to the Trainer args.
* while at it, fix the `deepspeed` case: now that it's used in eval too, it should never be put on device (see the sketch below)
* Also alias `args = self.args` to make the code shorter and easier to read, since it is referenced a lot in `train`.
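A rough sketch of the placement idea (not the actual diff; only standard `TrainingArguments` fields are used):

```python
# inside Trainer.train(): the model was deliberately not moved to the device in __init__
# when fp16_full_eval was set without do_train, so place it now that training starts.
args = self.args
if args.fp16_full_eval and not args.do_train:
    self.model = self.model.to(args.device)
```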
Fixes: https://github.com/huggingface/transformers/issues/11200#issuecomment-822631511
@sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11322/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11322",
"html_url": "https://github.com/huggingface/transformers/pull/11322",
"diff_url": "https://github.com/huggingface/transformers/pull/11322.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11322.patch",
"merged_at": 1618858534000
} |
https://api.github.com/repos/huggingface/transformers/issues/11321 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11321/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11321/comments | https://api.github.com/repos/huggingface/transformers/issues/11321/events | https://github.com/huggingface/transformers/issues/11321 | 861,535,259 | MDU6SXNzdWU4NjE1MzUyNTk= | 11,321 | EncoderDecoderModel's decoder gets unexpected use_cache argument | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @biggoron \r\n\r\n`BertForMaskedLM` can't be used as a decoder, it's intended for masked LM. The `BertLMHeadModel` model should be used if you wan to use bert as a decoder. Also the right way to initialize the bert as decoder is as follows\r\n\r\n```python\r\ndec_config = BertConfig.from_pretrained(\"bert-base-uncased\")\r\ndec_config.add_cross_attentions = True # add cross attention if you want to use it in EncoderDecoderModel\r\ndec_config.is_decoder = True\r\ndec = BertLMHeadModel.from_pretrained(\"bert-base-uncased\", config=dec_config)\r\n```\r\n\r\nor simply use the `from_encoder_decoder_pretrained` method which takes care of this.",
"Thanks a lot for your help, it is much clearer!"
] | 1,618 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.15.0-123-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @patil-suraj I think you might have an idea on what went wrong.
## Information
EncoderDecoderModel internally calls the decoder with a `use_cache` argument, which is not defined in the `forward` of a `BertForMaskedLM` decoder.
## To reproduce
I have bigger configs but the bug is still here with minimalistic configs:
```python
import torch
from transformers import BertConfig, BertModel, BertForMaskedLM, EncoderDecoderModel

encoder_config = BertConfig()
encoder = BertModel(config=encoder_config)
decoder_config = BertConfig(is_decoder=True)
decoder = BertForMaskedLM(config=decoder_config)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
input_ids = torch.ones(5, dtype=torch.long).unsqueeze(0)
model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)
```
Outputs error from `/opt/conda/lib/python3.8/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py` at line 416:
```
forward() got an unexpected keyword argument 'use_cache'
```
## Expected behavior
- Default Bert encoder and decoder can be stacked in an EncoderDecoderModel
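For reference, a minimal sketch of a stack that works (the helper sets `is_decoder` and cross-attention on the decoder for you, as noted in the comments above):

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```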
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11321/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11320 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11320/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11320/comments | https://api.github.com/repos/huggingface/transformers/issues/11320/events | https://github.com/huggingface/transformers/issues/11320 | 861,493,251 | MDU6SXNzdWU4NjE0OTMyNTE= | 11,320 | Irregular VRAM usage with gpt-neo inference with sequences longer than 250 tokens | {
"login": "finetunej",
"id": 82650881,
"node_id": "MDQ6VXNlcjgyNjUwODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/82650881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finetunej",
"html_url": "https://github.com/finetunej",
"followers_url": "https://api.github.com/users/finetunej/followers",
"following_url": "https://api.github.com/users/finetunej/following{/other_user}",
"gists_url": "https://api.github.com/users/finetunej/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finetunej/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finetunej/subscriptions",
"organizations_url": "https://api.github.com/users/finetunej/orgs",
"repos_url": "https://api.github.com/users/finetunej/repos",
"events_url": "https://api.github.com/users/finetunej/events{/privacy}",
"received_events_url": "https://api.github.com/users/finetunej/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"After asking about this on the EleutherAI discord, it was pointed out to me that 256 tokens corresponds to the local attention span of the model. Looking at the plot above, the first allocation peaks appear after about 256 tokens. After about 512 tokens, another shorter set of spikes occur, with more and shorter spikes being added every 256. This could indicate that there is an issue related to the implementation of local attention.",
"One more comment with additional information. During another run of the of the test script, I added some [logging](https://gist.github.com/finetuneanon/5b8b5cdaf4c27836ebbbf1ed0d238c5b) to modeling_gpt_neo.py in an exception handler. For another run where an OOM crash occured with sequence length 1871, before converting query and key to float32, 5270MB are used. Afterwards, 9984MB are in use. query.shape is [1, 1871, 20, 257, 128] and key.shape is [1, 1871, 20, 1, 128]. The transpose makes no additional allocation, but the matmul attempts to allocate another 4.6GB. The main culprit seems to be the dimension of size 257, which greatly increases the size of the tensor.",
"Hi @finetuneanon \r\n\r\nThanks for the detailed issue!\r\n\r\nSo what is happening here is, the way local attention is designed is a bit weird (not the implementation), in that it splits the `seq_length` dim into `(num_blocks, block_length)` but here `block_length` is actually dynamic.\r\n\r\nIt's equal to `window_size` by default which is 256. But when the `seq_length` is not evenly divisible by `block_length` then it's adjusted as follows\r\n\r\n```python\r\ndef _get_block_length_and_num_blocks(seq_length, window_size):\r\n \"\"\"\r\n Computes ``block_length`` and ``num_blocks`` such that ``seq_length`` becomes evenly divisible by\r\n ``block_length``.\r\n \"\"\"\r\n block_length = window_size\r\n while seq_length % block_length != 0:\r\n block_length -= 1\r\n num_blocks = seq_length // block_length\r\n return block_length, num_blocks\r\n\r\n```\r\nsuch that, the `seq_length` becomes evenly divisible by `block_length`. \r\n\r\nSo the shape of `query` becomes `(batch, num_blocks, block_length, hidden_dim)`\r\nand then the `keys` and `values` are padded and the `seq_length` dim is split such that their shape becomes \r\n`(batch, num_blocks, window_size + block_length, hidden_dim`).\r\n\r\nHere's a simple function to get the shape of `query` and `key` for given `seq_length`\r\n```python\r\ndef get_query_key_shape(seq_len, window_size, hidden_dim):\r\n block_length, num_blocks = _get_block_length_and_num_blocks(seq_len, window_size)\r\n query_shape = (1, num_blocks, block_length, hidden_dim)\r\n key_shape = (1, num_blocks, window_size + block_length, hidden_dim)\r\n return query_shape, key_shape\r\n```\r\n\r\nLet's print the shapes for few lengths\r\n```python\r\nwindow_size = 256\r\nhidden_dim = 2560\r\nfor seq_len in range(256, 266):\r\n query_shape, key_shape = get_query_key_shape(seq_len, window_size, hidden_dim)\r\n print(f\"seq_len: {seq_len}, query_shape: {query_shape}, key_shape: {key_shape}\"\r\n```\r\nwhich gives \r\n\r\n```\r\nseq_len: 256, query_shape: (1, 1, 256, 2560), key_shape: (1, 1, 512, 2560)\r\nseq_len: 257, query_shape: (1, 257, 1, 2560), key_shape: (1, 257, 257, 2560)\r\nseq_len: 258, query_shape: (1, 2, 129, 2560), key_shape: (1, 2, 385, 2560)\r\nseq_len: 259, query_shape: (1, 7, 37, 2560), key_shape: (1, 7, 293, 2560)\r\nseq_len: 260, query_shape: (1, 2, 130, 2560), key_shape: (1, 2, 386, 2560)\r\nseq_len: 261, query_shape: (1, 3, 87, 2560), key_shape: (1, 3, 343, 2560)\r\nseq_len: 262, query_shape: (1, 2, 131, 2560), key_shape: (1, 2, 387, 2560)\r\nseq_len: 263, query_shape: (1, 263, 1, 2560), key_shape: (1, 263, 257, 2560)\r\nseq_len: 264, query_shape: (1, 2, 132, 2560), key_shape: (1, 2, 388, 2560)\r\nseq_len: 265, query_shape: (1, 5, 53, 2560), key_shape: (1, 5, 309, 2560))\r\n```\r\n\r\nas you can see, because of the dynamic `block_length` the dimensions are very different for different `seq_length` which explains the irregular VRAM usage.\r\n\r\nif you set the seq_length to 1871 you'll get \r\n```\r\nseq_len: 1871, query_shape: (1, 1871, 1, 2560), key_shape: (1, 1871, 257, 2560)\r\n```\r\nas you posted above.\r\n\r\nSo I wouldn't say this is an implementation issue, that's how the local attention algorithm is designed in mesh-tf.",
"Thank you for taking the time and walking me through the calculations. It makes sense and certainly explains the irregular pattern. However, I wonder if it is possible to reach the same end result in a way that is less memory intensive. A bit earlier I was looking for more information about local self-attention and I found this [implementation](https://github.com/lucidrains/local-attention/blob/master/local_attention/local_attention.py). Running it for a [1, 1871, 2560] tensor results in a peak allocation of just about 253MB:\r\n\r\n```\r\n>>> import torch\r\n>>> q = torch.rand(1,1871,128*20).to(torch.float32).cuda()\r\n>>> k = torch.rand(1,1871,128*20).to(torch.float32).cuda()\r\n>>> v = torch.rand(1,1871,128*20).to(torch.float32).cuda()\r\n>>> from local_attention import LocalAttention\r\n>>> local_attention = LocalAttention(256, causal=True, look_forward=0, autopad=True, dim=2560).cuda()\r\n>>> torch.cuda.memory_allocated(), torch.cuda.max_memory_allocated()\r\n(57482240, 57482240)\r\n>>> result = local_attention(q, k, v)\r\n>>> torch.cuda.memory_allocated(), torch.cuda.max_memory_allocated()\r\n(78453760, 252826624)\r\n```\r\n\r\nSimply running this implementation and GPTNeoLocalSelfAttention on the same input does seem to give different results however, so I think there may also be some difference between the algorithms.\r\n\r\nEdit: Experimenting with it a bit, I think my best bet is to just limit the sequence length to 1750. The padding approach of that approach is very different.",
"I have thought more about it and think I have found a solution to reduce memory use.\r\n\r\n```\r\n--- src/transformers/models/gpt_neo/modeling_gpt_neo.py.backup\t2021-04-07 22:28:43.049493417 +0200\r\n+++ src/transformers/models/gpt_neo/modeling_gpt_neo.py\t2021-04-22 10:53:41.274276535 +0200\r\n@@ -413,4 +413,18 @@\r\n batch_size, seq_length = hidden_states.shape[:2]\r\n full_seq_length = seq_length + past_length\r\n+\r\n+ padding = None\r\n+ if layer_past is None and full_seq_length % self.window_size != 0 and full_seq_length > self.window_size:\r\n+ padding = self.window_size-(full_seq_length%self.window_size)\r\n+ if attention_mask is None:\r\n+ attention_mask = torch.zeros(query.shape[0], query.shape[1] + padding).to(query.device)\r\n+ attention_mask[:, padding:] = 1\r\n+ else:\r\n+ attention_mask = torch.cat([torch.zeros(attention_mask.shape[0], padding).to(attention_mask.device), attention_mask], axis=1)\r\n+ pad = lambda x: torch.cat([torch.zeros(x.shape[0],padding,x.shape[2]).to(x.device), x], axis=1)\r\n+ query, key, value = map(pad, (query, key, value))\r\n+ seq_length += padding\r\n+ full_seq_length += padding\r\n+\r\n block_length, num_blocks = self._get_block_length_and_num_blocks(full_seq_length, self.window_size)\r\n \r\n@@ -454,5 +468,9 @@\r\n attn_output = attn_output.reshape(batch_size, seq_length, self.embed_dim)\r\n \r\n- attn_output = self.out_proj(attn_output)\r\n+ if padding is not None:\r\n+ attn_output = attn_output[:,padding:]\r\n+ attn_weights = attn_weights[:,padding:]\r\n+\r\n+ attn_output = self.out_proj(attn_output.to(hidden_states.dtype))\r\n attn_output = self.resid_dropout(attn_output)\r\n\r\n```\r\n\r\nBy padding q, k and v and adding a mask to mask out the padding, the it becomes unnecessary to split things in a way that leads to a very large dimension. From how I see it, this should not change the result of the _attn function due to masking. For some reason I needed to add an extra .to at the end for running the model in fp16. First results of doing inference with this change look okay, but I am still testing it more. It's not updated with cfd2eaa8cf82da8581825c6592b66d2789c5bc53 yet.\r\n\r\nThe purple line here is a run with the patch applied:\r\n\r\n\r\n",
"EricHallahan from EleutherAI was so kind to run the lambada evaluation task with the patch applied and found no degradation in accuracy and negligible differences in speed over multiple runs.",
"(It is worth noting that LAMBADA task contexts are significantly shorter than 256 tokens. I believe Eric is currently running QA4MRE, which has much longer contexts)",
"Hi @finetunenon\r\n\r\nThanks a lot for working on this. Let me run a few experiments to verify this and get back to you.",
"Great, I ran a small test and it seems to be working! (sorry about the earlier comment). Here's the script\r\n\r\n```python\r\nimport torch\r\nfrom torch import nn\r\nfrom transformers.models.gpt_neo.modeling_gpt_neo import GPTNeoAttentionMixin\r\nfrom transformers import GPTNeoConfig\r\n\r\nclass GPTNeoLocalSelfAttention(nn.Module, GPTNeoAttentionMixin):\r\n def __init__(self, config):\r\n super().__init__()\r\n\r\n self.register_buffer(\"masked_bias\", torch.tensor(-1e9))\r\n\r\n self.attn_dropout = nn.Dropout(config.attention_dropout)\r\n self.resid_dropout = nn.Dropout(config.resid_dropout)\r\n\r\n self.embed_dim = config.hidden_size\r\n self.num_heads = config.num_heads\r\n self.head_dim = self.embed_dim // self.num_heads\r\n if self.head_dim * self.num_heads != self.embed_dim:\r\n raise ValueError(\r\n f\"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim} and `num_heads`: {self.num_heads}).\"\r\n )\r\n\r\n self.k_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False)\r\n self.v_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False)\r\n self.q_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=False)\r\n self.out_proj = nn.Linear(self.embed_dim, self.embed_dim, bias=True)\r\n\r\n self.window_size = config.window_size\r\n\r\n def forward(\r\n self,\r\n hidden_states,\r\n attention_mask=None,\r\n layer_past=None,\r\n head_mask=None,\r\n use_cache=False,\r\n output_attentions=False,\r\n pad_qkv=False\r\n ):\r\n query = self.q_proj(hidden_states)\r\n\r\n if layer_past is not None:\r\n past = layer_past[0]\r\n key_value_hidden_states = torch.cat([past, hidden_states], dim=1)\r\n past_length = past.size()[1]\r\n else:\r\n key_value_hidden_states = hidden_states\r\n past_length = 0\r\n\r\n key = self.k_proj(key_value_hidden_states)\r\n value = self.v_proj(key_value_hidden_states)\r\n \r\n \r\n # compute block length and num_blocks\r\n batch_size, seq_length = hidden_states.shape[:2]\r\n full_seq_length = seq_length + past_length\r\n \r\n padding = None\r\n if pad_qkv:\r\n if layer_past is None and full_seq_length % self.window_size != 0 and full_seq_length > self.window_size:\r\n padding = self.window_size-(full_seq_length%self.window_size)\r\n if attention_mask is None:\r\n attention_mask = torch.zeros(query.shape[0], query.shape[1] + padding).to(query.device)\r\n attention_mask[:, padding:] = 1\r\n else:\r\n attention_mask = torch.cat([torch.zeros(attention_mask.shape[0], padding).to(attention_mask.device), attention_mask], axis=1)\r\n pad = lambda x: torch.cat([torch.zeros(x.shape[0],padding,x.shape[2]).to(x.device), x], axis=1)\r\n query, key, value = map(pad, (query, key, value))\r\n seq_length += padding\r\n full_seq_length += padding\r\n \r\n block_length, num_blocks = self._get_block_length_and_num_blocks(full_seq_length, self.window_size)\r\n \r\n # create buckets\r\n if layer_past is not None:\r\n # we just need 1 block with block_length 1 when caching is enabled\r\n query = self._split_seq_length_dim_to(query, 1, 1)\r\n else:\r\n query = self._split_seq_length_dim_to(query, num_blocks, block_length)\r\n\r\n key = self._look_back(key, block_length, self.window_size)\r\n value = self._look_back(value, block_length, self.window_size)\r\n\r\n # select key/value vectors only for the last block\r\n if layer_past is not None:\r\n key = key[:, -1:, ...]\r\n value = value[:, -1:, ...]\r\n\r\n query = self._split_heads(query, self.num_heads, self.head_dim)\r\n key = self._split_heads(key, self.num_heads, self.head_dim)\r\n value = 
self._split_heads(value, self.num_heads, self.head_dim)\r\n \r\n attention_mask = GPTNeoAttentionMixin.create_local_attention_mask(\r\n batch_size, full_seq_length, self.window_size, \"cpu\", attention_mask\r\n )\r\n\r\n if layer_past is not None:\r\n # only take the mask for the last block\r\n attention_mask = attention_mask[:, -1:, :, -1:, :]\r\n\r\n # attn\r\n attn_output, attn_weights = self._attn(\r\n query,\r\n key,\r\n value,\r\n causal_mask=attention_mask,\r\n masked_bias=self.masked_bias,\r\n attn_dropout=self.attn_dropout,\r\n head_mask=head_mask,\r\n )\r\n\r\n attn_output = self._merge_heads(attn_output, self.num_heads, self.head_dim)\r\n attn_output = attn_output.reshape(batch_size, seq_length, self.embed_dim)\r\n \r\n if padding is not None:\r\n attn_output = attn_output[:,padding:]\r\n attn_weights = attn_weights[:,padding:]\r\n\r\n attn_output = self.out_proj(attn_output)\r\n attn_output = self.resid_dropout(attn_output)\r\n\r\n outputs = (attn_output,)\r\n if output_attentions:\r\n outputs += (attn_weights,)\r\n\r\n return outputs # a, (attentions)\r\n\r\nconfig = GPTNeoConfig(hidden_size=16, num_heads=4)\r\nattn_layer = GPTNeoLocalSelfAttention(config).eval()\r\n\r\nmatched = []\r\nwith torch.no_grad():\r\n for seq_len in range(1, 2049):\r\n hidden_states = torch.randn(1, seq_len, 16)\r\n out = attn_layer(hidden_states)[0]\r\n out_with_padding = attn_layer(hidden_states, pad_qkv=True)[0]\r\n matched.append(torch.allclose(out, out_with_padding, atol=1e-5))\r\n\r\nall(matched)\r\n# True\r\n```\r\nI will run a few tests with the actual model and will let you know. If it works, feel free to open a PR :)\r\n",
"Thanks for testing. If it works with the actual model, how should cfd2eaa8cf82da8581825c6592b66d2789c5bc53 be handled? I tried adapting the patch, but the attention mask seems to work in a very different way and I haven't been able to figure it out yet.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,631 | 1,631 | NONE | null | ## Environment info
- `transformers` version: 4.5.1 / HEAD
- Platform: Linux/Colab Pro
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1 (CUDA 11.0)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes, NVIDIA P100
- Using distributed or parallel set-up in script?:
### Who can help
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): EleutherAI/gpt-neo-2.7B
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install transformers in a Colab Pro notebook
2. Run this script to log peak memory usage for inference with increasing sequence length: https://gist.github.com/finetuneanon/7ce0ed5090a27a383abffbbbc0433a29
3. Wait for it to crash with an OOM error in the attention matmul somewhere above sequence length 1850
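For context, a minimal sketch of such a measurement loop (the linked gist is the authoritative script; the exact generation settings there may differ):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B").half().cuda()

for seq_len in range(1, 2048):
    ids = torch.ones(1, seq_len, dtype=torch.long).cuda()
    torch.cuda.reset_peak_memory_stats()
    print(seq_len, torch.cuda.memory_allocated())
    with torch.no_grad():
        # generate a single extra token so the cost is dominated by the prompt pass
        model.generate(ids, max_length=seq_len + 1, pad_token_id=tokenizer.eos_token_id)
    print("ok", torch.cuda.max_memory_allocated())
    del ids
```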
Output:
```
1870 5436434432
ok 6535669248
1871 5436434432
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-f2aeed4489bd> in <module>()
21 return_dict_in_generate=True,
22 repetition_penalty=1.2,
---> 23 pad_token_id=tokenizer.eos_token_id
24 )
25 del ids
13 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in _attn(self, query, key, value, causal_mask, masked_bias, attn_dropout, attention_mask, head_mask)
238 key = key.to(torch.float32)
239
--> 240 attn_weights = torch.matmul(query, key.transpose(-1, -2))
241 attn_weights = torch.where(causal_mask, attn_weights, masked_bias.to(attn_weights.dtype))
242
RuntimeError: CUDA out of memory. Tried to allocate 4.59 GiB (GPU 0; 15.90 GiB total capacity; 9.75 GiB already allocated; 4.60 GiB free; 10.42 GiB reserved in total by PyTorch)
```
The full output can be found here: https://gist.github.com/finetuneanon/c7292ea676f57f5bb63803685d80bf5b
The output has the format:
```
sequence_length occupied_cuda_memory_before_inference
ok peak_occupied_cuda_memory_during_inference
```
Doing inference with real text has the same issue.
## Expected behavior
I expected memory usage to increase steadily instead of jumping around wildly, but I am not sure if this might actually be the correct behaviour. If it is correct, reliably doing inference on long sequences on 16GB of VRAM seems to be impossible, but sometimes it works.
I have also plotted the peak memory allocation during inference:

The green line is peak memory allocation, the brown line is the amount of memory in use before running inference. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11320/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11319 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11319/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11319/comments | https://api.github.com/repos/huggingface/transformers/issues/11319/events | https://github.com/huggingface/transformers/issues/11319 | 861,461,850 | MDU6SXNzdWU4NjE0NjE4NTA= | 11,319 | Error in loading model tokenizer ('Helsinki-NLP/opus-mt-en-fr' actually loads 'Helsinki-NLP/opus-mt-en-de') | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should upgrade to the last version of transformers, which fully relies on the repository of a pretrained model instead of using special files like here.",
"Thanks!"
] | 1,618 | 1,620 | 1,620 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.3.3
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.1
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not at this stage in the code
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten as per https://huggingface.co/transformers/model_doc/marian.html specifications
Models:
- marian : @patrickvonplaten
Library:
- tokenizers: @LysandreJik
Documentation: @sgugger
## Information
Model I am using : MarianMT, 'Helsinki-NLP/opus-mt-en-fr'
The problem arises when using:
[X] the official example scripts
The tasks I am working on is:
not relevant
## To reproduce
Steps to reproduce the behavior:
```
import os
import urllib.request

from transformers import MarianMTModel, MarianTokenizer
MT_model_name = 'Helsinki-NLP/opus-mt-en-fr'
MT_tokenizer = MarianTokenizer.from_pretrained(MT_model_name)
def download_vocab_files_for_tokenizer(tokenizer, model_type, output_path='/vocab'):
vocab_files_map = tokenizer.pretrained_vocab_files_map
vocab_files = {}
for resource in vocab_files_map.keys():
print (vocab_files_map[resource])
download_location = vocab_files_map[resource][model_type]
f_path = os.path.join(output_path, os.path.basename(download_location))
urllib.request.urlretrieve(download_location, f_path)
vocab_files[resource] = f_path
return vocab_files
vocab_files = download_vocab_files_for_tokenizer(tokenizer=MT_tokenizer, model_type=MT_model_name, output_path="/vocab")
```
> {'Helsinki-NLP/opus-mt-en-de': 'https://cdn.huggingface.co/Helsinki-NLP/opus-mt-en-de/source.spm'}
>
> ---------------------------------------------------------------------------
> KeyError Traceback (most recent call last)
> <ipython-input-6-9d4f64132d23> in <module>
> ----> 1 process_datasets(source_datasets_paths, dataset_dir, test_mode=True, clean_sentences=True, translate=False)
>
> ~\...py in process_datasets(source_datasets_paths, dataset_dir, test_mode, clean_sentences, translate, max_sample, negative_sampling)
> 171 MT_model_name = 'Helsinki-NLP/opus-mt-en-fr'
> 172 MT_tokenizer = MarianTokenizer.from_pretrained(MT_model_name)
> --> 173 vocab_files = download_vocab_files_for_tokenizer(tokenizer=MT_tokenizer, model_type=MT_model_name, output_path="/vocab")
> 174
> 175 print (vocab_files)
>
> ~...py in download_vocab_files_for_tokenizer(tokenizer, model_type, output_path)
> 29 print (vocab_files_map[resource])
> 30 print (model_type)
> ---> 31 print (vocab_files_map[resource][model_type])
> 32 download_location = vocab_files_map[resource][model_type]
> 33 f_path = os.path.join(output_path, os.path.basename(download_location))
>
> KeyError: 'Helsinki-NLP/opus-mt-en-fr'
## Expected behavior
> {'Helsinki-NLP/opus-mt-en-fr': 'https://cdn.huggingface.co/Helsinki-NLP/opus-mt-en-fr/source.spm'}
Then the function outputs a dictionary containing, for the 'Helsinki-NLP/opus-mt-en-fr' key, a path to the corresponding file found remotely at https://cdn.huggingface.co/Helsinki-NLP/opus-mt-en-fr/source.spm (link checked, it works) and downloaded to the local folder ./vocab.
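As an aside, a simpler way to materialize the tokenizer files locally (and closer to how recent transformers versions work) is to let the tokenizer write them itself, e.g.:

```python
from transformers import MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
tokenizer.save_pretrained("./vocab")  # writes source.spm, target.spm, vocab.json, ...
```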
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11319/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11318 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11318/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11318/comments | https://api.github.com/repos/huggingface/transformers/issues/11318/events | https://github.com/huggingface/transformers/pull/11318 | 861,384,555 | MDExOlB1bGxSZXF1ZXN0NjE4MDU1ODAx | 11,318 | Load checkpoint without re-creating the model | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | COLLABORATOR | null | # What does this PR do?
This PR avoids re-creating the model when loading a checkpoint in the Trainer. As mentioned in #11294, the current loading replaces the model the user passed, so weights that were frozen are no longer frozen, which leads to unexpected OOM errors.
A test is also added to check that frozen parameters stay frozen after resuming.
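In spirit, the change amounts to loading the checkpoint weights into the existing model instead of instantiating a new one (rough sketch, not the actual diff):

```python
import os
import torch

# inside Trainer.train(), when resuming from a checkpoint directory:
state_dict = torch.load(os.path.join(resume_from_checkpoint, "pytorch_model.bin"), map_location="cpu")
self.model.load_state_dict(state_dict)  # keeps the existing modules, hence their requires_grad flags
```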
Fixes #11294 and probably #11317 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11318/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11318",
"html_url": "https://github.com/huggingface/transformers/pull/11318",
"diff_url": "https://github.com/huggingface/transformers/pull/11318.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11318.patch",
"merged_at": 1618878689000
} |
https://api.github.com/repos/huggingface/transformers/issues/11317 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11317/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11317/comments | https://api.github.com/repos/huggingface/transformers/issues/11317/events | https://github.com/huggingface/transformers/issues/11317 | 861,277,476 | MDU6SXNzdWU4NjEyNzc0NzY= | 11,317 | large memory usage when resuming training from a checkpoint | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Similar issue to https://github.com/huggingface/transformers/issues/11294",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.4.5
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
@sgugger @patrickvonplaten, @patil-suraj
## Information
Hi
I am training a t5-base model on the MNLI dataset with batch size 128. Training works fine, but as soon as I resume from a checkpoint I run into a memory issue: memory usage is much larger while resuming training.
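One mitigation sometimes tried while this is debugged is to load the checkpoint weights on CPU and copy them into the existing model, instead of letting a second full copy land on the GPU; a rough sketch with a hypothetical checkpoint path:

```python
import torch
from transformers import T5ForConditionalGeneration

checkpoint_dir = "output/checkpoint-50000"  # hypothetical path
model = T5ForConditionalGeneration.from_pretrained("t5-base")
state_dict = torch.load(f"{checkpoint_dir}/pytorch_model.bin", map_location="cpu")
model.load_state_dict(state_dict)
model.to("cuda")
torch.cuda.empty_cache()  # release cached blocks left over from the first copy
```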
## Expected behavior
Resuming training from a checkpoint should take the same amount of memory as training from scratch.
## Error Stack
```
Traceback (most recent call last):
File "run_seq2seq.py", line 671, in <module>
main()
File "run_seq2seq.py", line 629, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/users/dara/dev/codes/seq2seq/third_party/trainers/trainer.py", line 329, in train
tr_loss += self.training_step(model, inputs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/transformers/trainer.py", line 1486, in training_step
loss = self.compute_loss(model, inputs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/transformers/trainer.py", line 1518, in compute_loss
outputs = model(**inputs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 1762, in forward
lang=lang
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 1115, in forward
task=task
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 752, in forward
output_attentions=output_attentions,
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 653, in forward
output_attentions=output_attentions,
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 518, in forward
hidden_states, self.k, key_value_states, past_key_value[0] if past_key_value is not None else None
File "/users/dara/dev/codes/seq2seq/third_party/models/t5/modeling_t5.py", line 501, in project
hidden_states = shape(proj_layer(key_value_states))
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 94, in forward
return F.linear(input, self.weight, self.bias)
File "/users/dara/libs/anaconda3/envs/test1/lib/python3.7/site-packages/torch/nn/functional.py", line 1753, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 48.00 MiB (GPU 0; 23.70 GiB total capacity; 21.38 GiB already allocated; 41.69 MiB free; 22.18 GiB reserved in total by PyTorch)
0%|
```
Thanks for your help and suggestions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11317/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11317/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11316 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11316/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11316/comments | https://api.github.com/repos/huggingface/transformers/issues/11316/events | https://github.com/huggingface/transformers/pull/11316 | 861,255,232 | MDExOlB1bGxSZXF1ZXN0NjE3OTQ5NjU1 | 11,316 | Added BERT pretraining example running on Graphcore IPUs to research projects | {
"login": "jimypbr",
"id": 7130791,
"node_id": "MDQ6VXNlcjcxMzA3OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7130791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimypbr",
"html_url": "https://github.com/jimypbr",
"followers_url": "https://api.github.com/users/jimypbr/followers",
"following_url": "https://api.github.com/users/jimypbr/following{/other_user}",
"gists_url": "https://api.github.com/users/jimypbr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimypbr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimypbr/subscriptions",
"organizations_url": "https://api.github.com/users/jimypbr/orgs",
"repos_url": "https://api.github.com/users/jimypbr/repos",
"events_url": "https://api.github.com/users/jimypbr/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimypbr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Adds a BERT pretraining example running on Graphcore IPUs to the research projects folder.
## Before submitting
- This was discussed in the HuggingFace/Graphcore meeting
## Who can review?
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11316/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11316",
"html_url": "https://github.com/huggingface/transformers/pull/11316",
"diff_url": "https://github.com/huggingface/transformers/pull/11316.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11316.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11315 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11315/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11315/comments | https://api.github.com/repos/huggingface/transformers/issues/11315/events | https://github.com/huggingface/transformers/issues/11315 | 861,249,980 | MDU6SXNzdWU4NjEyNDk5ODA= | 11,315 | T5Model crashes when trained with multiple GPUs | {
"login": "eladyt",
"id": 1304550,
"node_id": "MDQ6VXNlcjEzMDQ1NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1304550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladyt",
"html_url": "https://github.com/eladyt",
"followers_url": "https://api.github.com/users/eladyt/followers",
"following_url": "https://api.github.com/users/eladyt/following{/other_user}",
"gists_url": "https://api.github.com/users/eladyt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladyt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladyt/subscriptions",
"organizations_url": "https://api.github.com/users/eladyt/orgs",
"repos_url": "https://api.github.com/users/eladyt/repos",
"events_url": "https://api.github.com/users/eladyt/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladyt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This code seems to be using `simpletransformers`, sadly we won't dive into that. You could use the `run_translation.py` script [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq) which supports multi-gpu training. See this [doc](https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision) for distributed training using `Trainer`.\r\n\r\nAnd with `Trainer` you could also leverage `deepspeed` to get more efficiency, see this [doc](https://huggingface.co/transformers/main_classes/trainer.html#trainer-integrations) for `deepspeed` integration ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-1041-azure-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten, @patil-suraj
## Information
I'm training a T5 translation model. It works on a CPU or single GPU, but when I try to run it with multiple GPUs I get the following error:
```
Traceback (most recent call last):
File "train2.py", line 43, in <module>
model.train_model(train_df, eval_data=eval_df)
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 206, in train_model
**kwargs,
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 605, in train
**kwargs,
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 705, in eval_model
result = self.evaluate(eval_dataset, output_dir, verbose=verbose, silent=silent, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/simpletransformers/t5/t5_model.py", line 763, in evaluate
outputs = model(**inputs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
Original Traceback (most recent call last):
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1506, in forward
return_dict=return_dict,
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 881, in forward
inputs_embeds = self.embed_tokens(input_ids)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 158, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/eladyt/.local/lib/python3.7/site-packages/torch/nn/functional.py", line 1916, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Input, output and indices must be on the current device
```
## To reproduce
Steps to reproduce the behavior:
Here is the code (from https://towardsdatascience.com/how-to-train-an-mt5-model-for-translation-with-simple-transformers-30ba5fa66c5f):
```python
import logging
import pandas as pd
from simpletransformers.t5 import T5Model, T5Args
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)
train_df = pd.read_csv("data2/train.tsv", sep="\t").astype(str)
eval_df = pd.read_csv("data2/eval.tsv", sep="\t").astype(str)
train_df["prefix"] = ""
eval_df["prefix"] = ""
model_args = T5Args()
model_args.max_seq_length = 25
model_args.train_batch_size = 20
model_args.eval_batch_size = 20
model_args.num_train_epochs = 20
model_args.evaluate_during_training = True
model_args.evaluate_during_training_steps = 30000
model_args.use_multiprocessing = False
model_args.fp16 = True
model_args.save_steps = -1
model_args.save_eval_checkpoints = False
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.preprocess_inputs = True
model_args.num_return_sequences = 1
model_args.n_gpu=4
model_args.is_model_parallel = True
model = T5Model("mt5", "google/mt5-base", args=model_args)
model.train_model(train_df, eval_data=eval_df)
```
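For what it's worth, the traceback shows `data_parallel.py` at two nesting levels, which suggests the model ends up wrapped in `DataParallel` twice (once by the wrapper and once again inside a replica). A minimal sanity check that bypasses simpletransformers, assuming at least one CUDA device (an illustration only, not the fix):

```python
import torch
from transformers import MT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")
model = torch.nn.DataParallel(model).to("cuda:0")  # wrap exactly once

batch = tokenizer(["an example source sentence"], return_tensors="pt", padding=True)
labels = tokenizer(["an example target sentence"], return_tensors="pt", padding=True).input_ids

outputs = model(
    input_ids=batch.input_ids.to("cuda:0"),          # inputs must sit on the primary device
    attention_mask=batch.attention_mask.to("cuda:0"),
    labels=labels.to("cuda:0"),
)
loss = outputs.loss.mean()  # DataParallel returns one loss per replica
print(loss.item())
```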
## Expected behavior
A model should be generated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11315/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11314 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11314/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11314/comments | https://api.github.com/repos/huggingface/transformers/issues/11314/events | https://github.com/huggingface/transformers/pull/11314 | 861,130,981 | MDExOlB1bGxSZXF1ZXN0NjE3ODQ3NjU5 | 11,314 | Removed `max_length` from being mandatory within `generate`. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"All tests pass, the max_length was actually a bug hidden by `while cur_len < max_length` that was still in there.\r\nThe bart tests (at least) caught it automatically and enabled me to change it to the correct comparison ! ",
"> All tests pass, the max_length was actually a bug hidden by `while cur_len < max_length` that was still in there.\r\n> The bart tests (at least) caught it automatically and enabled me to change it to the correct comparison !\r\n\r\nGreat! Feel free to merge then!"
] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
- Moving on to fully using `StoppingCriteria` for `greedy` and `sample` modes.
- `max_length` is still used for `beam_search` and `group_beam_search` (follow-up PR).
- Fixes a bug with `MaxLengthStoppingCriteria`: we should stop as soon as we hit `max_length`, so the comparison needs to be "greater than or equal", which affects the tests.
- Added options to pass `logits_processor` and `stopping_criteria` directly to `generate`, so that users can define their own (see the short sketch below).
- Modified the backward-compatibility tests to make sure we issue a warning.
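A rough usage sketch of the two new keyword arguments, assuming the processor and criteria classes exported at the time of this PR (how they merge with the default lists may differ between versions):

```python
from transformers import (
    GPT2LMHeadModel,
    GPT2Tokenizer,
    LogitsProcessorList,
    MinLengthLogitsProcessor,
    StoppingCriteriaList,
    MaxLengthCriteria,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids

outputs = model.generate(
    input_ids,
    do_sample=False,
    logits_processor=LogitsProcessorList(
        [MinLengthLogitsProcessor(10, eos_token_id=model.config.eos_token_id)]
    ),
    stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(max_length=20)]),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```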
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11314/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11314",
"html_url": "https://github.com/huggingface/transformers/pull/11314",
"diff_url": "https://github.com/huggingface/transformers/pull/11314.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11314.patch",
"merged_at": 1618999006000
} |
https://api.github.com/repos/huggingface/transformers/issues/11313 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11313/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11313/comments | https://api.github.com/repos/huggingface/transformers/issues/11313/events | https://github.com/huggingface/transformers/pull/11313 | 861,129,822 | MDExOlB1bGxSZXF1ZXN0NjE3ODQ2Njk2 | 11,313 | [WIP] Add PiT | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [] | 1,618 | 1,648 | null | MEMBER | null | # What does this PR do?
Adds `PoolingTransformer` for image classification. https://github.com/naver-ai/pit
Todos:
- [ ] Fix tests
- [ ] Add doc
- [ ] port and push all `PiT` checkpoints | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11313/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11313/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11313",
"html_url": "https://github.com/huggingface/transformers/pull/11313",
"diff_url": "https://github.com/huggingface/transformers/pull/11313.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11313.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11312 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11312/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11312/comments | https://api.github.com/repos/huggingface/transformers/issues/11312/events | https://github.com/huggingface/transformers/issues/11312 | 861,114,173 | MDU6SXNzdWU4NjExMTQxNzM= | 11,312 | The output of IBERT is float32. Am I doing wrong? | {
"login": "kyoungrok0517",
"id": 1051900,
"node_id": "MDQ6VXNlcjEwNTE5MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1051900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kyoungrok0517",
"html_url": "https://github.com/kyoungrok0517",
"followers_url": "https://api.github.com/users/kyoungrok0517/followers",
"following_url": "https://api.github.com/users/kyoungrok0517/following{/other_user}",
"gists_url": "https://api.github.com/users/kyoungrok0517/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kyoungrok0517/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kyoungrok0517/subscriptions",
"organizations_url": "https://api.github.com/users/kyoungrok0517/orgs",
"repos_url": "https://api.github.com/users/kyoungrok0517/repos",
"events_url": "https://api.github.com/users/kyoungrok0517/events{/privacy}",
"received_events_url": "https://api.github.com/users/kyoungrok0517/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The I-BERT framework allows for easy fine-tuning in PyTorch to find the optimal parameters. Once those are found, the model can be deployed in a setup using int-8 capable operations such as TensorRT.\r\n\r\n@kssteven418 will be able to explain better than me.",
"Yes, the current I-BERT implementation (both in HF and Fairseq in my personal repo) only searches for the optimal int8 parameters through quantization-aware training and leaves out the actual model deployment. That is to say, it simulates int8 inference using floating-point representations and operations. One reason we are not supporting int8 execution is that, PyTorch only supports int8 inference via its own quantization APIs. Therefore, the optimal parameters found in the I-BERT framework must be then exported to different frameworks that can support int8 deployment (TensorRT, TVM, etc. are some popular frameworks). We haven't yet open-sourced the code for model deployment.",
"@kssteven418 Thanks for the answer. What I want is the final output in `int8`. Then will it be ok to just cast the output into `int8` even with pytorch? The whole execution doesn't have to be run in integer mode.",
"Yes, if you look at the quantization modules, e.g., QuantLinear, there are additional attributes such as `weight_integer` that represent the int8 model parameters in the torch.float. You can cast those numbers to torch.int8, but just make sure that you don't round down the numbers - they must be rounded.",
"> Yes, if you look at the quantization modules, e.g., QuantLinear, there are additional attributes such as `weight_integer` that represent the int8 model parameters in the torch.float. You can cast those numbers to torch.int8, but just make sure that you don't round down the numbers - they must be rounded.\r\n\r\nThank you very much. This will be the last question I expect :) \r\n\r\nThe following is how I defined my network and how the input passes flows. Briefly, I want the output of `IBertModel` to pass one more `QuantLinear` layer to get the final representation, like this. \r\n1. input -> `IBertModel`\r\n1. `IBertModel` -> `QuantAct` \r\n1. `QuantAct` -> `QuantLinear` -> `final_representation`\r\n\r\nBut the `QuantLinear` module outputs two tensors: `quant_x` and `scaling_factor`. \r\n\r\n**Do I have to deal with both, or can I just use `quant_x` as the final representation?** \r\n\r\n```python\r\nreturn (\r\n F.linear(x_int, weight=self.weight_integer, bias=self.bias_integer) * bias_scaling_factor, # use only this?\r\n bias_scaling_factor, # or do I also need this?\r\n)\r\n```\r\n\r\nThis is my code.\r\n```python\r\n# Define network\r\nself.pre_linear = QuantAct(self.act_bit, quant_mode=self.quant_mode)\r\n\r\nself.linear = QuantLinear(\r\n self.input_size,\r\n self.n,\r\n quant_mode=self.quant_mode,\r\n)\r\n\r\n...\r\n\r\n# Generate output\r\nx, pre_scaling_factor = self.pre_linear(x)\r\nx, scaling_factor = self.linear(x, pre_scaling_factor)\r\n\r\n# x = x * scaling_factor?\r\n```",
"As you have also noticed, all the quant modules including QuantLinear return two tensors: `quant_x` and `scaling_factor`. Here, `quant_x` / `scaling_factor` represents the quantized (integer) value for the activation - in other words, `quant_x` is the dequantized value. Therefore, you do not have to multiply it with the `scaling_factor`. ",
"Hi!\r\n\r\nI would like to deploy IBERT on a framework like TensorRT. I am a complete beginner in that field and I was wandering if someone could give me some tips on the main steps of how to quantize IBERT? \r\n\r\nThank you!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.8.0-49-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: DDP (pytorch-lightning)
### Who can help
@LysandreJik, @patil-suraj, @patrickvonplaten
## Information
I'm trying IBert. The first output of the model is `float32` so I'm curious why it happens. I set `quant_mode=True`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I'm using MSMARCO (IR dataset)
## To reproduce
Steps to reproduce the behavior:
1. Initialize a model with the command `AutoModel.from_pretrained('kssteven/ibert-roberta-base', quant_mode=True, add_pooling_layer=False)`
2. Check the `dtype` of the model output.
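The two steps above can be condensed into a short snippet (illustration only; the tokenizer choice is an assumption on my side, and the `weight_integer` attribute is the one mentioned in the maintainer's comments above):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("kssteven/ibert-roberta-base", quant_mode=True, add_pooling_layer=False)
tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base")

inputs = tokenizer("hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.dtype)  # prints torch.float32, which is the question here

# The int8 values are stored as float tensors on the quantized modules and have to be
# rounded before casting:
for name, module in model.named_modules():
    if hasattr(module, "weight_integer"):
        int8_weight = torch.round(module.weight_integer).to(torch.int8)
        print(name, int8_weight.dtype)
        break
```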
## Expected behavior
The output `dtype` should be `int8`, but I see `float32`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11312/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11312/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11311 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11311/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11311/comments | https://api.github.com/repos/huggingface/transformers/issues/11311/events | https://github.com/huggingface/transformers/pull/11311 | 861,050,711 | MDExOlB1bGxSZXF1ZXN0NjE3NzgyNDE3 | 11,311 | Update hf_argparser.py | {
"login": "qqpann",
"id": 17402261,
"node_id": "MDQ6VXNlcjE3NDAyMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17402261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqpann",
"html_url": "https://github.com/qqpann",
"followers_url": "https://api.github.com/users/qqpann/followers",
"following_url": "https://api.github.com/users/qqpann/following{/other_user}",
"gists_url": "https://api.github.com/users/qqpann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqpann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqpann/subscriptions",
"organizations_url": "https://api.github.com/users/qqpann/orgs",
"repos_url": "https://api.github.com/users/qqpann/repos",
"events_url": "https://api.github.com/users/qqpann/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqpann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | CONTRIBUTOR | null | Dictionary type should be annotated with `Dict`.
# What does this PR do?
<!-- Remove if not applicable -->
Fixes type annotation for dict.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11311/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11311",
"html_url": "https://github.com/huggingface/transformers/pull/11311",
"diff_url": "https://github.com/huggingface/transformers/pull/11311.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11311.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11310 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11310/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11310/comments | https://api.github.com/repos/huggingface/transformers/issues/11310/events | https://github.com/huggingface/transformers/issues/11310 | 861,017,697 | MDU6SXNzdWU4NjEwMTc2OTc= | 11,310 | [Benchmark] GPT2LMHeadModel (gpt2-medium) forward pass inference became 9% slower compared to 2.8.0 release | {
"login": "LSinev",
"id": 12072891,
"node_id": "MDQ6VXNlcjEyMDcyODkx",
"avatar_url": "https://avatars.githubusercontent.com/u/12072891?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LSinev",
"html_url": "https://github.com/LSinev",
"followers_url": "https://api.github.com/users/LSinev/followers",
"following_url": "https://api.github.com/users/LSinev/following{/other_user}",
"gists_url": "https://api.github.com/users/LSinev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LSinev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LSinev/subscriptions",
"organizations_url": "https://api.github.com/users/LSinev/orgs",
"repos_url": "https://api.github.com/users/LSinev/repos",
"events_url": "https://api.github.com/users/LSinev/events{/privacy}",
"received_events_url": "https://api.github.com/users/LSinev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"@patil-suraj Can you please check if this speed decrease of GPT2LMHeadModel model call is not caused by your PR #11225?",
"Hi @LSinev \r\n\r\nThank you for posting the detailed issue. I will take a look.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | CONTRIBUTOR | null | # 🖥 Benchmarking `GPT2LMHeadModel`
## Benchmark
GPT2LMHeadModel model call (and model.generate() too)
## Set-up
gpu: gtx 1080
pytorch 1.4.0
transformers 2.8.0, 3.5.1, 4.5.1 releases and latest master branch
Code to reproduce
```python
import timeit
import numpy as np
import torch
from transformers import __version__ as trans_version
from transformers import (
GPT2LMHeadModel,
)
print("transformers:", trans_version)
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
print(model.__class__)
model.to("cuda")
model.eval()
rounding = 3
timed_result = timeit.repeat(stmt="""model.generate(input_ids=inp_t,
max_length=1024,
min_length=1024,
do_sample=False,
early_stopping=False, pad_token_id=50256, eos_token_id=50256)""",
setup="""inp = np.random.randint(low=1, high=50255, size=1014);inp_t = torch.LongTensor(inp).unsqueeze(0).to("cuda")""",
repeat=30, number=1, globals=globals())
timed_model_result = timeit.repeat(stmt="""with torch.no_grad():
model(input_ids=inp_t)""",
setup="""inp = np.random.randint(low=1, high=50255, size=1024);inp_t = torch.LongTensor(inp).unsqueeze(0).to("cuda")""",
repeat=30, number=10, globals=globals())
print('GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std):',
str(np.round(np.mean(timed_result), rounding)) + '±' + str(np.round(3 * np.std(timed_result), rounding)))
print('GPT2LMmedium model call, 1024 input 10 times (mean ± 3std):',
str(np.round(np.mean(timed_model_result), rounding)) + '±' + str(np.round(3 * np.std(timed_model_result), rounding)))
```
## Results
While the `model.generate()` code has improved and now runs faster, the model forward pass used in a direct model call became 9% slower.
transformers: **2.8.0**
<class 'transformers.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.557±0.037
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): **1.821**±0.017
transformers: **3.5.1**
<class 'transformers.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.37±0.003
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): 1.849±0.012
transformers: **4.5.1**
<class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.36±0.003
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): 1.823±0.013
transformers: **4.6.0.dev0**
<class 'transformers.models.gpt2.modeling_gpt2.GPT2LMHeadModel'>
GPT2LMmedium model.generate (using caching) 1014 input, generate to 1024 (mean ± 3std): 0.367±0.004
GPT2LMmedium model call, 1024 input 10 times (mean ± 3std): **1.991**±0.013
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11310/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11309 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11309/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11309/comments | https://api.github.com/repos/huggingface/transformers/issues/11309/events | https://github.com/huggingface/transformers/pull/11309 | 861,007,075 | MDExOlB1bGxSZXF1ZXN0NjE3NzQ3Njky | 11,309 | Vit deit fixes | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The notebook looks amazing!\r\n\r\nOut of curiosity, have you tried using `Trainer` to fine-tune `ViT`?",
"Added a notebook that uses the Trainer. Includes a nice confusion matrix at the end :) "
] | 1,618 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
* Some small documentation improvements of ViT + DeiT.
* Adds a cats image to the `fixtures/test_samples` folder, which is used in the integration tests of both ViT and DeiT.
* Adds a community notebook, illustrating how to fine-tune the Vision Transformer on CIFAR-10.
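A condensed sketch of what such a fine-tuning notebook boils down to (illustrative only; the checkpoint name and the random images stand in for a real CIFAR-10 loader):

```python
import numpy as np
import torch
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224-in21k", num_labels=10)

# Stand-in for a CIFAR-10 batch: 4 RGB images plus labels.
images = [Image.fromarray(np.uint8(np.random.rand(32, 32, 3) * 255)) for _ in range(4)]
inputs = feature_extractor(images=images, return_tensors="pt")
labels = torch.tensor([0, 1, 2, 3])

outputs = model(pixel_values=inputs.pixel_values, labels=labels)
outputs.loss.backward()  # from here it is a standard PyTorch / Trainer training loop
```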
(there's something weird going on with the .gitignore within the test_samples folder , see files changed). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11309/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11309",
"html_url": "https://github.com/huggingface/transformers/pull/11309",
"diff_url": "https://github.com/huggingface/transformers/pull/11309.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11309.patch",
"merged_at": 1620834363000
} |
https://api.github.com/repos/huggingface/transformers/issues/11308 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11308/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11308/comments | https://api.github.com/repos/huggingface/transformers/issues/11308/events | https://github.com/huggingface/transformers/issues/11308 | 860,966,059 | MDU6SXNzdWU4NjA5NjYwNTk= | 11,308 | RAG with RAY implementation: Ray workers memory slowly increase over time. | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think I solved the memory leakage issue. In my case, I simply wanted to update the index parameters in **self.retriever object**. So I used **def set_index function**. But I observe for some reason ray works completely cannot flush out the old index and its related objects.\r\n\r\nSo now when I want to update it, I delete the self.retriever object and re-initialize it. \r\n\r\n```\r\nclass RayRetriever:\r\n def __init__(self):\r\n self.initialized = False\r\n\r\n def create_rag_retriever(self, config, question_encoder_tokenizer, ctx_encoder_tokenizer,generator_tokenizer, index):\r\n if not self.initialized:\r\n self.retriever = RagRetriever(\r\n config,\r\n question_encoder_tokenizer=question_encoder_tokenizer,\r\n ctx_encoder_tokenizer=ctx_encoder_tokenizer,\r\n generator_tokenizer=generator_tokenizer,\r\n index=index,\r\n init_retrieval=False,\r\n )\r\n self.initialized = True\r\n\r\n def init_retrieval(self):\r\n self.retriever.index.init_index()\r\n\r\n def set_index(self,index):\r\n self.retriever.index=index #with this new index class all the paramters in HFindex class get updated\r\n \r\n\r\n def clear_object(self): #we can call this first delete all thing things and again call create_rag_retriever\r\n del self.retriever\r\n self.initialized = False\r\n \r\n\r\n def retrieve(self, question_hidden_states, n_docs):\r\n\r\n doc_ids, retrieved_doc_embeds = self.retriever._main_retrieve(question_hidden_states, n_docs)\r\n doc_dicts= self.retriever.index.get_doc_dicts(doc_ids)\r\n\r\n return doc_ids, retrieved_doc_embeds,doc_dicts\r\n``````"
] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | I fine-tune the RAG with RAY workers from 50 000 steps. When I checked the MEM% with the top command, I can see the memory consumption keep growing slowly. Usually, it should use around 20GB. After 50000 steps it raises up to 24 GB.
This could eventually crash the system with an OOM error. I did a background check and found that the Redis server keeps increasing its memory consumption.
So is it OK to set a value for **object_store_memory**?
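A minimal sketch of capping the object store at startup, assuming Ray's `object_store_memory` argument (value in bytes); Ray also exposes a cap for Redis, though that argument name depends on the Ray version:

```python
import ray

ray.init(
    num_cpus=8,                          # placeholder
    object_store_memory=8 * 1024 ** 3,   # cap the plasma object store at ~8 GB
)
```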
I get something similar to this... (found this from the [issue](https://github.com/ray-project/ray/issues/10431) )

@lhoestq @amogkam | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11308/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11307 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11307/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11307/comments | https://api.github.com/repos/huggingface/transformers/issues/11307/events | https://github.com/huggingface/transformers/issues/11307 | 860,870,722 | MDU6SXNzdWU4NjA4NzA3MjI= | 11,307 | Getting time offsets of beginning and end of each word in Wav2Vec2 | {
"login": "theainerd",
"id": 15798640,
"node_id": "MDQ6VXNlcjE1Nzk4NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/15798640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theainerd",
"html_url": "https://github.com/theainerd",
"followers_url": "https://api.github.com/users/theainerd/followers",
"following_url": "https://api.github.com/users/theainerd/following{/other_user}",
"gists_url": "https://api.github.com/users/theainerd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theainerd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theainerd/subscriptions",
"organizations_url": "https://api.github.com/users/theainerd/orgs",
"repos_url": "https://api.github.com/users/theainerd/repos",
"events_url": "https://api.github.com/users/theainerd/events{/privacy}",
"received_events_url": "https://api.github.com/users/theainerd/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | open | false | null | [] | [
"@patrickvonplaten @patil-suraj @sgugger ",
"This sounds like a nice feature, but I sadly won't have time to work on it - let's see if someone in the community could be interested :-)",
"There is something like this which may help : https://github.com/lumaku/espnet/blob/espnet2_ctc_segmentation/espnet2/bin/asr_align.py\r\n\r\nI need some help in integrating it to wav2vec2 in hugging face. ",
"@theainerd are you working on this feature?",
"I would also really like to see this feature.\r\n\r\n@theainerd I'd be happy to help in any way I can although I'm not too familiar with the Wav2Vec transformer.\r\n\r\n@patrickvonplaten do you think you could write out a brief outline of what you think the steps required would be?",
"Hi there!\r\n\r\nI'm very very new to collaborating on open-source projects as well as on using huggingface/transformers in general therefore I'm not confident I can come up with a solution for this issue -- however I did some poking around with tutorials surrounding Wav2Vec2 and I was thinking of ways on how this might be implemented:\r\n\r\nIt seems like the Wav2Vec2FeatureExtractor does most of the heavylifting of converting the raw audio array to suitable input values\r\n\r\n-> These input values are then fed into the model to obtain the logits (Dimension of the output is observed to be dropped a considerable amount here)\r\n\r\n\r\n-> after applying argmax to obtain the IDs, these IDs are then fed back into the Wav2Vec2CTCTokenizer decode/batch_decode function to obtain the transcription.\r\n\r\n\r\nPerhaps information of the sampling rate should be stored within the Tokenizer class such that during decode it's able to make use of this information to determine the timestamp? Or it might be possible to store it within the Wav2Vec2Processor class and have some wrapper functions take care of determining the timestamp and including it during the decode step\r\n\r\nA relation of how the input values dimensions are mapped to the output logit's dimensions would be needed for this, which I don't have the expertise at the moment to figure out \r\n\r\nCC:\r\n@theainerd \r\n@MerryOscar \r\n@patrickvonplaten \r\n\r\nsources I've been referring to -- \r\nhttps://www.kdnuggets.com/2021/03/speech-text-wav2vec.html (I realise this is outdated with the old tokenizer class, which seems to perform feature extraction as well)\r\n\r\nhttps://huggingface.co/blog/fine-tune-wav2vec2-english\r\n",
"+1 on this, i'd really appreciate timestamped words as well. the datasets like timit, etc. seem to have this info, but i guess that's part of their test set, not an output from the model itself. ",
"Here's what i've found so far: \r\nif speech length is - 480,000\r\ninput_values lenth - 480,000\r\nlogits length - 1499\r\n\r\nthis was for a 30s audio file. \r\n`\r\nmodel = Wav2Vec2ForCTC\r\nprocessor = Wav2Vec2Processor\r\n\r\n input_values = processor(speech, return_tensors=\"pt\").input_values\r\n logits = model(input_values).logits\r\n`",
"> Here's what i've found so far:\r\n> if speech length is - 480,000\r\n> input_values lenth - 480,000\r\n> logits length - 1499\r\n> \r\n> this was for a 30s audio file.\r\n> `\r\n> model = Wav2Vec2ForCTC\r\n> processor = Wav2Vec2Processor\r\n> \r\n> ```\r\n> input_values = processor(speech, return_tensors=\"pt\").input_values\r\n> logits = model(input_values).logits\r\n> ```\r\n> \r\n> `\r\n\r\nThanks for investigating on this -- while I think it may be possible to just use the ratio and sampling rate to derive the timestamp, what I'm afraid of is that this ratio might just be a \"magic number\" and might differ if there are variations in the configuration of the Wav2Vec2 model\r\n\r\nCurrent ratio from input_values size to logits seem to be around **320**\r\n\r\ne.g.:\r\nDoes the ratio change if the [hyperparameters](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2Config) of the model are changed? \r\n\r\n\r\nIs this ratio constant for varying size of audio? (Experiment with different size WAV clips and check the ratio)\r\n",
"> > Here's what i've found so far:\r\n> > if speech length is - 480,000\r\n> > input_values lenth - 480,000\r\n> > logits length - 1499\r\n> > this was for a 30s audio file.\r\n> > `\r\n> > model = Wav2Vec2ForCTC\r\n> > processor = Wav2Vec2Processor\r\n> > ```\r\n> > input_values = processor(speech, return_tensors=\"pt\").input_values\r\n> > logits = model(input_values).logits\r\n> > ```\r\n> > \r\n> > \r\n> > `\r\n> \r\n> Thanks for investigating on this -- while I think it may be possible to just use the ratio and sampling rate to derive the timestamp, what I'm afraid of is that this ratio might just be a \"magic number\" and might differ if there are variations in the configuration of the Wav2Vec2 model\r\n> \r\n> Current ratio from input_values size to logits seem to be around **320**\r\n> \r\n> e.g.:\r\n> Does the ratio change if the [hyperparameters](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2Config) of the model are changed?\r\n> \r\n> Is this ratio constant for varying size of audio? (Experiment with different size WAV clips and check the ratio)\r\n\r\nMaybe @patrickvonplaten could shed some light of whether we are going in the right direction about this (if it's not too much trouble) 😓 🙏 ",
"> > Here's what i've found so far:\r\n> > if speech length is - 480,000\r\n> > input_values lenth - 480,000\r\n> > logits length - 1499\r\n> > this was for a 30s audio file.\r\n> > `\r\n> > model = Wav2Vec2ForCTC\r\n> > processor = Wav2Vec2Processor\r\n> > ```\r\n> > input_values = processor(speech, return_tensors=\"pt\").input_values\r\n> > logits = model(input_values).logits\r\n> > ```\r\n> > \r\n> > \r\n> > `\r\n> \r\n> Thanks for investigating on this -- while I think it may be possible to just use the ratio and sampling rate to derive the timestamp, what I'm afraid of is that this ratio might just be a \"magic number\" and might differ if there are variations in the configuration of the Wav2Vec2 model\r\n> \r\n> Current ratio from input_values size to logits seem to be around **320**\r\n> \r\n> e.g.:\r\n> Does the ratio change if the [hyperparameters](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2Config) of the model are changed?\r\n> \r\n> Is this ratio constant for varying size of audio? (Experiment with different size WAV clips and check the ratio)\r\n\r\nhey @yushao2, what ratio are you referring to here ? sorry, not too familiar with audio processing",
"@patrickvonplaten @yushao2 following up on this",
"> @patrickvonplaten @yushao2 following up on this\r\n\r\nHi there! Sorry for not being responsive here.\r\n\r\nThe ratio here refers to the number you get when you divide the size of ``input_values`` to the size of ``logits``\r\n\r\nin this case, you mentioned\r\n>input_values lenth - 480,000\r\n>logits length - 1499\r\n\r\nthe ratio would be 480000/1499 which is approximately 320",
"Hello all,\r\n\r\nThere is something I have found which may serve as a good starting point. Basically this returns the time offsets and the textual data as well . \r\n\r\nhttps://github.com/lumaku/ctc-segmentation\r\n\r\n```python\r\n\r\nimport torch\r\nimport torchaudio\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Wav2Vec2ForCTC, Wav2Vec2Processor\r\nimport re\r\n\r\nfrom ctc_segmentation import ctc_segmentation\r\nfrom ctc_segmentation import CtcSegmentationParameters\r\nfrom ctc_segmentation import determine_utterance_segments\r\nfrom ctc_segmentation import prepare_text\r\n\r\n# Get the Wav2Vec2 model and the predicted text\r\ntest_dataset = load_dataset(\"common_voice\", \"hi\", split=\"test\")\r\nwer = load_metric(\"wer\")\r\n\r\nprocessor = Wav2Vec2Processor.from_pretrained(\"theainerd/Wav2Vec2-large-xlsr-hindi\")\r\nmodel = Wav2Vec2ForCTC.from_pretrained(\"theainerd/Wav2Vec2-large-xlsr-hindi\")\r\nmodel.to(\"cuda\")\r\n\r\nchars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\\"\\“]'\r\n\r\nresampler = torchaudio.transforms.Resample(48_000, 16_000)\r\n\r\n# Preprocessing the datasets.\r\n# We need to read the aduio files as arrays\r\ndef speech_file_to_array_fn(batch):\r\n batch[\"sentence\"] = re.sub(chars_to_ignore_regex, '', batch[\"sentence\"]).lower()\r\n speech_array, sampling_rate = torchaudio.load(batch[\"path\"])\r\n batch[\"speech\"] = resampler(speech_array).squeeze().numpy()\r\n return batch\r\n\r\ntest_dataset = test_dataset.map(speech_file_to_array_fn)\r\n\r\ninput_values = processor(test_dataset[\"speech\"][0], return_tensors=\"pt\").input_values # Batch size 1\r\nlogits = model(input_values.to(\"cuda\")).logits\r\npredicted_ids = torch.argmax(logits, dim=-1)\r\ntranscription = processor.decode(predicted_ids[0])\r\n\r\nsoftmax = torch.nn.Softmax(dim = -1)\r\n\r\n# apply configuration\r\nconfig = CtcSegmentationParameters()\r\n\r\nwith torch.no_grad():\r\n # Apply ctc layer to obtain log character probabilities\r\n lpz = softmax(logits)[0].cpu().numpy()\r\n\r\nchar_dict = {\"न\": 0, \"च\": 1, \"थ\": 2, \"ी\": 3, \"ऐ\": 4, \"ृ\": 5, \"ध\": 6, \"य\": 7, \"ह\": 8, \"ऊ\": 9, \"म\": 10, \"ण\": 11, \"ै\": 13, \"ौ\": 14, \"ा\": 15, \"ल\": 16, \"त\": 17, \"इ\": 18, \"ढ़\": 19, \"ष\": 20, \"भ\": 21, \"ग़\": 22, \"ख\": 23, \"ड़\": 24, \"ए\": 25, \"व\": 26, \"ु\": 27, \"ओ\": 28, \"र\": 29, \"श\": 30, \"औ\": 31, \"ट\": 32, \"आ\": 33, \"ो\": 34, \"ढ\": 35, \"झ\": 36, \"ग\": 37, \"ज़\": 38, \"अ\": 39, \"े\": 40, \"प\": 41, \"घ\": 42, \"द\": 43, \"ई\": 44, \"फ़\": 45, \"ब\": 46, \"ड\": 47, \"ँ\": 48, \"छ\": 49, \"ू\": 50, \"फ\": 51, \"ि\": 52, \"स\": 53, \"्\": 54, \"क\": 55, \"उ\": 56, \"ठ\": 57, \"ं\": 58, \"़\": 59, \"ज\": 60, \"क़\": 61, \"|\": 12, \"[UNK]\": 62, \"[PAD]\": 63}\r\nchar_list = list(char_dict.keys())\r\n\r\n# Prepare the text for aligning\r\nground_truth_mat, utt_begin_indices = prepare_text(config, transcription,char_list)\r\n# Align using CTC segmentation\r\ntimings, char_probs, state_list = ctc_segmentation(config, lpz, ground_truth_mat)\r\n\r\n# Obtain list of utterances with time intervals and confidence score\r\nsegments = determine_utterance_segments(config, utt_begin_indices, char_probs, timings, transcription)\r\n# Sample Output : 0.26 1.73 -0.0154 THE SALE OF THE HOTELS * An example picked up from the ctc_segmentation \r\n```\r\nNow if I have the time offsets but how to get this for each and every word rather than the segments. 
_Please don't take this as an absolute solution_ as I am not sure that this is a good direction to go but still something is better than nothing. Please share your thoughts. \r\n",
"Hi everyone, here is some sample code which I have created to get the word-level start and end timestamps.\r\nIt's surely a bit hacky, and I could imagine there being some special cases where it might break, but for the cases I have tried it with it worked great.\r\n\r\n```python\r\nfrom itertools import groupby\r\nimport torch\r\nfrom transformers import Wav2Vec2Processor, Wav2Vec2ForCTC\r\nimport soundfile as sf\r\n\r\n##############\r\n# load model & audio and run audio through model\r\n##############\r\nmodel_name = 'facebook/wav2vec2-large-960h-lv60-self'\r\nprocessor = Wav2Vec2Processor.from_pretrained(model_name)\r\nmodel = Wav2Vec2ForCTC.from_pretrained(model_name).cuda()\r\n\r\n\r\naudio_filepath = ''\r\nspeech, sample_rate = sf.read(audio_filepath)\r\ninput_values = processor(speech, sampling_rate=sample_rate, return_tensors=\"pt\").input_values.cuda()\r\n\r\nlogits = model(input_values).logits\r\n\r\npredicted_ids = torch.argmax(logits, dim=-1)\r\ntranscription = processor.decode(predicted_ids[0]).lower()\r\n\r\n##############\r\n# this is where the logic starts to get the start and end timestamp for each word\r\n##############\r\nwords = [w for w in transcription.split(' ') if len(w) > 0]\r\npredicted_ids = predicted_ids[0].tolist()\r\nduration_sec = input_values.shape[1] / sample_rate\r\n\r\n\r\nids_w_time = [(i / len(predicted_ids) * duration_sec, _id) for i, _id in enumerate(predicted_ids)]\r\n# remove entries which are just \"padding\" (i.e. no characers are recognized)\r\nids_w_time = [i for i in ids_w_time if i[1] != processor.tokenizer.pad_token_id]\r\n# now split the ids into groups of ids where each group represents a word\r\nsplit_ids_w_time = [list(group) for k, group\r\n in groupby(ids_w_time, lambda x: x[1] == processor.tokenizer.word_delimiter_token_id)\r\n if not k]\r\n\r\nassert len(split_ids_w_time) == len(words) # make sure that there are the same number of id-groups as words. Otherwise something is wrong\r\n\r\nword_start_times = []\r\nword_end_times = []\r\nfor cur_ids_w_time, cur_word in zip(split_ids_w_time, words):\r\n _times = [_time for _time, _id in cur_ids_w_time]\r\n word_start_times.append(min(_times))\r\n word_end_times.append(max(_times))\r\n \r\nwords, word_start_times, word_end_times\r\n```",
"@KB-g \r\nCongrats!\r\nIs there a chance to also extract the \"per word probability\"?",
"@KB-g \r\nThe `assert len() == len()` triggers. \r\nThis audio: [assert.zip](https://github.com/huggingface/transformers/files/6721402/assert.zip)\r\nTestcase:\r\n````python\r\nfrom itertools import groupby\r\nimport torch\r\nfrom transformers import Wav2Vec2Processor, Wav2Vec2ForCTC\r\nimport soundfile as sf\r\n\r\nmodel_name = 'DewiBrynJones/wav2vec2-large-xlsr-welsh'\r\nprocessor = Wav2Vec2Processor.from_pretrained(model_name)\r\nmodel = Wav2Vec2ForCTC.from_pretrained(model_name)\r\n\r\naudio_filepath = '/tmp/assert.wav'\r\nspeech, sample_rate = sf.read(audio_filepath)\r\ninput_values = processor(speech, sampling_rate=sample_rate, return_tensors=\"pt\").input_values\r\nlogits = model(input_values).logits\r\npredicted_ids = torch.argmax(logits, dim=-1)\r\ntranscription = processor.decode(predicted_ids[0]).lower()\r\n\r\n##############\r\n# this is where the logic starts to get the start and end timestamp for each word\r\n##############\r\nwords = [w for w in transcription.split(' ') if len(w) > 0]\r\npredicted_ids = predicted_ids[0].tolist()\r\nduration_sec = input_values.shape[1] / sample_rate\r\nids_w_time = [(i / len(predicted_ids) * duration_sec, _id) for i, _id in enumerate(predicted_ids)]\r\nids_w_time = [i for i in ids_w_time if i[1] != processor.tokenizer.pad_token_id]\r\nsplit_ids_w_time = [list(group) for k, group\r\n in groupby(ids_w_time, lambda x: x[1] == processor.tokenizer.word_delimiter_token_id)\r\n if not k]\r\n# make sure that there are the same number of id-groups as words. Otherwise something is wrong\r\nassert len(split_ids_w_time) == len(words), (len(split_ids_w_time), len(words))\r\n````",
"> @KB-g Congrats! Is there a chance to also extract the \"per word probability\"?\r\n\r\nHey @KB-g \r\nAny success on this?",
"Hi @doublex , @abhirooptalasila,\r\nI haven't tried to get the per-word probability. If you come up with a solution, it would be great if you could let me know. I'd also be interested in a solution :)",
"Hi @KB-g, @doublex and @abhirooptalasila,\r\n\r\nmaybe this [tutorial](https://pytorch.org/audio/main/tutorials/forced_alignment_tutorial.html) helps to find out a way to calculate a \"per-word probability\". In the function `merge_words`, the author calculates scores for each word based on tokens probabilities and theirs duration. ",
"We need to document the time stamp retrieval a bit better here I think",
"@KB-g Thanks for the code snippet, really useful. Made a small addition (no_grad) for inference, would help people facing OOM error(s):\r\n\r\n```python\r\nfrom itertools import groupby\r\nimport torch\r\nfrom transformers import Wav2Vec2Processor, Wav2Vec2ForCTC\r\nimport soundfile as sf\r\n\r\n##############\r\n# load model & audio and run audio through model\r\n##############\r\nmodel_name = 'facebook/wav2vec2-large-960h-lv60-self'\r\nprocessor = Wav2Vec2Processor.from_pretrained(model_name)\r\nmodel = Wav2Vec2ForCTC.from_pretrained(model_name).cuda()\r\n\r\n\r\naudio_filepath = ''\r\nspeech, sample_rate = sf.read(audio_filepath)\r\ninput_values = processor(speech, sampling_rate=sample_rate, return_tensors=\"pt\").input_values.cuda()\r\n\r\nwith torch.no_grad():\r\n logits = model(input_values).logits\r\n\r\npredicted_ids = torch.argmax(logits, dim=-1)\r\ntranscription = processor.decode(predicted_ids[0]).lower()\r\n\r\n##############\r\n# this is where the logic starts to get the start and end timestamp for each word\r\n##############\r\nwords = [w for w in transcription.split(' ') if len(w) > 0]\r\npredicted_ids = predicted_ids[0].tolist()\r\nduration_sec = input_values.shape[1] / sample_rate\r\n\r\n\r\nids_w_time = [(i / len(predicted_ids) * duration_sec, _id) for i, _id in enumerate(predicted_ids)]\r\n# remove entries which are just \"padding\" (i.e. no characers are recognized)\r\nids_w_time = [i for i in ids_w_time if i[1] != processor.tokenizer.pad_token_id]\r\n# now split the ids into groups of ids where each group represents a word\r\nsplit_ids_w_time = [list(group) for k, group\r\n in groupby(ids_w_time, lambda x: x[1] == processor.tokenizer.word_delimiter_token_id)\r\n if not k]\r\n\r\nassert len(split_ids_w_time) == len(words) # make sure that there are the same number of id-groups as words. Otherwise something is wrong\r\n\r\nword_start_times = []\r\nword_end_times = []\r\nfor cur_ids_w_time, cur_word in zip(split_ids_w_time, words):\r\n _times = [_time for _time, _id in cur_ids_w_time]\r\n word_start_times.append(min(_times))\r\n word_end_times.append(max(_times))\r\n \r\nwords, word_start_times, word_end_times\r\n```",
"@Ap1075, thank you for the example you provided above. I'm having a hard time figuring out where/how to pass in transcribed text so it can be aligned with the audio. Is passing in pre-transcribed text possible, or am I misunderstanding how it works?",
"I'm trying to get word timing for karaoke I have the lyrics... Would this be possible? 🤔"
] | 1,618 | 1,687 | null | CONTRIBUTOR | null | # 🚀 Feature request
Hello, I was thinking it would be of great help if I could get the time offsets for the start and end of each word.
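For context, the core mapping that the comments in this thread converge on (a sketch distilled from those comments, not an official API; `input_values`, `sample_rate` and `logits` are assumed to come from a `Wav2Vec2Processor`/`Wav2Vec2ForCTC` forward pass as shown there) is to spread the CTC output frames evenly over the audio duration:

```python
# sketch: approximate timestamp (in seconds) of the i-th CTC output frame
duration_sec = input_values.shape[1] / sample_rate  # total audio length in seconds
num_frames = logits.shape[1]                         # number of CTC frames the model produced
frame_time = lambda i: i / num_frames * duration_sec
```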
## Motivation
I was going through the Google Speech-to-Text documentation and found this [feature](https://cloud.google.com/speech-to-text/docs/async-time-offsets), and thought it would be really amazing if I could have something similar here.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I could really use some help with this task and would love to implement something similar.
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11307/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11307/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11306 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11306/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11306/comments | https://api.github.com/repos/huggingface/transformers/issues/11306/events | https://github.com/huggingface/transformers/pull/11306 | 860,771,997 | MDExOlB1bGxSZXF1ZXN0NjE3NTYwOTMw | 11,306 | Wav2Vec2 Pretraining | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I will take a deeper look tomorrow and comment here - looks great already :-)",
"Looks great to me so far @anton-l - I see no breaking changes, the modularization looks good to me and the parameter naming is fine as well -> don't think `self.quantizer.quantizer` is too awkward => we also have `self.self.self_attention` somewhere in BERT ;-)",
"I think an important next step would be to verify that the pretraining works more or less :-) ",
"Integration tests are now passing. Can be verified by running:\r\n\r\n```python\r\n#!/usr/bin/env python3 \r\nimport datasets \r\nimport fairseq \r\nimport torch \r\n \r\nimport soundfile as sf \r\nimport sys \r\nfrom fairseq.criterions.wav2vec_criterion import Wav2VecCriterionConfig, Wav2vecCriterion \r\nfrom fairseq.tasks.audio_pretraining import AudioPretrainingConfig, AudioPretrainingTask \r\n \r\nfrom transformers import Wav2Vec2ForPreTraining, Wav2Vec2FeatureExtractor \r\n \r\nhf_path = str(sys.argv[1]) \r\nfairseq_wav2vec2_path = str(sys.argv[2])\r\n\r\nmodel, cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task([fairseq_wav2vec2_path])\r\n\r\nfeature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(hf_path, do_normalize=False)\r\nhf_model = Wav2Vec2ForPreTraining.from_pretrained(hf_path)\r\n\r\nmodel = model[0]\r\nmodel.eval()\r\n\r\n\r\ndummy_speech_data = datasets.load_dataset(\"patrickvonplaten/librispeech_asr_dummy\", \"clean\", split=\"validation\")\r\n\r\n\r\ndef map_to_array(batch):\r\n speech_array, _ = sf.read(batch[\"file\"])\r\n batch[\"speech\"] = speech_array\r\n return batch\r\n\r\n\r\ndummy_speech_data = dummy_speech_data.map(map_to_array, remove_columns=[\"file\"])\r\ninputs = feature_extractor(dummy_speech_data[:3][\"speech\"], return_tensors=\"pt\", padding=\"longest\", return_attention_mask=True)\r\n\r\ninput_values = inputs.input_values\r\nattention_mask = inputs.attention_mask\r\n\r\naudio_cfg = AudioPretrainingConfig(labels=\"ltr\", data=\"./data\")\r\ntask = AudioPretrainingTask.setup_task(audio_cfg) \r\ncriterion = Wav2vecCriterion(Wav2VecCriterionConfig(infonce=True, log_keys=[\"prob_perplexity\", \"code_perplexity\", \"temp\"], loss_weights=[0.1, 10]), task)\r\n\r\nsample = {\r\n \"net_input\": {\r\n \"source\": input_values,\r\n \"padding_mask\": attention_mask.ne(1),\r\n },\r\n \"id\": torch.zeros((1,)),\r\n}\r\n\r\ntorch.manual_seed(0)\r\nresult = model(**sample[\"net_input\"])\r\ntorch.manual_seed(0)\r\nhf_result = hf_model(input_values, attention_mask=attention_mask)\r\n\r\n\r\nassert torch.allclose(hf_result.logits, result['x'], atol=1e-3), \"wrong logits\"\r\n\r\nloss, sample_size, log = criterion(model, sample)\r\n\r\nprint(\"Loss diff %\", 100 * (loss.detach().item() - hf_result.loss.detach().item()) / hf_result.loss.detach())\r\n```\r\n\r\nand using [this](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt) as the fairseq checkpoint and [this](https://huggingface.co/patrickvonplaten/wav2vec2-base) model as the HF model."
] | 1,618 | 1,623 | 1,623 | MEMBER | null | # What does this PR do?
Fixes #11246
This adds `Wav2Vec2ForPreTraining`, which allows pre-training Wav2Vec 2.0 on unlabeled audio with a self-supervised vector quantization task.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
## Implementation checklist
- [x] Run a successful forward pass on `Wav2Vec2ForPreTraining` with mostly copy-pasted code from `fairseq`
- [x] Make sure the intermediate and output logit tensors match between `Wav2Vec2ForPreTraining` and `fairseq.models.wav2vec.Wav2Vec2Model`
- [x] Sync with @patrickvonplaten regarding class decomposition, network layers placement and potentially breaking changes
- [x] Run the model with a padded variable-length batch (not just a single sample)
- [x] Run the model in training mode, make sure that the contrastive loss and code perplexity decrease
- [x] Write integration tests to check fairseq's tensor reproducibility
- [x] Write smoke tests for `GumbelVectorQuantizer` and vector sampling
- [x] Refactor copied code (e.g. `GumbelVectorQuantizer` and `sample_negatives`) to follow the code style of the rest of the module
- [x] Add sensible defaults for config variables
- [x] Add docstrings for every module and comments where necessary
- [x] Update model documentation
Bonus round:
- [ ] Finetune the model on a subset of CommonVoice
- [ ] Check that the pooled vectors of audio samples converge into neat clusters as a result of quantization
- [x] Check that Pretraining works with Deepspeed | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11306/reactions",
"total_count": 17,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 11,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11306/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11306",
"html_url": "https://github.com/huggingface/transformers/pull/11306",
"diff_url": "https://github.com/huggingface/transformers/pull/11306.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11306.patch",
"merged_at": 1623260456000
} |
https://api.github.com/repos/huggingface/transformers/issues/11305 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11305/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11305/comments | https://api.github.com/repos/huggingface/transformers/issues/11305/events | https://github.com/huggingface/transformers/issues/11305 | 860,626,279 | MDU6SXNzdWU4NjA2MjYyNzk= | 11,305 | invalid multinomial distribution (with replacement=False, not enough non-negative category to sample) | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"i have the same exact problem when i use `do_sample=True` can you re-open this issue?",
"Maybe @gante has an idea!",
"Hi there @Muennighoff @zeke-john 👋 \r\n\r\nI've run the script above for both models on `v4.5.1` (and on `v4.22.dev0`) and it works with no problems -- you can see a colab [here](https://colab.research.google.com/drive/1bg7v0mxZbFxJTjj28AriYBeERsAN264E?usp=sharing).\r\n\r\nA potential cause for errors may be GPU memory -- generation with `num_beams` is memory intensive. Let me know if you have more details about your problem :)"
] | 1,618 | 1,657 | 1,622 | CONTRIBUTOR | null | When using "sshleifer/distilbart-cnn-6-6" with do_sample, the code below errors out, while the same code works for "sshleifer/distilbart-xsum-6-6". Am I missing something really obvious here? Thanks for any help!
Transformers: 4.5.1
````
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer
)
model_name = "sshleifer/distilbart-cnn-6-6"
#model_name = "sshleifer/distilbart-xsum-6-6"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "New York City (NYC), often simply called New York, is the most populous city in the United States"
input_ids = tokenizer.encode(text, return_tensors='pt')
sample_outputs = model.generate(input_ids,
num_beams=3,
do_sample=True
)
sample_outputs
```` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11305/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11304 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11304/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11304/comments | https://api.github.com/repos/huggingface/transformers/issues/11304/events | https://github.com/huggingface/transformers/issues/11304 | 860,608,405 | MDU6SXNzdWU4NjA2MDg0MDU= | 11,304 | env about run longformer model downloaded from https://github.com/allenai/longformer | {
"login": "BinchaoPeng",
"id": 43957010,
"node_id": "MDQ6VXNlcjQzOTU3MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/43957010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BinchaoPeng",
"html_url": "https://github.com/BinchaoPeng",
"followers_url": "https://api.github.com/users/BinchaoPeng/followers",
"following_url": "https://api.github.com/users/BinchaoPeng/following{/other_user}",
"gists_url": "https://api.github.com/users/BinchaoPeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BinchaoPeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BinchaoPeng/subscriptions",
"organizations_url": "https://api.github.com/users/BinchaoPeng/orgs",
"repos_url": "https://api.github.com/users/BinchaoPeng/repos",
"events_url": "https://api.github.com/users/BinchaoPeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/BinchaoPeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I created an env that cannot be copied",
"Please do not create duplicates.\r\nDuplicate of #11301 ",
"sorry, i close it now"
] | 1,618 | 1,618 | 1,618 | NONE | null | 1. Just using `conda install transformers` (transformers version 4.4.2), it can't run.
1.1. error:
```bash
can't import pipeline
# then used the tokenizer and model to get my feature vector, ERROR:
RuntimeError: Error(s) in loading state_dict for BartModel:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from
checkpoint, the shape in current model is torch.Size([1026, 768]).
```
2. Changed to `pip install transformers` (transformers version 3.3.0); it still can't run.
2.1. error:
```bash
can't import name tokenizer
# then I found in an issue to use "pip install tokenizer"; it already existed, with tokenizer version 0.8.0rc2. I found another env that works, where the tokenizer version is 0.5.0, so I used pip to change the version from 0.8.0rc2 to 0.5.0. ERROR:
pip's dependency ..... which is incompatible
# however, the tokenizer version in the only env that can run is 0.5.0.
```
3. So the only env that worked is:
```bash
conda create -n dnabert python=3.6
# pytorch-transformers
pip install pytorch-transformers
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
# dnabert
git clone https://github.com/jerryji1993/DNABERT
cd DNABERT
python3 -m pip install --editable .
cd examples
python3 -m pip install -r requirements.txt
# allenai
conda install cudatoolkit=10.0
pip install git+https://github.com/allenai/longformer.git
# huggingface
pip install transformers
```
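As a side note (a general suggestion of mine, not something from the original thread): since one of the comments mentions that the env "cannot be copied", one way to make the working setup reproducible is to freeze it once it works:

```bash
# export the exact package set of the working env so it can be recreated elsewhere
conda env export -n dnabert > dnabert-environment.yml
# later, or on another machine:
conda env create -f dnabert-environment.yml
```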
# What did I do wrong to end up in this situation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11304/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11303 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11303/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11303/comments | https://api.github.com/repos/huggingface/transformers/issues/11303/events | https://github.com/huggingface/transformers/issues/11303 | 860,586,136 | MDU6SXNzdWU4NjA1ODYxMzY= | 11,303 | small bug in RAG model | {
"login": "yuvalkirstain",
"id": 57996478,
"node_id": "MDQ6VXNlcjU3OTk2NDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuvalkirstain",
"html_url": "https://github.com/yuvalkirstain",
"followers_url": "https://api.github.com/users/yuvalkirstain/followers",
"following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}",
"gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions",
"organizations_url": "https://api.github.com/users/yuvalkirstain/orgs",
"repos_url": "https://api.github.com/users/yuvalkirstain/repos",
"events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuvalkirstain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"This indeed looks like a typo! Thanks for your issue :-)\r\n\r\nThe kwargs should not be trimmed by `\"question_question_encoder\"`, but by `\"question_encoder\"`.\r\n\r\nWould you like to open a PR to fix it? Otherwise I can do it as well :-)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unping"
] | 1,618 | 1,622 | 1,622 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.27
- Python version: 3.8.3
- PyTorch version (GPU?): 1.8.1+cu111 (True)
### Who can help
@ola13
Models:
- rag
## Information
Model I am using (Bert, XLNet ...): RAG
The problem arises when using:
* [ ] the official example scripts: seems to be a bug in https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/modeling_rag.py#L306
To pass my own question encoder, the argument needs to be `question_encoder_model`, but the argument name gets over-trimmed because the prefix length is taken from `question_question_encoder_` instead of `question_encoder_`.
```python
kwargs_question_encoder = {
argument[len("question_question_encoder_") :]: value
for argument, value in kwargs.items()
if argument.startswith("question_encoder_")
}
```
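For reference, here is a minimal sketch of what the fix suggested in the comments would look like (trimming by the same prefix that is checked); this is only an illustration, not the actual merged patch:

```python
# sketch of the suggested fix: strip the "question_encoder_" prefix (matching the startswith check),
# so that "question_encoder_model" correctly becomes "model"
kwargs_question_encoder = {
    argument[len("question_encoder_") :]: value
    for argument, value in kwargs.items()
    if argument.startswith("question_encoder_")
}
```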
## To reproduce
Steps to reproduce the behavior:
```python
question_encoder = AutoModel.from_pretrained("any model")
rag_model = model_class.from_pretrained_question_encoder_generator(
question_encoder_model=question_encoder, generator_pretrained_model_name_or_path=generator_name_or_path, config=rag_config
)
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Be able to pass a question encoder **model**, and not just a config, to the RAG model.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11303/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11303/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11302 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11302/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11302/comments | https://api.github.com/repos/huggingface/transformers/issues/11302/events | https://github.com/huggingface/transformers/issues/11302 | 860,582,468 | MDU6SXNzdWU4NjA1ODI0Njg= | 11,302 | Problems with webbased editing of model cards | {
"login": "PhilipMay",
"id": 229382,
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilipMay",
"html_url": "https://github.com/PhilipMay",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The probem only seems to happen on some models. This model here works ok: https://huggingface.co/T-Systems-onsite/cross-en-de-roberta-sentence-transformer",
"Tagging @Pierrci for visibility",
"Hi @PhilipMay, thank you for reporting this, we just deployed a fix for that, let us know if you still encounter the problem, otherwise feel free to close the issue :)",
"Works now. Thanks. "
] | 1,618 | 1,620 | 1,620 | CONTRIBUTOR | null | When I open a model in the Hugging Face model repository - like here: https://huggingface.co/german-nlp-group/electra-base-german-uncased -
and then click "Edit model card", the text in the web-based editor contains `\r` characters. When the web-based editor is then used to save the model card, these characters are saved and shown.
See screenshot:

This is a bug. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11302/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11301 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11301/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11301/comments | https://api.github.com/repos/huggingface/transformers/issues/11301/events | https://github.com/huggingface/transformers/issues/11301 | 860,553,318 | MDU6SXNzdWU4NjA1NTMzMTg= | 11,301 | Longformer model with weight(model.encoder.embed_positions.weight) error | {
"login": "BinchaoPeng",
"id": 43957010,
"node_id": "MDQ6VXNlcjQzOTU3MDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/43957010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BinchaoPeng",
"html_url": "https://github.com/BinchaoPeng",
"followers_url": "https://api.github.com/users/BinchaoPeng/followers",
"following_url": "https://api.github.com/users/BinchaoPeng/following{/other_user}",
"gists_url": "https://api.github.com/users/BinchaoPeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BinchaoPeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BinchaoPeng/subscriptions",
"organizations_url": "https://api.github.com/users/BinchaoPeng/orgs",
"repos_url": "https://api.github.com/users/BinchaoPeng/repos",
"events_url": "https://api.github.com/users/BinchaoPeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/BinchaoPeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What code are you running that leads to that error?",
"# God, someone finally replied to me,thanks!\r\n## code\r\n```python\r\nfrom transformers import AutoModel, AutoTokenizer, pipeline\r\nimport torch\r\n\r\nmodel_name = 'pre-model/' + 'longformer-encdec-base-16384'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModel.from_pretrained(model_name)\r\nclassifier = pipeline('feature-extraction', model=model, tokenizer=tokenizer)\r\n\r\n# encoded_inputs = tokenizer([\"ATGCATGCNACT\"], [\"ATGCATGCNACT\"], return_token_type_ids=True, return_tensors='pt')\r\nencoded_inputs = tokenizer([\"ATGCATGCNACT\", \"ATGCATG\", \"ACTGGTCATGCAC\"], return_tensors='pt',\r\n padding=True)\r\nprint(encoded_inputs)\r\n# feature = model(input_ids=encoded_inputs['input_ids'], attention_mask=encoded_inputs['attention_mask'],\r\n# return_netsors='pt')\r\nfeature = model(**encoded_inputs,\r\n return_netsors='pt')\r\nprint(feature[0])\r\nprint(type(feature[0]))\r\n# feature = torch.as_tensor(feature)\r\n# print(feature.shape)\r\nprint(\"***\" * 48)\r\n\r\nfeature = classifier([\"ATG\", \"ATGCATG\", \"ACTGGTCATGCAC\"])\r\nprint(type(feature))\r\nfeature = torch.as_tensor(feature)\r\nprint(feature)\r\nprint(feature.shape)\r\nprint(\"***\" * 48)\r\n\r\n```\r\n## env info\r\n\r\n### can work: env0\r\n```bash\r\n# Name Version Build Channel\r\n_libgcc_mutex 0.1 main defaults\r\nabsl-py 0.12.0 pypi_0 pypi\r\nastunparse 1.6.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nbiopython 1.78 pypi_0 pypi\r\nblas 1.0 mkl defaults\r\nboto3 1.17.48 pypi_0 pypi\r\nbotocore 1.20.48 pypi_0 pypi\r\nbrotlipy 0.7.0 py36h27cfd23_1003 defaults\r\nca-certificates 2021.1.19 h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncachetools 4.2.1 pypi_0 pypi\r\ncertifi 2020.12.5 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncffi 1.14.5 py36h261ae71_0 defaults\r\nchardet 4.0.0 py36h06a4308_1003 defaults\r\nclick 7.1.2 pyhd3eb1b0_0 defaults\r\ncryptography 3.4.7 py36hd23ed53_0 defaults\r\ncudatoolkit 10.0.130 0 defaults\r\ndataclasses 0.8 pyh4f3eec9_6 defaults\r\ndill 0.3.3 pypi_0 pypi\r\nfilelock 3.0.12 pyhd3eb1b0_1 defaults\r\nfreetype 2.10.4 h5ab3b9f_0 defaults\r\nfuture 0.18.2 pypi_0 pypi\r\ngoogle-auth 1.28.1 pypi_0 pypi\r\ngoogle-auth-oauthlib 0.4.4 pypi_0 pypi\r\ngrpcio 1.37.0 pypi_0 pypi\r\nidna 2.10 pyhd3eb1b0_0 defaults\r\nimageio 2.9.0 pypi_0 pypi\r\nimportlib-metadata 3.10.0 pypi_0 pypi\r\nintel-openmp 2020.2 254 defaults\r\njmespath 0.10.0 pypi_0 pypi\r\njoblib 1.0.1 pyhd3eb1b0_0 defaults\r\njpeg 9b h024ee3a_2 defaults\r\nlcms2 2.12 h3be6417_0 defaults\r\nld_impl_linux-64 2.33.1 h53a641e_7 defaults\r\nlibffi 3.3 he6710b0_2 defaults\r\nlibgcc-ng 9.1.0 hdf63c60_0 defaults\r\nlibpng 1.6.37 hbc83047_0 defaults\r\nlibprotobuf 3.14.0 h8c45485_0 defaults\r\nlibstdcxx-ng 9.1.0 hdf63c60_0 defaults\r\nlibtiff 4.1.0 h2733197_1 defaults\r\nlongformer 0.1 pypi_0 pypi\r\nlz4-c 1.9.3 h2531618_0 defaults\r\nmarkdown 3.3.4 pypi_0 pypi\r\nmkl 2020.2 256 defaults\r\nmkl-service 2.3.0 py36he8ac12f_0 defaults\r\nmkl_fft 1.3.0 py36h54f3939_0 defaults\r\nmkl_random 1.1.1 py36h0573a6f_0 defaults\r\nncurses 6.2 he6710b0_1 defaults\r\nninja 1.10.2 py36hff7bd54_0 defaults\r\nnlp 0.4.0 pypi_0 pypi\r\nnltk 3.6.1 pypi_0 pypi\r\nnumpy 1.19.5 pypi_0 pypi\r\nnumpy-base 1.19.2 py36hfa32c7d_0 defaults\r\noauthlib 3.1.0 pypi_0 pypi\r\nolefile 0.46 py36_0 defaults\r\nopenssl 1.1.1k h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npackaging 20.9 pyhd3eb1b0_0 defaults\r\npandas 1.1.5 pypi_0 pypi\r\npatsy 0.5.1 pypi_0 
pypi\r\npillow 8.2.0 py36he98fc37_0 defaults\r\npip 21.0.1 py36h06a4308_0 defaults\r\nprotobuf 3.15.8 pypi_0 pypi\r\npyahocorasick 1.4.2 pypi_0 pypi\r\npyarrow 3.0.0 pypi_0 pypi\r\npyasn1 0.4.8 pypi_0 pypi\r\npyasn1-modules 0.2.8 pypi_0 pypi\r\npybedtools 0.8.2 pypi_0 pypi\r\npycparser 2.20 py_2 defaults\r\npyopenssl 20.0.1 pyhd3eb1b0_1 defaults\r\npyparsing 2.4.7 pyhd3eb1b0_0 defaults\r\npysam 0.16.0.1 pypi_0 pypi\r\npysocks 1.7.1 py36h06a4308_0 defaults\r\npython 3.6.13 hdb3f193_0 defaults\r\npython-dateutil 2.8.1 pypi_0 pypi\r\npython_abi 3.6 1_cp36m huggingface\r\npytorch-lightning 0.8.5 pypi_0 pypi\r\npytorch-transformers 1.2.0 pypi_0 pypi\r\npytz 2021.1 pypi_0 pypi\r\npyyaml 5.4.1 pypi_0 pypi\r\nreadline 8.1 h27cfd23_0 defaults\r\nregex 2021.4.4 py36h27cfd23_0 defaults\r\nrequests 2.25.1 pyhd3eb1b0_0 defaults\r\nrequests-oauthlib 1.3.0 pypi_0 pypi\r\nrouge-score 0.0.4 pypi_0 pypi\r\nrsa 4.7.2 pypi_0 pypi\r\ns3transfer 0.3.6 pypi_0 pypi\r\nsacremoses 0.0.44 pypi_0 pypi\r\nscikit-learn 0.24.1 pypi_0 pypi\r\nscipy 1.5.4 pypi_0 pypi\r\nsentencepiece 0.1.91 pypi_0 pypi\r\nseqeval 1.2.2 pypi_0 pypi\r\nsetuptools 52.0.0 py36h06a4308_0 defaults\r\nsix 1.15.0 py36h06a4308_0 defaults\r\nsqlite 3.35.4 hdfb4753_0 defaults\r\nstatsmodels 0.12.2 pypi_0 pypi\r\ntensorboard 2.4.1 pypi_0 pypi\r\ntensorboard-plugin-wit 1.8.0 pypi_0 pypi\r\ntensorboardx 2.2 pypi_0 pypi\r\ntest-tube 0.7.5 pypi_0 pypi\r\nthreadpoolctl 2.1.0 pypi_0 pypi\r\ntk 8.6.10 hbc83047_0 defaults\r\ntokenizers 0.5.0 pypi_0 pypi\r\ntorch 1.6.0 pypi_0 pypi\r\ntorchvision 0.5.0 py36_cu100 pytorch\r\ntqdm 4.60.0 pypi_0 pypi\r\ntransformers 3.1.0 pypi_0 pypi\r\ntyping-extensions 3.7.4.3 pypi_0 pypi\r\nurllib3 1.26.4 pyhd3eb1b0_0 defaults\r\nwerkzeug 1.0.1 pypi_0 pypi\r\nwheel 0.36.2 pyhd3eb1b0_0 defaults\r\nxxhash 2.0.2 pypi_0 pypi\r\nxz 5.2.5 h7b6447c_0 defaults\r\nzipp 3.4.1 pypi_0 pypi\r\nzlib 1.2.11 h7b6447c_3 defaults\r\nzstd 1.4.9 haebb681_0 defaults\r\n```\r\n### can not work\r\n\r\n#### env1:tf2-pt-keras\r\n```bash\r\n# Name Version Build Channel\r\n_libgcc_mutex 0.1 main https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\n_tflow_select 2.1.0 gpu https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nabsl-py 0.11.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\naiohttp 3.6.3 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\napex 0.1 pypi_0 pypi\r\nargon2-cffi 20.1.0 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nastor 0.8.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nastunparse 1.6.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nasync-timeout 3.0.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nasync_generator 1.10 py36h28b3542_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nattrs 20.3.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nbackcall 0.2.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nbert-serving-client 1.10.0 pypi_0 pypi\r\nbert-serving-server 1.10.0 pypi_0 pypi\r\nblas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nbleach 3.2.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nblinker 1.4 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nbrotlipy 0.7.0 py36h27cfd23_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nc-ares 1.16.1 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nca-certificates 2021.4.13 h06a4308_1 defaults\r\ncachetools 
4.1.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncertifi 2020.12.5 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncffi 1.14.3 py36h261ae71_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nchardet 3.0.4 py36h06a4308_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nclick 7.1.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncryptography 3.2.1 py36h3c74f83_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncudatoolkit 10.1.243 h6bb024c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncudnn 7.6.5 cuda10.1_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncupti 10.1.168 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncycler 0.10.0 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ndataclasses 0.7 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ndbus 1.13.18 hb2f20db_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ndecorator 4.4.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ndefusedxml 0.6.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nentrypoints 0.3 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nexpat 2.2.10 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nfilelock 3.0.12 pyhd3eb1b0_1 defaults\r\nfontconfig 2.13.0 h9420a91_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nfreetype 2.10.4 h5ab3b9f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ngast 0.2.2 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nglib 2.66.1 h92f7085_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ngoogle-auth 1.23.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ngoogle-auth-oauthlib 0.4.2 pyhd3eb1b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ngoogle-pasta 0.2.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ngputil 1.4.0 pypi_0 pypi\r\ngrpcio 1.31.0 py36hf8bcb03_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ngst-plugins-base 1.14.0 hbbd80ab_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ngstreamer 1.14.0 hb31296c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nh5py 2.10.0 py36hd6299e0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nhdf5 1.10.6 hb1b8bf9_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nicu 58.2 he6710b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nidna 2.10 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nidna_ssl 1.1.0 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nimportlib-metadata 2.0.0 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nimportlib_metadata 2.0.0 1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nintel-openmp 2020.2 254 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nipykernel 5.3.4 py36h5ca1d4c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nipython 7.12.0 py36h5ca1d4c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge\r\nipython_genutils 0.2.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nipywidgets 7.6.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njedi 0.10.2 py36_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free\r\njinja2 2.11.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njoblib 0.17.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njpeg 9b 
h024ee3a_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njsonschema 3.2.0 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njupyter 1.0.0 py36_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njupyter_client 6.1.7 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njupyter_console 6.2.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njupyter_core 4.7.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njupyterlab_pygments 0.1.2 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nkeras 2.3.1 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nkeras-applications 1.0.8 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nkeras-base 2.3.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nkeras-preprocessing 1.1.0 py_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nkiwisolver 1.3.0 py36h2531618_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nkrb5 1.18.2 h173b8e3_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlcms2 2.11 h396b838_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nld_impl_linux-64 2.33.1 h53a641e_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibcurl 7.71.1 h20c2e04_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibedit 3.1.20191231 h14c3975_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibffi 3.3 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibgcc-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibgfortran-ng 7.3.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibpng 1.6.37 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibprotobuf 3.13.0.1 hd408876_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibsodium 1.0.18 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibssh2 1.9.0 h1ba5d50_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibstdcxx-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibtiff 4.1.0 h2733197_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibuuid 1.0.3 h1bed415_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibuv 1.40.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibxcb 1.14 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibxml2 2.9.10 hb55368b_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlz4-c 1.9.2 heb0550a_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmarkdown 3.3.3 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmarkupsafe 1.1.1 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmatplotlib 3.3.2 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmatplotlib-base 3.3.2 py36h817c723_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmistune 0.8.4 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmkl 2020.2 256 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmkl-service 2.3.0 py36he904b0f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmkl_fft 1.2.0 py36h23d657b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmkl_random 1.1.1 py36h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmultidict 4.7.6 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnbclient 0.5.1 py_0 
https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnbconvert 6.0.7 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnbformat 5.0.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nncurses 6.2 he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnest-asyncio 1.4.3 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nninja 1.10.1 py36hfd86e86_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnotebook 6.1.6 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnumpy 1.19.2 py36h54aff64_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnumpy-base 1.19.2 py36hfa32c7d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\noauthlib 3.1.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nolefile 0.46 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nopenssl 1.1.1k h27cfd23_0 defaults\r\nopt_einsum 3.1.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npackaging 20.8 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npandas 1.1.3 py36he6710b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npandoc 2.11 hb0f4dca_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npandocfilters 1.4.3 py36h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npcre 8.44 he6710b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npexpect 4.8.0 pyhd3eb1b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npickleshare 0.7.5 pyhd3eb1b0_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npillow 8.0.1 py36he98fc37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npip 20.2.4 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nprometheus_client 0.9.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nprompt-toolkit 3.0.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nprompt_toolkit 3.0.8 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nprotobuf 3.13.0.1 py36he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nptyprocess 0.6.0 pyhd3eb1b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyasn1 0.4.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyasn1-modules 0.2.8 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npycparser 2.20 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npygments 2.7.3 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyjwt 1.7.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyopenssl 19.1.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyparsing 2.4.7 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyqt 5.9.2 py36h05f1152_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyrsistent 0.17.3 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npysocks 1.7.1 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npython 3.6.12 hcff3b4d_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npython-dateutil 2.8.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npython_abi 3.6 1_cp36m https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge\r\npytorch 1.7.0 py3.6_cuda10.1.243_cudnn7.6.3_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch\r\npytz 2020.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyyaml 5.3.1 
py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyzmq 20.0.0 py36h2531618_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nqt 5.9.7 h5867ecd_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nqtconsole 4.7.7 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nqtpy 1.9.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nreadline 8.0 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nregex 2021.4.4 py36h27cfd23_0 defaults\r\nrequests 2.24.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nrequests-oauthlib 1.3.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nrsa 4.6 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsacremoses 0.0.44 pypi_0 pypi\r\nscikit-learn 0.23.2 py36h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nscipy 1.5.2 py36h0b6359f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nseaborn 0.11.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsend2trash 1.5.0 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsetuptools 50.3.1 py36h06a4308_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsip 4.19.8 py36hf484d3e_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsix 1.15.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsqlite 3.33.0 h62c20be_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntensorboard 2.3.0 pyh4dce500_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntensorboard-plugin-wit 1.6.0 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntensorflow 2.1.0 gpu_py36h2e5cdaa_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntensorflow-base 2.1.0 gpu_py36h6c5654b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntensorflow-estimator 2.1.0 pyhd54b08b_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntensorflow-gpu 2.1.0 h0d30ee6_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntermcolor 1.1.0 py36_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nterminado 0.9.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntestpath 0.4.4 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nthreadpoolctl 2.1.0 pyh5ca1d4c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntk 8.6.10 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntokenizers 0.10.2 pypi_0 pypi\r\ntorchaudio 0.7.0 py36 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch\r\ntorchvision 0.1.8 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free\r\ntornado 6.0.4 py36h7b6447c_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntqdm 4.60.0 pypi_0 pypi\r\ntraitlets 4.3.3 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntransformers 4.4.2 py_0 huggingface\r\ntyping_extensions 3.7.4.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nurllib3 1.25.11 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nwcwidth 0.2.5 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nwebencodings 0.5.1 py36_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nwerkzeug 1.0.1 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nwheel 0.35.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nwidgetsnbextension 3.5.1 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nwrapt 1.12.1 py36h7b6447c_1 
https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nxz 5.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nyaml 0.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nyarl 1.6.2 py36h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nzeromq 4.3.3 he6710b0_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nzipp 3.4.0 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nzlib 1.2.11 h7b6447c_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nzstd 1.4.5 h9ceee32_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\n\r\n```\r\n### env2: copied from env0 but not worked\r\n```bash\r\n# Name Version Build Channel\r\n_libgcc_mutex 0.1 main https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nabsl-py 0.12.0 pypi_0 pypi\r\nastunparse 1.6.3 py_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nblas 1.0 mkl https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nboto3 1.17.53 pypi_0 pypi\r\nbotocore 1.20.53 pypi_0 pypi\r\nbrotlipy 0.7.0 py36h27cfd23_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nca-certificates 2021.4.13 h06a4308_1 \r\ncachetools 4.2.1 pypi_0 pypi\r\ncertifi 2020.12.5 py36h06a4308_0 \r\ncffi 1.14.5 py36h261ae71_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nchardet 4.0.0 py36h06a4308_1003 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nclick 7.1.2 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncryptography 3.4.7 py36hd23ed53_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ncudatoolkit 10.0.130 0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ndataclasses 0.8 pyh4f3eec9_6 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ndill 0.3.3 pypi_0 pypi\r\nfilelock 3.0.12 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nfreetype 2.10.4 h5ab3b9f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nfuture 0.18.2 pypi_0 pypi\r\ngoogle-auth 1.29.0 pypi_0 pypi\r\ngoogle-auth-oauthlib 0.4.4 pypi_0 pypi\r\ngrpcio 1.37.0 pypi_0 pypi\r\nidna 2.10 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nimageio 2.9.0 pypi_0 pypi\r\nimportlib-metadata 2.0.0 py_1 anaconda\r\nintel-openmp 2020.2 254 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njmespath 0.10.0 pypi_0 pypi\r\njoblib 1.0.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\njpeg 9b h024ee3a_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlcms2 2.12 h3be6417_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nld_impl_linux-64 2.33.1 h53a641e_7 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibffi 3.3 he6710b0_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibgcc-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibpng 1.6.37 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibprotobuf 3.14.0 h8c45485_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibstdcxx-ng 9.1.0 hdf63c60_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlibtiff 4.1.0 h2733197_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nlongformer 0.1 pypi_0 pypi\r\nlz4-c 1.9.3 h2531618_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmarkdown 3.3.4 pypi_0 pypi\r\nmkl 2020.2 256 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmkl-service 2.3.0 py36he8ac12f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmkl_fft 
1.3.0 py36h54f3939_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nmkl_random 1.1.1 py36h0573a6f_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nncurses 6.2 he6710b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nninja 1.10.2 py36hff7bd54_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnlp 0.4.0 pypi_0 pypi\r\nnltk 3.6.1 pypi_0 pypi\r\nnumpy 1.19.2 py36h54aff64_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nnumpy-base 1.19.2 py36hfa32c7d_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\noauthlib 3.1.0 pypi_0 pypi\r\nolefile 0.46 py36_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nopenssl 1.1.1k h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npackaging 20.9 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npandas 1.1.5 pypi_0 pypi\r\npillow 8.2.0 py36he98fc37_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npip 21.0.1 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nprotobuf 3.15.8 pypi_0 pypi\r\npyarrow 3.0.0 pypi_0 pypi\r\npyasn1 0.4.8 pypi_0 pypi\r\npyasn1-modules 0.2.8 pypi_0 pypi\r\npycparser 2.20 py_2 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyopenssl 20.0.1 pyhd3eb1b0_1 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npyparsing 2.4.7 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npysocks 1.7.1 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npython 3.6.13 hdb3f193_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\npython-dateutil 2.8.1 pypi_0 pypi\r\npython_abi 3.6 1_cp36m huggingface\r\npytorch-lightning 0.8.5 pypi_0 pypi\r\npytorch-transformers 1.2.0 pypi_0 pypi\r\npytz 2021.1 pypi_0 pypi\r\npyyaml 5.4.1 pypi_0 pypi\r\nreadline 8.1 h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nregex 2021.4.4 py36h27cfd23_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nrequests 2.25.1 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nrequests-oauthlib 1.3.0 pypi_0 pypi\r\nrouge-score 0.0.4 pypi_0 pypi\r\nrsa 4.7.2 pypi_0 pypi\r\ns3transfer 0.3.7 pypi_0 pypi\r\nsacremoses 0.0.44 pypi_0 pypi\r\nsentencepiece 0.1.95 pypi_0 pypi\r\nsetuptools 52.0.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsix 1.15.0 py36h06a4308_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nsqlite 3.35.4 hdfb4753_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntensorboard 2.4.1 pypi_0 pypi\r\ntensorboard-plugin-wit 1.8.0 pypi_0 pypi\r\ntensorboardx 2.2 pypi_0 pypi\r\ntest-tube 0.7.5 pypi_0 pypi\r\ntk 8.6.10 hbc83047_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\ntokenizers 0.8.1rc2 pypi_0 pypi\r\ntorch 1.6.0 pypi_0 pypi\r\ntorchvision 0.5.0 py36_cu100 pytorch\r\ntqdm 4.60.0 pypi_0 pypi\r\ntransformers 3.1.0 pypi_0 pypi\r\nurllib3 1.26.4 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nwerkzeug 1.0.1 pypi_0 pypi\r\nwheel 0.36.2 pyhd3eb1b0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nxxhash 2.0.2 pypi_0 pypi\r\nxz 5.2.5 h7b6447c_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nzipp 3.4.1 pyhd3eb1b0_0 \r\nzlib 1.2.11 h7b6447c_3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\nzstd 1.4.9 haebb681_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main\r\n```",
"next step i wan to use gene seqs to pretrain longformer, but i has been seemly dead in step 0 ....",
"Can you please post the output of:\r\n```\r\ntype(model)\r\n```\r\nof your working environment?\r\nIn case it is showing something with `....BartModel`, can you please show us the definition of the class BertEncoder? You can locate it in the directory of:\r\n```\r\nimport transformers\r\nprint(transformers.__file__)\r\n```\r\n",
"> Can you please post the output of:\r\n> \r\n> ```\r\n> type(model)\r\n> ```\r\n> \r\n> of your working environment?\r\n> In case it is showing something with `....BartModel`, can you please show us the definition of the class BertEncoder? You can locate it in the directory of:\r\n> \r\n> ```\r\n> import transformers\r\n> print(transformers.__file__)\r\n> ```\r\n\r\n# code\r\n```python\r\nfrom transformers import AutoModel, AutoTokenizer # , pipeline\r\nimport transformers\r\nprint(transformers.__file__)\r\n\r\n\r\nmodel_name = 'pre-model/' + 'longformer-encdec-base-16384'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModel.from_pretrained(model_name)\r\n# classifier = pipeline('feature-extraction', model=model, tokenizer=tokenizer)\r\n\r\nprint(type(model))\r\n\r\n\r\n```\r\n\r\n# env0:\r\n```bash\r\n/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/__init__.py\r\nSome weights of the model checkpoint at pre-model/longformer-encdec-base-16384 were not used when initializing BartModel: ['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 
'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']\r\n- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of BartModel were not initialized from the model checkpoint at pre-model/longformer-encdec-base-16384 and are newly initialized: ['model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.1.self_attn.k_proj.weight', 'model.encoder.layers.1.self_attn.k_proj.bias', 'model.encoder.layers.1.self_attn.v_proj.weight', 'model.encoder.layers.1.self_attn.v_proj.bias', 'model.encoder.layers.1.self_attn.q_proj.weight', 'model.encoder.layers.1.self_attn.q_proj.bias', 'model.encoder.layers.1.self_attn.out_proj.weight', 'model.encoder.layers.1.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.2.self_attn.k_proj.bias', 'model.encoder.layers.2.self_attn.v_proj.weight', 'model.encoder.layers.2.self_attn.v_proj.bias', 'model.encoder.layers.2.self_attn.q_proj.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.weight', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.k_proj.bias', 'model.encoder.layers.3.self_attn.v_proj.weight', 'model.encoder.layers.3.self_attn.v_proj.bias', 'model.encoder.layers.3.self_attn.q_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.self_attn.out_proj.weight', 'model.encoder.layers.3.self_attn.out_proj.bias', 'model.encoder.layers.4.self_attn.k_proj.weight', 'model.encoder.layers.4.self_attn.k_proj.bias', 'model.encoder.layers.4.self_attn.v_proj.weight', 'model.encoder.layers.4.self_attn.v_proj.bias', 'model.encoder.layers.4.self_attn.q_proj.weight', 'model.encoder.layers.4.self_attn.q_proj.bias', 'model.encoder.layers.4.self_attn.out_proj.weight', 'model.encoder.layers.4.self_attn.out_proj.bias', 'model.encoder.layers.5.self_attn.k_proj.weight', 'model.encoder.layers.5.self_attn.k_proj.bias', 'model.encoder.layers.5.self_attn.v_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.bias', 'model.encoder.layers.5.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.q_proj.bias', 'model.encoder.layers.5.self_attn.out_proj.weight', 'model.encoder.layers.5.self_attn.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n<class 'transformers.modeling_bart.BartModel'>\r\n```\r\n# env1:tf2-pt-keras\r\n```bash\r\n/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/transformers-4.4.2-py3.8.egg/transformers/__init__.py\r\nSome weights of the model checkpoint at pre-model/longformer-encdec-base-16384 were not used when initializing BartModel: ['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 
'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 
'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']\r\n- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of BartModel were not initialized from the model checkpoint at pre-model/longformer-encdec-base-16384 and are newly initialized: ['model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.1.self_attn.k_proj.weight', 'model.encoder.layers.1.self_attn.k_proj.bias', 'model.encoder.layers.1.self_attn.v_proj.weight', 'model.encoder.layers.1.self_attn.v_proj.bias', 'model.encoder.layers.1.self_attn.q_proj.weight', 'model.encoder.layers.1.self_attn.q_proj.bias', 'model.encoder.layers.1.self_attn.out_proj.weight', 'model.encoder.layers.1.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.2.self_attn.k_proj.bias', 'model.encoder.layers.2.self_attn.v_proj.weight', 'model.encoder.layers.2.self_attn.v_proj.bias', 'model.encoder.layers.2.self_attn.q_proj.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.weight', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.k_proj.bias', 'model.encoder.layers.3.self_attn.v_proj.weight', 'model.encoder.layers.3.self_attn.v_proj.bias', 'model.encoder.layers.3.self_attn.q_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.self_attn.out_proj.weight', 'model.encoder.layers.3.self_attn.out_proj.bias', 'model.encoder.layers.4.self_attn.k_proj.weight', 'model.encoder.layers.4.self_attn.k_proj.bias', 'model.encoder.layers.4.self_attn.v_proj.weight', 'model.encoder.layers.4.self_attn.v_proj.bias', 'model.encoder.layers.4.self_attn.q_proj.weight', 'model.encoder.layers.4.self_attn.q_proj.bias', 'model.encoder.layers.4.self_attn.out_proj.weight', 'model.encoder.layers.4.self_attn.out_proj.bias', 'model.encoder.layers.5.self_attn.k_proj.weight', 'model.encoder.layers.5.self_attn.k_proj.bias', 'model.encoder.layers.5.self_attn.v_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.bias', 'model.encoder.layers.5.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.q_proj.bias', 'model.encoder.layers.5.self_attn.out_proj.weight', 'model.encoder.layers.5.self_attn.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nTraceback (most recent call last):\r\n File \"/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/IPython/core/interactiveshell.py\", line 3331, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-2-b1f8935f1cfa>\", line 1, in <module>\r\n runfile('/home/pbc/Documents/PycharmProjects/myEPI/src/github.py', wdir='/home/pbc/Documents/PycharmProjects/myEPI/src')\r\n File \"/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/202.7660.27/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py\", line 197, in runfile\r\n 
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script\r\n File \"/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/202.7660.27/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/home/pbc/Documents/PycharmProjects/myEPI/src/github.py\", line 8, in <module>\r\n model = AutoModel.from_pretrained(model_name)\r\n File \"/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/transformers-4.4.2-py3.8.egg/transformers/models/auto/modeling_auto.py\", line 815, in from_pretrained\r\n pretrained_model_name_or_path, *model_args, config=config, **kwargs\r\n File \"/home/pbc/anaconda3/envs/tf2_pt_kr2/lib/python3.6/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py\", line 1183, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)\r\nRuntimeError: Error(s) in loading state_dict for BartModel:\r\n\tsize mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from checkpoint, the shape in current model is torch.Size([1026, 768]).\r\n```\r\n# env2: copied from env0 but not worked\r\n```bash\r\nhome/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/__init__.py\r\nSome weights of the model checkpoint at pre-model/longformer-encdec-base-16384 were not used when initializing BartModel: ['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 
'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 
'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']\r\n- This IS expected if you are initializing BartModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).\r\n- This IS NOT expected if you are initializing BartModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of BartModel were not initialized from the model checkpoint at pre-model/longformer-encdec-base-16384 and are newly initialized: ['model.encoder.layers.0.self_attn.k_proj.weight', 'model.encoder.layers.0.self_attn.k_proj.bias', 'model.encoder.layers.0.self_attn.v_proj.weight', 'model.encoder.layers.0.self_attn.v_proj.bias', 'model.encoder.layers.0.self_attn.q_proj.weight', 'model.encoder.layers.0.self_attn.q_proj.bias', 'model.encoder.layers.0.self_attn.out_proj.weight', 'model.encoder.layers.0.self_attn.out_proj.bias', 'model.encoder.layers.1.self_attn.k_proj.weight', 'model.encoder.layers.1.self_attn.k_proj.bias', 'model.encoder.layers.1.self_attn.v_proj.weight', 'model.encoder.layers.1.self_attn.v_proj.bias', 'model.encoder.layers.1.self_attn.q_proj.weight', 'model.encoder.layers.1.self_attn.q_proj.bias', 'model.encoder.layers.1.self_attn.out_proj.weight', 'model.encoder.layers.1.self_attn.out_proj.bias', 'model.encoder.layers.2.self_attn.k_proj.weight', 'model.encoder.layers.2.self_attn.k_proj.bias', 'model.encoder.layers.2.self_attn.v_proj.weight', 'model.encoder.layers.2.self_attn.v_proj.bias', 'model.encoder.layers.2.self_attn.q_proj.weight', 'model.encoder.layers.2.self_attn.q_proj.bias', 'model.encoder.layers.2.self_attn.out_proj.weight', 'model.encoder.layers.2.self_attn.out_proj.bias', 'model.encoder.layers.3.self_attn.k_proj.weight', 'model.encoder.layers.3.self_attn.k_proj.bias', 'model.encoder.layers.3.self_attn.v_proj.weight', 'model.encoder.layers.3.self_attn.v_proj.bias', 'model.encoder.layers.3.self_attn.q_proj.weight', 'model.encoder.layers.3.self_attn.q_proj.bias', 'model.encoder.layers.3.self_attn.out_proj.weight', 'model.encoder.layers.3.self_attn.out_proj.bias', 'model.encoder.layers.4.self_attn.k_proj.weight', 'model.encoder.layers.4.self_attn.k_proj.bias', 'model.encoder.layers.4.self_attn.v_proj.weight', 'model.encoder.layers.4.self_attn.v_proj.bias', 'model.encoder.layers.4.self_attn.q_proj.weight', 'model.encoder.layers.4.self_attn.q_proj.bias', 'model.encoder.layers.4.self_attn.out_proj.weight', 'model.encoder.layers.4.self_attn.out_proj.bias', 'model.encoder.layers.5.self_attn.k_proj.weight', 'model.encoder.layers.5.self_attn.k_proj.bias', 'model.encoder.layers.5.self_attn.v_proj.weight', 'model.encoder.layers.5.self_attn.v_proj.bias', 'model.encoder.layers.5.self_attn.q_proj.weight', 'model.encoder.layers.5.self_attn.q_proj.bias', 'model.encoder.layers.5.self_attn.out_proj.weight', 'model.encoder.layers.5.self_attn.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/203.7148.72/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py\", line 197, in runfile\r\n pydev_imports.execfile(filename, 
global_vars, local_vars) # execute the script\r\n File \"/home/pbc/.local/share/JetBrains/Toolbox/apps/PyCharm-P/ch-0/203.7148.72/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/home/pbc/PycharmProjects/bert/github.py\", line 7, in <module>\r\n model = AutoModel.from_pretrained(model_name)\r\n File \"/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/modeling_auto.py\", line 523, in from_pretrained\r\n return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)\r\n File \"/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 972, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)\r\nRuntimeError: Error(s) in loading state_dict for BartModel:\r\n\tsize mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from checkpoint, the shape in current model is torch.Size([1026, 768]).\r\n ```\r\n\r\nI found transformers.__file__ all are different\r\n\r\n",
"Now please check this directory `/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/` and locate the file called `modeling_bart.py`. Post the BertEncoder class definition here.\r\n\r\nYou should also pay attention to the weights that were not used from the pre-trained weights:\r\n`['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 
'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias']\r\n`\r\n\r\nAre you sure that this model (including config and weights) can be used with the transformers AutoModel class? Currently, it looks to me that someone has built his own model with the transformers library (which is not supposed to work with the AutoClasses). ",
"> Now please check this directory `/home/pbc/anaconda3/envs/dnabert/lib/python3.6/site-packages/transformers/` and locate the file called `modeling_bart.py`. Post the BertEncoder class definition here.\r\n> \r\n> You should also pay attention to the weights that were not used from the pre-trained weights:\r\n> `['model.encoder.layers.0.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.0.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.0.self_attn.output.weight', 'model.encoder.layers.0.self_attn.output.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.1.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.1.self_attn.output.weight', 'model.encoder.layers.1.self_attn.output.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.2.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.2.self_attn.output.weight', 'model.encoder.layers.2.self_attn.output.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value.weight', 
'model.encoder.layers.3.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.3.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.3.self_attn.output.weight', 'model.encoder.layers.3.self_attn.output.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.4.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.4.self_attn.output.weight', 'model.encoder.layers.4.self_attn.output.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.query_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.key_global.bias', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.weight', 'model.encoder.layers.5.self_attn.longformer_self_attn.value_global.bias', 'model.encoder.layers.5.self_attn.output.weight', 'model.encoder.layers.5.self_attn.output.bias'] `\r\n> \r\n> Are you sure that this model (including config and weights) can be used with the transformers AutoModel class? Currently, it looks to me that someone has built his own model with the transformers library (which is not supposed to work with the AutoClasses).\r\n\r\nI am not sure, when I start to use bert model by the same way , it works, So I make it again in longformer, then many errors occured one by one",
"That is a different thing. What you have here `longformer-encdec-base-16384` is something that is provided by someone that is not supposed to work with the provided AutoClasses by hugging face. Please check the code of this someone and see what this person did.\r\nI think this is the repository you should check out: https://github.com/allenai/ms2 \r\nor maybe this code snippet: https://github.com/allenai/longformer/issues/154 ",
"> That is a different thing. What you have here `longformer-encdec-base-16384` is something that is provided by someone that is not supposed to work with the provided AutoClasses by hugging face. Please check the code of this someone and see what this person did.\r\n> I think this is the repository you should check out: https://github.com/allenai/ms2\r\n> or maybe this code snippet: [allenai/longformer#154](https://github.com/allenai/longformer/issues/154)\r\n\r\nyeah, you means that I should install env with allenai/longformer and not huggingface, at start I read allenai/longformer's readme, i just found that it may from huggingface and don't look for any things about how to use its longformer model by python code. \r\n\r\nI have seen [allenai/longformer#154](https://github.com/allenai/longformer/issues/154), and I will try it through Imitating her code. \r\n\r\nAnd another question is, if I want to use hugging face env to load model, that means I should download in https://huggingface.co/? \r\n\r\nAs for ms2, I will view it soon, Thanks!\r\n\r\n# Finally,thank you very much! you save me!Thanks!ORZ",
"Yes, the `allenai/longformer` is the framework you should use for `longformer-encdec-base-16384`.\r\n\r\n> And another question is, if I want to use hugging face env to load model, that means I should download in https://huggingface.co/?\r\n\r\nYes, you can check the pre-trained models here: https://huggingface.co/models",
"Okay, and I am curious that how do you find `allenai/longformer#154` and https://github.com/allenai/ms2. If I have the skill, I can save myself quickly,haha.",
"Use a search engine of your choice and look for `longformer-encdec-base-16384` ;-)",
"> longformer-encdec-base-16384\r\n\r\nOK,thank you very much!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | ```bash
RuntimeError: Error(s) in loading state_dict for BartModel:
size mismatch for model.encoder.embed_positions.weight: copying a param with shape torch.Size([16386, 768]) from checkpoint, the shape in current model is torch.Size([1026, 768]).
```
I am using the longformer model called longformer-encdec-base-16384, which is downloaded from https://github.com/allenai/longformer, and I load it with huggingface. When the transformers version is 3.1.0 the code runs, but when it is 4.4.2 the error above happens.
Meanwhile, when I use the model to process pairs of sentences, I found that the returned token_type_ids values are all zero
and never one. However, the model's special_tokens_map.json does define cls_token and sep_token.
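For what it's worth, the two sizes in the error line up with the position-embedding tables plus Bart's learned-position offset of 2 (16386 = 16384 + 2 in the checkpoint versus 1026 = 1024 + 2 in a default Bart config), so the checkpoint simply expects a much longer maximum input than plain Bart provides. A hedged way to confirm this directly from the downloaded weights (the local path is hypothetical):

```python
import torch

# Inspect the raw state dict without building a model (the path is an assumption, adjust to your download)
state_dict = torch.load("./longformer-encdec-base-16384/pytorch_model.bin", map_location="cpu")
print(state_dict["model.encoder.embed_positions.weight"].shape)  # torch.Size([16386, 768]), as in the error above
```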
Finally, I sincerely hope you can reply soon. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11301/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11300 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11300/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11300/comments | https://api.github.com/repos/huggingface/transformers/issues/11300/events | https://github.com/huggingface/transformers/pull/11300 | 860,524,909 | MDExOlB1bGxSZXF1ZXN0NjE3Mzg3ODU3 | 11,300 | EncoderDecoderConfigs should not create new objects | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten @patil-suraj : Could please help me with the failed test? The error message is not that expressive:\r\n\r\n> Run cat reports/tests_templates_failures_short.txt\r\n> cat reports/tests_templates_failures_short.txt\r\n> shell: /usr/bin/bash -e {0}\r\n> env:\r\n> pythonLocation: /opt/hostedtoolcache/Python/3.6.13/x64\r\n> cat: reports/tests_templates_failures_short.txt: No such file or directory\r\n> Error: Process completed with exit code 1.",
"> Instead of modifying the config, I think one alternate solution is to assign the shared config object to the encoder and decoder, after this line\r\n> \r\n> https://github.com/huggingface/transformers/blob/95dab34d5588fb155dfed8293ac2fbb1217a95a7/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L345\r\n> \r\n> ```python\r\n> encoder.config = config.encoder\r\n> decoder.config = config.decoder\r\n> ```\r\nPlease correct me if I am wrong but what happens, in that case, is the following:\r\n1. Casting already existing config objects to dictionaries.\r\n2. Recreating config objects from those dictionaries.\r\n3. Initializing EncoderDecoderConfig with those new config objects.\r\n4. Throwing away the newly generating config objects by assigning the ones that were already present before step 1.\r\n\r\nI think this PR is a cleaner implementation by avoids steps 1-3 and executing 4 directly.\r\n\r\n",
"Good point!\r\n\r\nBut this breaks backward compatibility. With this change, none of the previously trained models will be able to load because the config will now be in-compatible. For ex if you try\r\n```python\r\nconfig = EncoderDecoderConfig.from_pretrained(\"google/bert2bert_L-24_wmt_de_en\")\r\n```\r\non this PR, it raises an exception. So loading model fails.\r\n\r\nIn any case, backward compatibility is of utmost importance.",
"Hi @patil-suraj \r\nI have pushed a new version that is now backward compatible and also covers a case I have previously overlooked. After checking the implementation of the parent classes `PreTrainedModel` and `PretrainedConfig` I came to the conclusion that your suggestion is the best because they all transfer dictionaries as parameters and not config objects. \r\nWe could of course implement a type check like;\r\n```\r\nif type(encoder) == dict:\r\n#.....\r\n```\r\nbut I think this makes the code less readable. Would be great if you could have a look again and thanks for the constructive review so far :+1:.",
"Thanks a lot for taking care of this @cronoik :-) It's a nice fix. It would be awesome if you could check out the suggestions and then we can merge this IMO.",
"Hi @patil-suraj @patrickvonplaten, \r\nthanks for all the suggestions. I think I am done. Could you please have a look?"
] | 1,618 | 1,619 | 1,619 | CONTRIBUTOR | null | # What does this PR do?
1. Removes the creation of separate config objects (previously three new objects: encoderdecoderConfig, encoderConfig, decoderConfig) and reuses the existing ones (encoderConfig and decoderConfig are now part of the encoderdecoderConfig)
2. Overwrites `resize_token_embeddings` from the parent class, because the inherited implementation does not work for the EncoderDecoderModel and currently throws an error
Fixes #11285
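For context, a hedged sketch of the behaviour this PR is about (illustrative only, not code taken from the PR): the composite config and the sub-models should end up referencing the same config objects rather than freshly created copies, so that later edits are visible on both sides.

```python
from transformers import EncoderDecoderModel

# Build a small encoder-decoder model; any checkpoint works for the illustration
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# The intent of the fix is that these identity checks hold (previously fresh config objects were created)
print(model.config.encoder is model.encoder.config)
print(model.config.decoder is model.decoder.config)
```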
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11300/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11300",
"html_url": "https://github.com/huggingface/transformers/pull/11300",
"diff_url": "https://github.com/huggingface/transformers/pull/11300.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11300.patch",
"merged_at": 1619343947000
} |
https://api.github.com/repos/huggingface/transformers/issues/11299 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11299/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11299/comments | https://api.github.com/repos/huggingface/transformers/issues/11299/events | https://github.com/huggingface/transformers/pull/11299 | 860,519,905 | MDExOlB1bGxSZXF1ZXN0NjE3Mzg0MzI1 | 11,299 | Pr2keep encoder decoder synced | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
Fixes #11285 + adds an implementation for the `resize_token_embeddings` method (currently the parent class implementation is used, which throws an error).
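For context, a hedged illustration of the call that currently fails (assumed example, not code from the PR):

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# With only the inherited implementation this throws an error instead of resizing
# both the encoder and decoder embeddings, which is what this PR addresses.
model.resize_token_embeddings(30600)
```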
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten, @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11299/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11299",
"html_url": "https://github.com/huggingface/transformers/pull/11299",
"diff_url": "https://github.com/huggingface/transformers/pull/11299.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11299.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11297 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11297/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11297/comments | https://api.github.com/repos/huggingface/transformers/issues/11297/events | https://github.com/huggingface/transformers/pull/11297 | 860,475,207 | MDExOlB1bGxSZXF1ZXN0NjE3MzU1MTAw | 11,297 | Fixing bug in generation | {
"login": "nicola-decao",
"id": 9703100,
"node_id": "MDQ6VXNlcjk3MDMxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9703100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicola-decao",
"html_url": "https://github.com/nicola-decao",
"followers_url": "https://api.github.com/users/nicola-decao/followers",
"following_url": "https://api.github.com/users/nicola-decao/following{/other_user}",
"gists_url": "https://api.github.com/users/nicola-decao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicola-decao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicola-decao/subscriptions",
"organizations_url": "https://api.github.com/users/nicola-decao/orgs",
"repos_url": "https://api.github.com/users/nicola-decao/repos",
"events_url": "https://api.github.com/users/nicola-decao/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicola-decao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Circle CI is unrelated - merging! Thanks a lot @nicola-decao "
] | 1,618 | 1,619 | 1,619 | CONTRIBUTOR | null | When passing `inputs_embeds` while leaving `input_ids=None`, the generation function fails because `input_ids` is created by the function even though it should not be.
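A hedged illustration of the scenario (assumed example, not code from the PR):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

ids = tok("Hello world", return_tensors="pt").input_ids
inputs_embeds = model.get_input_embeddings()(ids)  # the caller supplies embeddings instead of token ids

# The case this PR targets: only `inputs_embeds` is given, so generate() should not
# silently build an `input_ids` tensor of its own.
# outputs = model.generate(input_ids=None, inputs_embeds=inputs_embeds)
```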
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11297/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11297",
"html_url": "https://github.com/huggingface/transformers/pull/11297",
"diff_url": "https://github.com/huggingface/transformers/pull/11297.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11297.patch",
"merged_at": 1619195066000
} |
https://api.github.com/repos/huggingface/transformers/issues/11296 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11296/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11296/comments | https://api.github.com/repos/huggingface/transformers/issues/11296/events | https://github.com/huggingface/transformers/issues/11296 | 860,472,136 | MDU6SXNzdWU4NjA0NzIxMzY= | 11,296 | Cannot save GPT2 model with signature | {
"login": "ZhangTianrong",
"id": 20651728,
"node_id": "MDQ6VXNlcjIwNjUxNzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/20651728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhangTianrong",
"html_url": "https://github.com/ZhangTianrong",
"followers_url": "https://api.github.com/users/ZhangTianrong/followers",
"following_url": "https://api.github.com/users/ZhangTianrong/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhangTianrong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhangTianrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhangTianrong/subscriptions",
"organizations_url": "https://api.github.com/users/ZhangTianrong/orgs",
"repos_url": "https://api.github.com/users/ZhangTianrong/repos",
"events_url": "https://api.github.com/users/ZhangTianrong/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhangTianrong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Tbh, I've never really worked with `get_concrete_function()`, etc.... @Rocketknight1 - do you have an idea by any chance?",
"> Tbh, I've never really worked with `get_concrete_function()`, etc.... @Rocketknight1 - do you have an idea by any chance?\n\nI just realized I should tag the authors of the post I read about in the issue. I have edited the issue.",
"Hi, I'm the TF maintainer! There are two problems here. The first is that the first two arguments to `TFGPT2LMHeadModel` are not `input_ids` and `attention_mask`, they are `input_ids` and `past`, see [here](https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel). Also, `TFGPT2LMHeadModel` returns a tuple/dict of Tensors. Concrete functions do not support that - you need to pick which one you want. Try something like this, which should work (if you want an output other than \"logits\", you can just change that bit):\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import TFGPT2LMHeadModel\r\nfrom transformers import GPT2Tokenizer\r\n\r\[email protected]\r\ndef call_model(input_ids, attention_mask):\r\n return model(input_ids=input_ids, attention_mask=attention_mask)['logits']\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"distilgpt2\")\r\nmodel = TFGPT2LMHeadModel.from_pretrained(\"distilgpt2\", pad_token_id=tokenizer.eos_token_id)\r\nconcrete_function = call_model.get_concrete_function(tf.TensorSpec([None, 384], tf.int32, name=\"input_ids\"), tf.TensorSpec([None, 384], tf.int32, name=\"attention_mask\"))\r\ntf.saved_model.save(model, 'distilgpt2_sig', signatures=concrete_function)\r\n```",
"> Hi, I'm the TF maintainer! There are two problems here. The first is that the first two arguments to `TFGPT2LMHeadModel` are not `input_ids` and `attention_mask`, they are `input_ids` and `past`, see [here](https://huggingface.co/transformers/model_doc/gpt2.html#tfgpt2lmheadmodel). Also, `TFGPT2LMHeadModel` returns a tuple/dict of Tensors. Concrete functions do not support that - you need to pick which one you want. Try something like this, which should work (if you want an output other than \"logits\", you can just change that bit):\r\n> \r\n> ```\r\n> import tensorflow as tf\r\n> from transformers import TFGPT2LMHeadModel\r\n> from transformers import GPT2Tokenizer\r\n> \r\n> @tf.function\r\n> def call_model(input_ids, attention_mask):\r\n> return model(input_ids=input_ids, attention_mask=attention_mask)['logits']\r\n> \r\n> tokenizer = GPT2Tokenizer.from_pretrained(\"distilgpt2\")\r\n> model = TFGPT2LMHeadModel.from_pretrained(\"distilgpt2\", pad_token_id=tokenizer.eos_token_id)\r\n> concrete_function = call_model.get_concrete_function(tf.TensorSpec([None, 384], tf.int32, name=\"input_ids\"), tf.TensorSpec([None, 384], tf.int32, name=\"attention_mask\"))\r\n> tf.saved_model.save(model, 'distilgpt2_sig', signatures=concrete_function)\r\n> ```\r\n\r\nThat works! Thank you for the help. I am not familiar with TF especially things like `get_concrete_function`, I didn't know you can define a function outside the model and then save it. "
] | 1,618 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.4
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@VictorSanh @n1t0 @Pierrci
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I am trying to follow this [post](https://blog.tensorflow.org/2020/05/how-hugging-face-achieved-2x-performance-boost-question-answering.html) where @Pierrci illustrated how to convert a distilled BERT model into a TensorFlow SavedModel and serve it with TensorFlow.js in the end. I would like to do something similar with a distilgpt2 model.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
import tensorflow as tf
from transformers import TFGPT2LMHeadModel
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = TFGPT2LMHeadModel.from_pretrained("distilgpt2", pad_token_id=tokenizer.eos_token_id)
callable = tf.function(model.call)
concrete_function = callable.get_concrete_function([tf.TensorSpec([None, 384], tf.int32, name="input_ids"), tf.TensorSpec([None, 384], tf.int32, name="attention_mask")])
tf.saved_model.save(model, 'distilgpt2_sig', signatures=concrete_function)
```
and the error messages are as follows:
```
ValueError: Got a non-Tensor value (<tf.Tensor 'StatefulPartitionedCall:1' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:2' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:3' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:4' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:5' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:6' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:7' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:8' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:9' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:10' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:11' shape=(2, None, 12, 384, 64) dtype=float32>, <tf.Tensor 'StatefulPartitionedCall:12' shape=(2, None, 12, 384, 64) dtype=float32>) for key 'past_key_values' in the output of the function __inference_call_90110 used to generate the SavedModel signature 'serving_default'. Outputs for functions used as signatures must be a single Tensor, a sequence of Tensors, or a dictionary from string to Tensor.
```
I can save the model if I don't specify `signatures`, but in that case the input shape defaults to [-1, 5].
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I expect the model to be saved without a problem.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11296/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11295 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11295/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11295/comments | https://api.github.com/repos/huggingface/transformers/issues/11295/events | https://github.com/huggingface/transformers/pull/11295 | 860,383,356 | MDExOlB1bGxSZXF1ZXN0NjE3Mjg4NjM1 | 11,295 | Improve "infer_framework_from_model" func readability | {
"login": "shabie",
"id": 30535146,
"node_id": "MDQ6VXNlcjMwNTM1MTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/30535146?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shabie",
"html_url": "https://github.com/shabie",
"followers_url": "https://api.github.com/users/shabie/followers",
"following_url": "https://api.github.com/users/shabie/following{/other_user}",
"gists_url": "https://api.github.com/users/shabie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shabie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabie/subscriptions",
"organizations_url": "https://api.github.com/users/shabie/orgs",
"repos_url": "https://api.github.com/users/shabie/repos",
"events_url": "https://api.github.com/users/shabie/events{/privacy}",
"received_events_url": "https://api.github.com/users/shabie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,638 | 1,622 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11295/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11295",
"html_url": "https://github.com/huggingface/transformers/pull/11295",
"diff_url": "https://github.com/huggingface/transformers/pull/11295.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11295.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11294 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11294/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11294/comments | https://api.github.com/repos/huggingface/transformers/issues/11294/events | https://github.com/huggingface/transformers/issues/11294 | 860,379,431 | MDU6SXNzdWU4NjAzNzk0MzE= | 11,294 | serious bug with trainer.py when restarting the training from a checkpoint | {
"login": "dorooddorood606",
"id": 79288051,
"node_id": "MDQ6VXNlcjc5Mjg4MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/79288051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorooddorood606",
"html_url": "https://github.com/dorooddorood606",
"followers_url": "https://api.github.com/users/dorooddorood606/followers",
"following_url": "https://api.github.com/users/dorooddorood606/following{/other_user}",
"gists_url": "https://api.github.com/users/dorooddorood606/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorooddorood606/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorooddorood606/subscriptions",
"organizations_url": "https://api.github.com/users/dorooddorood606/orgs",
"repos_url": "https://api.github.com/users/dorooddorood606/repos",
"events_url": "https://api.github.com/users/dorooddorood606/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorooddorood606/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can also consider the modification in finetune_trainer.py\r\n\r\nhttps://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/legacy/seq2seq/finetune_trainer.py#L233\r\n\r\nIf you freeze some parameters as done in the line above, those would not be there when you load the model and restarting the training, this is really a serious issue, thanks for the help \r\n",
"Here is the minimal code to generate this bug, we make a model, we freeze, then we save it (as done in trainer checkpoint), then we load it (as done in train() in trainer.py) and we see if the feezed parameters are there or not\r\n\r\n```\r\nfrom transformers import T5ForConditionalGeneration\r\nfrom typing import Optional \r\nimport torch\r\nimport os\r\n\r\n# This is copied from trainer.py\r\ndef _save(model, output_dir: Optional[str] = None):\r\n os.makedirs(output_dir, exist_ok=True)\r\n print(f\"Saving model checkpoint to {output_dir}\")\r\n # Save a trained model and configuration using `save_pretrained()`.\r\n # They can then be reloaded using `from_pretrained()`\r\n state_dict = model.state_dict()\r\n model.save_pretrained(output_dir, state_dict=state_dict)\r\n\r\ndef print_num_parameters(model):\r\n for n,p in model.named_parameters():\r\n if (p.requires_grad):\r\n print(\"n \", n) \r\n\r\ndef freeze_params(model):\r\n for n,p in model.named_parameters():\r\n p.requires_grad = False\r\n\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-base\")\r\nfreeze_params(model)\r\nprint(\"#### parameters before saving ####\")\r\nprint_num_parameters(model)\r\n_save(model, \"temp_model\")\r\n\r\n# Now lets load the model as done in trainer from the checkpoint.\r\nmodel = model.from_pretrained(\"temp_model\")\r\n\r\n# Now lets print the number of parameters\r\nprint(\"#### parameters after saving ####\")\r\nprint_num_parameters(model)\r\n```\r\n\r\nsurprisingly, no, the freezed parameters are not freezed anymore after loading the checkpoint:\r\n```\r\n#### parameters before saving ####\r\nSaving model checkpoint to temp_model\r\n#### parameters after saving ####\r\nn shared.weight\r\nn encoder.block.0.layer.0.SelfAttention.q.weight\r\nn encoder.block.0.layer.0.SelfAttention.k.weight\r\nn encoder.block.0.layer.0.SelfAttention.v.weight\r\nn encoder.block.0.layer.0.SelfAttention.o.weight\r\nn encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight\r\nn encoder.block.0.layer.0.layer_norm.weight\r\nn encoder.block.0.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.0.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.0.layer.1.layer_norm.weight\r\nn encoder.block.1.layer.0.SelfAttention.q.weight\r\nn encoder.block.1.layer.0.SelfAttention.k.weight\r\nn encoder.block.1.layer.0.SelfAttention.v.weight\r\nn encoder.block.1.layer.0.SelfAttention.o.weight\r\nn encoder.block.1.layer.0.layer_norm.weight\r\nn encoder.block.1.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.1.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.1.layer.1.layer_norm.weight\r\nn encoder.block.2.layer.0.SelfAttention.q.weight\r\nn encoder.block.2.layer.0.SelfAttention.k.weight\r\nn encoder.block.2.layer.0.SelfAttention.v.weight\r\nn encoder.block.2.layer.0.SelfAttention.o.weight\r\nn encoder.block.2.layer.0.layer_norm.weight\r\nn encoder.block.2.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.2.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.2.layer.1.layer_norm.weight\r\nn encoder.block.3.layer.0.SelfAttention.q.weight\r\nn encoder.block.3.layer.0.SelfAttention.k.weight\r\nn encoder.block.3.layer.0.SelfAttention.v.weight\r\nn encoder.block.3.layer.0.SelfAttention.o.weight\r\nn encoder.block.3.layer.0.layer_norm.weight\r\nn encoder.block.3.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.3.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.3.layer.1.layer_norm.weight\r\nn encoder.block.4.layer.0.SelfAttention.q.weight\r\nn encoder.block.4.layer.0.SelfAttention.k.weight\r\nn 
encoder.block.4.layer.0.SelfAttention.v.weight\r\nn encoder.block.4.layer.0.SelfAttention.o.weight\r\nn encoder.block.4.layer.0.layer_norm.weight\r\nn encoder.block.4.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.4.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.4.layer.1.layer_norm.weight\r\nn encoder.block.5.layer.0.SelfAttention.q.weight\r\nn encoder.block.5.layer.0.SelfAttention.k.weight\r\nn encoder.block.5.layer.0.SelfAttention.v.weight\r\nn encoder.block.5.layer.0.SelfAttention.o.weight\r\nn encoder.block.5.layer.0.layer_norm.weight\r\nn encoder.block.5.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.5.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.5.layer.1.layer_norm.weight\r\nn encoder.block.6.layer.0.SelfAttention.q.weight\r\nn encoder.block.6.layer.0.SelfAttention.k.weight\r\nn encoder.block.6.layer.0.SelfAttention.v.weight\r\nn encoder.block.6.layer.0.SelfAttention.o.weight\r\nn encoder.block.6.layer.0.layer_norm.weight\r\nn encoder.block.6.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.6.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.6.layer.1.layer_norm.weight\r\nn encoder.block.7.layer.0.SelfAttention.q.weight\r\nn encoder.block.7.layer.0.SelfAttention.k.weight\r\nn encoder.block.7.layer.0.SelfAttention.v.weight\r\nn encoder.block.7.layer.0.SelfAttention.o.weight\r\nn encoder.block.7.layer.0.layer_norm.weight\r\nn encoder.block.7.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.7.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.7.layer.1.layer_norm.weight\r\nn encoder.block.8.layer.0.SelfAttention.q.weight\r\nn encoder.block.8.layer.0.SelfAttention.k.weight\r\nn encoder.block.8.layer.0.SelfAttention.v.weight\r\nn encoder.block.8.layer.0.SelfAttention.o.weight\r\nn encoder.block.8.layer.0.layer_norm.weight\r\nn encoder.block.8.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.8.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.8.layer.1.layer_norm.weight\r\nn encoder.block.9.layer.0.SelfAttention.q.weight\r\nn encoder.block.9.layer.0.SelfAttention.k.weight\r\nn encoder.block.9.layer.0.SelfAttention.v.weight\r\nn encoder.block.9.layer.0.SelfAttention.o.weight\r\nn encoder.block.9.layer.0.layer_norm.weight\r\nn encoder.block.9.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.9.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.9.layer.1.layer_norm.weight\r\nn encoder.block.10.layer.0.SelfAttention.q.weight\r\nn encoder.block.10.layer.0.SelfAttention.k.weight\r\nn encoder.block.10.layer.0.SelfAttention.v.weight\r\nn encoder.block.10.layer.0.SelfAttention.o.weight\r\nn encoder.block.10.layer.0.layer_norm.weight\r\nn encoder.block.10.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.10.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.10.layer.1.layer_norm.weight\r\nn encoder.block.11.layer.0.SelfAttention.q.weight\r\nn encoder.block.11.layer.0.SelfAttention.k.weight\r\nn encoder.block.11.layer.0.SelfAttention.v.weight\r\nn encoder.block.11.layer.0.SelfAttention.o.weight\r\nn encoder.block.11.layer.0.layer_norm.weight\r\nn encoder.block.11.layer.1.DenseReluDense.wi.weight\r\nn encoder.block.11.layer.1.DenseReluDense.wo.weight\r\nn encoder.block.11.layer.1.layer_norm.weight\r\nn encoder.final_layer_norm.weight\r\nn decoder.block.0.layer.0.SelfAttention.q.weight\r\nn decoder.block.0.layer.0.SelfAttention.k.weight\r\nn decoder.block.0.layer.0.SelfAttention.v.weight\r\nn decoder.block.0.layer.0.SelfAttention.o.weight\r\nn decoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight\r\nn 
decoder.block.0.layer.0.layer_norm.weight\r\nn decoder.block.0.layer.1.EncDecAttention.q.weight\r\nn decoder.block.0.layer.1.EncDecAttention.k.weight\r\nn decoder.block.0.layer.1.EncDecAttention.v.weight\r\nn decoder.block.0.layer.1.EncDecAttention.o.weight\r\nn decoder.block.0.layer.1.layer_norm.weight\r\nn decoder.block.0.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.0.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.0.layer.2.layer_norm.weight\r\nn decoder.block.1.layer.0.SelfAttention.q.weight\r\nn decoder.block.1.layer.0.SelfAttention.k.weight\r\nn decoder.block.1.layer.0.SelfAttention.v.weight\r\nn decoder.block.1.layer.0.SelfAttention.o.weight\r\nn decoder.block.1.layer.0.layer_norm.weight\r\nn decoder.block.1.layer.1.EncDecAttention.q.weight\r\nn decoder.block.1.layer.1.EncDecAttention.k.weight\r\nn decoder.block.1.layer.1.EncDecAttention.v.weight\r\nn decoder.block.1.layer.1.EncDecAttention.o.weight\r\nn decoder.block.1.layer.1.layer_norm.weight\r\nn decoder.block.1.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.1.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.1.layer.2.layer_norm.weight\r\nn decoder.block.2.layer.0.SelfAttention.q.weight\r\nn decoder.block.2.layer.0.SelfAttention.k.weight\r\nn decoder.block.2.layer.0.SelfAttention.v.weight\r\nn decoder.block.2.layer.0.SelfAttention.o.weight\r\nn decoder.block.2.layer.0.layer_norm.weight\r\nn decoder.block.2.layer.1.EncDecAttention.q.weight\r\nn decoder.block.2.layer.1.EncDecAttention.k.weight\r\nn decoder.block.2.layer.1.EncDecAttention.v.weight\r\nn decoder.block.2.layer.1.EncDecAttention.o.weight\r\nn decoder.block.2.layer.1.layer_norm.weight\r\nn decoder.block.2.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.2.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.2.layer.2.layer_norm.weight\r\nn decoder.block.3.layer.0.SelfAttention.q.weight\r\nn decoder.block.3.layer.0.SelfAttention.k.weight\r\nn decoder.block.3.layer.0.SelfAttention.v.weight\r\nn decoder.block.3.layer.0.SelfAttention.o.weight\r\nn decoder.block.3.layer.0.layer_norm.weight\r\nn decoder.block.3.layer.1.EncDecAttention.q.weight\r\nn decoder.block.3.layer.1.EncDecAttention.k.weight\r\nn decoder.block.3.layer.1.EncDecAttention.v.weight\r\nn decoder.block.3.layer.1.EncDecAttention.o.weight\r\nn decoder.block.3.layer.1.layer_norm.weight\r\nn decoder.block.3.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.3.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.3.layer.2.layer_norm.weight\r\nn decoder.block.4.layer.0.SelfAttention.q.weight\r\nn decoder.block.4.layer.0.SelfAttention.k.weight\r\nn decoder.block.4.layer.0.SelfAttention.v.weight\r\nn decoder.block.4.layer.0.SelfAttention.o.weight\r\nn decoder.block.4.layer.0.layer_norm.weight\r\nn decoder.block.4.layer.1.EncDecAttention.q.weight\r\nn decoder.block.4.layer.1.EncDecAttention.k.weight\r\nn decoder.block.4.layer.1.EncDecAttention.v.weight\r\nn decoder.block.4.layer.1.EncDecAttention.o.weight\r\nn decoder.block.4.layer.1.layer_norm.weight\r\nn decoder.block.4.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.4.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.4.layer.2.layer_norm.weight\r\nn decoder.block.5.layer.0.SelfAttention.q.weight\r\nn decoder.block.5.layer.0.SelfAttention.k.weight\r\nn decoder.block.5.layer.0.SelfAttention.v.weight\r\nn decoder.block.5.layer.0.SelfAttention.o.weight\r\nn decoder.block.5.layer.0.layer_norm.weight\r\nn decoder.block.5.layer.1.EncDecAttention.q.weight\r\nn decoder.block.5.layer.1.EncDecAttention.k.weight\r\nn 
decoder.block.5.layer.1.EncDecAttention.v.weight\r\nn decoder.block.5.layer.1.EncDecAttention.o.weight\r\nn decoder.block.5.layer.1.layer_norm.weight\r\nn decoder.block.5.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.5.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.5.layer.2.layer_norm.weight\r\nn decoder.block.6.layer.0.SelfAttention.q.weight\r\nn decoder.block.6.layer.0.SelfAttention.k.weight\r\nn decoder.block.6.layer.0.SelfAttention.v.weight\r\nn decoder.block.6.layer.0.SelfAttention.o.weight\r\nn decoder.block.6.layer.0.layer_norm.weight\r\nn decoder.block.6.layer.1.EncDecAttention.q.weight\r\nn decoder.block.6.layer.1.EncDecAttention.k.weight\r\nn decoder.block.6.layer.1.EncDecAttention.v.weight\r\nn decoder.block.6.layer.1.EncDecAttention.o.weight\r\nn decoder.block.6.layer.1.layer_norm.weight\r\nn decoder.block.6.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.6.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.6.layer.2.layer_norm.weight\r\nn decoder.block.7.layer.0.SelfAttention.q.weight\r\nn decoder.block.7.layer.0.SelfAttention.k.weight\r\nn decoder.block.7.layer.0.SelfAttention.v.weight\r\nn decoder.block.7.layer.0.SelfAttention.o.weight\r\nn decoder.block.7.layer.0.layer_norm.weight\r\nn decoder.block.7.layer.1.EncDecAttention.q.weight\r\nn decoder.block.7.layer.1.EncDecAttention.k.weight\r\nn decoder.block.7.layer.1.EncDecAttention.v.weight\r\nn decoder.block.7.layer.1.EncDecAttention.o.weight\r\nn decoder.block.7.layer.1.layer_norm.weight\r\nn decoder.block.7.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.7.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.7.layer.2.layer_norm.weight\r\nn decoder.block.8.layer.0.SelfAttention.q.weight\r\nn decoder.block.8.layer.0.SelfAttention.k.weight\r\nn decoder.block.8.layer.0.SelfAttention.v.weight\r\nn decoder.block.8.layer.0.SelfAttention.o.weight\r\nn decoder.block.8.layer.0.layer_norm.weight\r\nn decoder.block.8.layer.1.EncDecAttention.q.weight\r\nn decoder.block.8.layer.1.EncDecAttention.k.weight\r\nn decoder.block.8.layer.1.EncDecAttention.v.weight\r\nn decoder.block.8.layer.1.EncDecAttention.o.weight\r\nn decoder.block.8.layer.1.layer_norm.weight\r\nn decoder.block.8.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.8.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.8.layer.2.layer_norm.weight\r\nn decoder.block.9.layer.0.SelfAttention.q.weight\r\nn decoder.block.9.layer.0.SelfAttention.k.weight\r\nn decoder.block.9.layer.0.SelfAttention.v.weight\r\nn decoder.block.9.layer.0.SelfAttention.o.weight\r\nn decoder.block.9.layer.0.layer_norm.weight\r\nn decoder.block.9.layer.1.EncDecAttention.q.weight\r\nn decoder.block.9.layer.1.EncDecAttention.k.weight\r\nn decoder.block.9.layer.1.EncDecAttention.v.weight\r\nn decoder.block.9.layer.1.EncDecAttention.o.weight\r\nn decoder.block.9.layer.1.layer_norm.weight\r\nn decoder.block.9.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.9.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.9.layer.2.layer_norm.weight\r\nn decoder.block.10.layer.0.SelfAttention.q.weight\r\nn decoder.block.10.layer.0.SelfAttention.k.weight\r\nn decoder.block.10.layer.0.SelfAttention.v.weight\r\nn decoder.block.10.layer.0.SelfAttention.o.weight\r\nn decoder.block.10.layer.0.layer_norm.weight\r\nn decoder.block.10.layer.1.EncDecAttention.q.weight\r\nn decoder.block.10.layer.1.EncDecAttention.k.weight\r\nn decoder.block.10.layer.1.EncDecAttention.v.weight\r\nn decoder.block.10.layer.1.EncDecAttention.o.weight\r\nn decoder.block.10.layer.1.layer_norm.weight\r\nn 
decoder.block.10.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.10.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.10.layer.2.layer_norm.weight\r\nn decoder.block.11.layer.0.SelfAttention.q.weight\r\nn decoder.block.11.layer.0.SelfAttention.k.weight\r\nn decoder.block.11.layer.0.SelfAttention.v.weight\r\nn decoder.block.11.layer.0.SelfAttention.o.weight\r\nn decoder.block.11.layer.0.layer_norm.weight\r\nn decoder.block.11.layer.1.EncDecAttention.q.weight\r\nn decoder.block.11.layer.1.EncDecAttention.k.weight\r\nn decoder.block.11.layer.1.EncDecAttention.v.weight\r\nn decoder.block.11.layer.1.EncDecAttention.o.weight\r\nn decoder.block.11.layer.1.layer_norm.weight\r\nn decoder.block.11.layer.2.DenseReluDense.wi.weight\r\nn decoder.block.11.layer.2.DenseReluDense.wo.weight\r\nn decoder.block.11.layer.2.layer_norm.weight\r\nn decoder.final_layer_norm.weight\r\n```\r\n",
"Your end example is not surprising: you are re-loading a new model so of course modifications done on the original model are erased.\r\nWe'll look at how we can fix this inside the trainer, to load the weights differently instead of using `from_pretrained`.",
"Hi @sgugger thank you for the response, my intention from the example was showing that the procedure happening inside the trainer if the user resume training from a modified model. thank you. ",
"@sgugger I also see some more issues with the trainer.py when I load the model from the checkpoint, the model which previously was training fine on the GPU gets out of memory issue, there must be a leakge of memory during the loading from a checkpoint, shall I make a separate ticket for this issue? thanks ",
"If it's because you add frozen parameters previously, that would explain the OOM error.",
"Hi @LysandreJik @sgugger \r\nthanks for the respose, but I do not see how this is relevant. \r\nDuring training, I also load the model and then freeze some of the parameters, then this trains fine, only during loading from a checkpoint, this goes to memory issues, but the procedure remains the same, by loading the model and then freezing params, i personally think there must be a bug somewhere in checkpoint loading, resulting in extra usage of memory,. thanks for your help ",
"One simple test to check this @sgugger would be get a model, and choose a batch size in a way that it just fit the memory but larger than that wont, then resume the training from a checkpoint, then I am sure you would also see the memory issue, even without any modification, just the baseline t5 I do see this issue with huggingface codes. thanks for your help ",
"> We'll look at how we can fix this inside the trainer, to load the weights differently instead of using `from_pretrained`.\r\n\r\n@sgugger, this is definitely related to what I need for deepspeed checkpoint resume. Currently we first load the model `from_pretrained` and then it gets dropped and replaced by the deepspeed checkpoint, which for huge models is a huge slowdown.\r\nSo let's coordinate this work.\r\n\r\nMy preliminary idea was to pass a new flag to `from_pretrained` which will do everything except actually loading the weights. I was planning to work on this this week.\r\n\r\n(plus we need to stop random init the weights when they are replaced with pre-trained weights, so this is related too but not directly to this particular issue)",
"Thanks @stas00 for your attention to this issue, this would be really awesome to have this fixed, thanks a lot for the great work you do ",
"You're in good hands, @dorooddorood606 - @sgugger is taking care of it already, I was just commenting that something similar needs to be done for deepspeed, so once @sgugger's PR goes in I will work on doing the same for deepspeed.",
"Dear @stas00 \r\nThank you very much both of you @sgugger to your great efforts and the great job you do, I also observe the vanilla t5-base checkpointing gets very different results after resume, I reported the bug here, this can be a relevant bug to this one:\r\n\r\nhttps://github.com/huggingface/transformers/issues/11323 \r\n\r\nSo this seems there are some randomness in t5-base model which is not considered in trainer.py resume from checkpointing part, if you also think these bugs can be related, I would greatly appreciate if you could also consider the losses when one resume from a checkpoint.\r\n I would like to thank you so much for your time and your efforts and the great and awesome job you do. "
] | 1,618 | 1,618 | 1,618 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
trainer: @sgugger, @patil-suraj
## Information
Hi, I see a serious issue with the trainer.py class. Please consider the run_translation.py script [1]: after the model is defined, freeze the encoder or wrap the model in a class, i.e. modify the model after this line https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/examples/seq2seq/run_translation.py#L331
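For illustration, a minimal sketch of the kind of modification meant here (an assumption on my side, not code taken from the script itself):

```python
# e.g. inserted right after the seq2seq model is created in run_translation.py:
for param in model.get_encoder().parameters():
    param.requires_grad = False  # freeze the encoder before the model is handed to the Trainer
```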
Then, during training, stop the run and try to resume it from the saved checkpoint. If you print the trainable parameters inside trainer.py, right before this line:
https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/src/transformers/trainer.py#L1062
like this
```
for n, p in model.named_parameters():
    if p.requires_grad:
        print(n)
```
what would we see? All parameters require gradients again, even the ones we froze. This is a serious bug: if the user modifies the model after creation, those modifications are not taken into account when restarting the training. Could you kindly have a look?
thanks
[1] https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_translation.py
## Expected behavior
The user should be able to resume training with the same model modifications that were in place before the checkpoint was saved. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11294/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11293 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11293/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11293/comments | https://api.github.com/repos/huggingface/transformers/issues/11293/events | https://github.com/huggingface/transformers/issues/11293 | 860,357,154 | MDU6SXNzdWU4NjAzNTcxNTQ= | 11,293 | OSError: Unable to load weights from pytorch checkpoint file | {
"login": "notooth1",
"id": 61880277,
"node_id": "MDQ6VXNlcjYxODgwMjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/61880277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notooth1",
"html_url": "https://github.com/notooth1",
"followers_url": "https://api.github.com/users/notooth1/followers",
"following_url": "https://api.github.com/users/notooth1/following{/other_user}",
"gists_url": "https://api.github.com/users/notooth1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notooth1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notooth1/subscriptions",
"organizations_url": "https://api.github.com/users/notooth1/orgs",
"repos_url": "https://api.github.com/users/notooth1/repos",
"events_url": "https://api.github.com/users/notooth1/events{/privacy}",
"received_events_url": "https://api.github.com/users/notooth1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | I got this error with the MT5 model. Can anyone help?
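For reference, a minimal version check (illustrative only; as the comment on this issue notes, the installed PyTorch build is too old to read this checkpoint's serialization format, so upgrading torch is the usual fix):

```python
import torch
import transformers

print(torch.__version__)        # an old torch cannot read checkpoints written by a newer one
print(transformers.__version__)
```

The full failing session: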
```
(base) notooth@Debian:~$ python
Python 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import MT5Model, T5Tokenizer
>>> model = MT5Model.from_pretrained("google/mt5-small")
Traceback (most recent call last):
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py", line 1062, in from_pretrained
File "/home/notooth/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 527, in load
with _open_zipfile_reader(f) as opened_zipfile:
File "/home/notooth/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 224, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /tmp/pip-req-build-66hwoyb6/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. (init at /tmp/pip-req-build-66hwoyb6/caffe2/serialize/inline_container.cc:132)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x6d (0x7f2e92daa2ad in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x25db (0x7f2e8eba52bb in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch.so)
frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x7b (0x7f2e8eba67cb in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x65d00e (0x7f2e91f1c00e in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x1375f9 (0x7f2e919f65f9 in /home/notooth/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #32: __libc_start_main + 0xea (0x7f2ea2fd5d0a in /lib/x86_64-linux-gnu/libc.so.6)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers-4.4.2-py3.8.egg/transformers/modeling_utils.py", line 1064, in from_pretrained
OSError: Unable to load weights from pytorch checkpoint file for 'google/mt5-small' at '/home/notooth/.cache/huggingface/transformers/8e7b2a80ddcb5611b27d8c89e1e8e33a947e105415051402a22b9c8d7d1caeb0.e22331f3a065b885b30ae3dd1ff11ccaf7fbc444485f6eb07ef5e0138bca8b70'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11293/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11292 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11292/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11292/comments | https://api.github.com/repos/huggingface/transformers/issues/11292/events | https://github.com/huggingface/transformers/pull/11292 | 860,341,074 | MDExOlB1bGxSZXF1ZXN0NjE3MjYwMDk1 | 11,292 | move device statements outside if statements | {
"login": "e-yi",
"id": 20715359,
"node_id": "MDQ6VXNlcjIwNzE1MzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/20715359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/e-yi",
"html_url": "https://github.com/e-yi",
"followers_url": "https://api.github.com/users/e-yi/followers",
"following_url": "https://api.github.com/users/e-yi/following{/other_user}",
"gists_url": "https://api.github.com/users/e-yi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/e-yi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/e-yi/subscriptions",
"organizations_url": "https://api.github.com/users/e-yi/orgs",
"repos_url": "https://api.github.com/users/e-yi/repos",
"events_url": "https://api.github.com/users/e-yi/events{/privacy}",
"received_events_url": "https://api.github.com/users/e-yi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Move some device statements outside if statements.
There are three model classes (GPT2Model, GPTNeoModel, CTRLModel) that define the variable `device` inside an `if` statement in their `forward()` method. This can be inconvenient for GPT2Model (see #11179) and is not consistent with how it is written in other model classes.
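As a rough illustration of the pattern (a sketch, not the exact diff), the device is derived once after the input checks instead of inside each branch:
```python
# Sketch of the moved device statement; variable names follow the usual forward() signature.
if input_ids is not None and inputs_embeds is not None:
    raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
elif input_ids is not None:
    input_shape = input_ids.size()
elif inputs_embeds is not None:
    input_shape = inputs_embeds.size()[:-1]
else:
    raise ValueError("You have to specify either input_ids or inputs_embeds")

device = input_ids.device if input_ids is not None else inputs_embeds.device
```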
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11292/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11292",
"html_url": "https://github.com/huggingface/transformers/pull/11292",
"diff_url": "https://github.com/huggingface/transformers/pull/11292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11292.patch",
"merged_at": 1618835140000
} |
https://api.github.com/repos/huggingface/transformers/issues/11290 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11290/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11290/comments | https://api.github.com/repos/huggingface/transformers/issues/11290/events | https://github.com/huggingface/transformers/issues/11290 | 860,204,615 | MDU6SXNzdWU4NjAyMDQ2MTU= | 11,290 | Python crashes when loading Bert model from pretrained | {
"login": "cmazzoni87",
"id": 26312587,
"node_id": "MDQ6VXNlcjI2MzEyNTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/26312587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cmazzoni87",
"html_url": "https://github.com/cmazzoni87",
"followers_url": "https://api.github.com/users/cmazzoni87/followers",
"following_url": "https://api.github.com/users/cmazzoni87/following{/other_user}",
"gists_url": "https://api.github.com/users/cmazzoni87/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cmazzoni87/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cmazzoni87/subscriptions",
"organizations_url": "https://api.github.com/users/cmazzoni87/orgs",
"repos_url": "https://api.github.com/users/cmazzoni87/repos",
"events_url": "https://api.github.com/users/cmazzoni87/events{/privacy}",
"received_events_url": "https://api.github.com/users/cmazzoni87/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is it possible you're running out of RAM (not necessarily GPU RAM)?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | Hello all, I am here because I am encountering a outmost obscure problem. Right when I begin by creating my model I encounter that my gpu usage spikes and then my python code crashes. This only happens when I try to use any of the models 'from_pretrained' only, I haven't had issues with neither Tensorflow nor PyTourch by themselves (this behavior is only native to transformers)
For example:
The problem arises when running this line of code, right at the beginning of my script ;
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
I get the following messages, which are pretty standard but as you can see in the bottom the code simply stops.
```
2021-04-16 16:16:35.330093: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-04-16 16:16:38.495667: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-04-16 16:16:38.519178: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1760] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.6705GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2021-04-16 16:16:38.519500: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-04-16 16:16:38.528695: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-04-16 16:16:38.528923: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
2021-04-16 16:16:38.533582: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cufft64_10.dll
2021-04-16 16:16:38.535368: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library curand64_10.dll
2021-04-16 16:16:38.540093: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusolver64_10.dll
2021-04-16 16:16:38.543728: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cusparse64_11.dll
2021-04-16 16:16:38.544662: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudnn64_8.dll
2021-04-16 16:16:38.544888: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1898] Adding visible gpu devices: 0
2021-04-16 16:16:38.545436: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-04-16 16:16:38.546588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1760] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1060 computeCapability: 6.1
coreClock: 1.6705GHz coreCount: 10 deviceMemorySize: 6.00GiB deviceMemoryBandwidth: 178.99GiB/s
2021-04-16 16:16:38.547283: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1898] Adding visible gpu devices: 0
2021-04-16 16:16:39.115250: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1300] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-04-16 16:16:39.115490: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1306] 0
2021-04-16 16:16:39.115592: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1319] 0: N
2021-04-16 16:16:39.115856: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1446] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4634 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1060, pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-04-16 16:16:39.419407: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublas64_11.dll
2021-04-16 16:16:39.709427: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cublasLt64_11.dll
Process finished with exit code -1073741819 (0xC0000005)
```
Has anyone else seen this? Is there something I am missing here?
Thank you for your help.
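In case it helps, here is a small check I can run next, following the out-of-memory suggestion from the comments (this is just a sketch and an assumption on my side, not a confirmed diagnosis; it needs `psutil` installed):
```python
# Watch process RAM while loading the model with the GPU disabled, to see whether
# system memory (rather than GPU memory) is what blows up.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"  # keep TensorFlow off the GPU for this test

import psutil
from transformers import TFBertForSequenceClassification

proc = psutil.Process()
print("RSS before load (MB):", proc.memory_info().rss / 1e6)
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")
print("RSS after load (MB):", proc.memory_info().rss / 1e6)
```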
Here are the details of my system.
- `transformers` version: Latest
- Platform: Windows
- Python version: 3.7
- PyTorch version (GPU?): Latest
- Tensorflow version (GPU?): Latest
- Using GPU in script?: Yes, GeForce GTX 1060 computeCapability: 6.1
- Using distributed or parallel set-up in script?: No
Models I encountered this error on:
- albert, bert, xlm:
Libraries that are related to this issue:
- text classification: @patrickvonplaten
- trainer: @sgugger
- pipelines: @LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11290/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11289 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11289/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11289/comments | https://api.github.com/repos/huggingface/transformers/issues/11289/events | https://github.com/huggingface/transformers/issues/11289 | 860,201,404 | MDU6SXNzdWU4NjAyMDE0MDQ= | 11,289 | google/pegasus-cnn_dailymail generates blank file | {
"login": "chz816",
"id": 26696253,
"node_id": "MDQ6VXNlcjI2Njk2MjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/26696253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chz816",
"html_url": "https://github.com/chz816",
"followers_url": "https://api.github.com/users/chz816/followers",
"following_url": "https://api.github.com/users/chz816/following{/other_user}",
"gists_url": "https://api.github.com/users/chz816/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chz816/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chz816/subscriptions",
"organizations_url": "https://api.github.com/users/chz816/orgs",
"repos_url": "https://api.github.com/users/chz816/repos",
"events_url": "https://api.github.com/users/chz816/events{/privacy}",
"received_events_url": "https://api.github.com/users/chz816/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @chz816 \r\n\r\nI can reproduce the issue. This is because pegasus doesn't really work with `fp16`since its trained with `bfloat16`, so in most cases, it overflows and returns `nan` logits. The model works as expected in `fp32`, so if you run the above command without the `--fp16` arg, it should give the expected results.\r\n\r\ncc @stas00 ",
"Thank you @patil-suraj!\r\n\r\nI have generated the summaries using ```pegasus-cnn_dailymail``` with the following performance: ```{'rouge1': 43.146, 'rouge2': 20.7292, 'rougeL': 30.4596, 'n_obs': 11490, 'seconds_per_sample': 0.2415, 'n_gpus': 3}```. It is lower than expected, but I think it can be explained by smaller batch size, which is caused by the memory limitation.\r\n\r\n```bash\r\npython -m torch.distributed.launch --nproc_per_node=3 run_distributed_eval.py \\\r\n --model_name google/pegasus-cnn_dailymail \\\r\n --save_dir $OUTPUT_DIR \\\r\n --data_dir $DATA_DIR \\\r\n --bs 16\r\n```\r\n\r\nCan you maybe explain why this problem does not exist for ```google/pegasus-xsum```? Thank you!",
"As @patil-suraj pointed out many models trained in `bfloat16` can't be run under mixed precision `fp16` (albeit pytorch are discussing bfloat16 mixed precision)\r\n\r\n`pegasus-cnn_dailymail` has an issue of underflow under `fp16`:\r\n\r\nlet's take a single frame - Linear forward for `lm_head`:\r\n\r\nfp32:\r\n\r\n```\r\nabs min abs max metadata\r\n lm_head Linear\r\n4.66e-10 1.13e+01 weight\r\n6.29e-07 4.47e+00 input[0]\r\n1.63e-07 3.00e+01 output\r\n```\r\n\r\nfp16:\r\n\r\n```\r\n lm_head Linear\r\n0.00e+00 1.13e+01 weight\r\n6.76e-07 5.38e+00 input[0]\r\n0.00e+00 3.08e+01 output\r\n```\r\n\r\nAs you can see `4.66e-10` under fp16 underflows into `0.0`.\r\n\r\n**edit:** well, actually this would be the case if we did `model.half()` (which is what deepspeed does, and that's where it'd immediately underflow on the very first use), so here it's probably something else. I will need some time to try to understand what's going on here.\r\n\r\nThis is from WIP PR https://github.com/huggingface/transformers/pull/11274 - still polishing some nuances but should be ready soon.\r\n\r\nLet me check `google/pegasus-xsum`",
"Regarding the cnn_dailymail scores, please see this issue #6844",
"@chz816, meanwhile could you please give me a way to reproduce your case? Ideally with some public dataset and best with the current version of the examples (master or last release), which would be using `examples/seq2seq/run_summarization.py`\r\n\r\ne.g.:\r\n```\r\npython examples/seq2seq/run_summarization.py --model_name_or_path google/pegasus-cnn_dailymail \\\r\n--do_train --do_eval --dataset_name cnn_dailymail --dataset_config \"3.0.0\" --source_prefix \\\r\n\"summarize: \" --output_dir /tmp/tst-summarization --per_device_train_batch_size=1 \\\r\n--per_device_eval_batch_size=1 --overwrite_output_dir --predict_with_generate \r\n```",
"Thank you, @patil-suraj. \r\n\r\nOh, this is the legacy script so it does do:\r\n\r\n```\r\n if fp16:\r\n model = model.half()\r\n```\r\n\r\n",
"```\r\nwget https://cdn-datasets.huggingface.co/summarization/pegasus_data/cnn_dailymail.tar.gz\r\ntar -xvzf cnn_dailymail.tar.gz\r\npython -m torch.distributed.launch --nproc_per_node=1 run_distributed_eval.py \\\r\n--model_name google/pegasus-cnn_dailymail --save_dir output_dir --data_dir cnn_dailymail \\\r\n--bs 8 --fp16\r\n```\r\n\r\nSo the detection is quick (had to bolt it on manually, since this script isn't using the `Trainer`):\r\n```\r\nDetected inf/nan during batch_number=0\r\nLast 10 forward frames:\r\nabs min abs max metadata\r\n model.encoder.layers.14.fc1 Linear\r\n0.00e+00 1.88e+01 weight\r\n2.73e-05 2.54e+00 bias\r\n5.96e-08 9.05e+00 input[0]\r\n0.00e+00 3.16e+02 output\r\n model.encoder.layers.14.fc2 Linear\r\n5.96e-08 3.29e+01 weight\r\n5.40e-03 2.66e+01 bias\r\n0.00e+00 1.03e+02 input[0]\r\n0.00e+00 8.00e+03 output\r\n model.encoder.layers.14 PegasusEncoderLayer\r\n0.00e+00 6.45e+04 input[0]\r\n0.00e+00 0.00e+00 input[1]\r\n0.00e+00 6.45e+04 output[0]\r\n model.encoder.layers.15.self_attn_layer_norm LayerNorm\r\n5.63e-03 3.85e-01 weight\r\n1.69e-05 2.49e-01 bias\r\n0.00e+00 6.45e+04 input[0]\r\n0.00e+00 1.50e+00 output\r\n model.encoder.layers.15.self_attn.q_proj Linear\r\n8.34e-07 2.95e+00 weight\r\n0.00e+00 0.00e+00 bias\r\n0.00e+00 1.50e+00 input[0]\r\n5.96e-08 8.52e+00 output\r\n model.encoder.layers.15.self_attn.k_proj Linear\r\n2.38e-07 1.85e+00 weight\r\n0.00e+00 0.00e+00 bias\r\n0.00e+00 1.50e+00 input[0]\r\n1.19e-07 9.30e+00 output\r\n model.encoder.layers.15.self_attn.v_proj Linear\r\n5.96e-08 4.03e+00 weight\r\n0.00e+00 0.00e+00 bias\r\n0.00e+00 1.50e+00 input[0]\r\n6.56e-07 2.95e+01 output\r\n model.encoder.layers.15.self_attn.out_proj Linear\r\n5.96e-08 2.25e+01 weight\r\n0.00e+00 0.00e+00 bias\r\n5.96e-08 1.25e+01 input[0]\r\n3.58e-07 1.29e+03 output\r\n model.encoder.layers.15.self_attn PegasusAttention\r\n3.58e-07 1.29e+03 output[0]\r\n None output[1]\r\n None output[2]\r\n model.encoder.layers.15.final_layer_norm LayerNorm\r\n7.32e-02 2.69e+00 weight\r\n2.00e-05 1.02e+00 bias\r\n0.00e+00 inf input[0]\r\n nan nan output\r\n```",
"I'm able to reproduce this with the \"modern\" version of the script:\r\n\r\n```\r\nrm -rf output_dir; USE_TF=0 PYTHONPATH=src python examples/seq2seq/run_summarization.py \\\r\n--model_name_or_path google/pegasus-cnn_dailymail --do_eval --dataset_name cnn_dailymail \\\r\n--dataset_config \"3.0.0\" --output_dir output_dir \\\r\n--per_device_eval_batch_size=16 --predict_with_generate --fp16_full_eval --max_val_samples 10\r\n\r\n[...]\r\n\r\n***** eval metrics *****\r\n eval_gen_len = 9.0\r\n eval_loss = nan\r\n eval_mem_cpu_alloc_delta = -55MB\r\n eval_mem_cpu_peaked_delta = 55MB\r\n eval_mem_gpu_alloc_delta = 1089MB\r\n eval_mem_gpu_peaked_delta = 7241MB\r\n eval_rouge1 = 0.0\r\n eval_rouge2 = 0.0\r\n eval_rougeL = 0.0\r\n eval_rougeLsum = 0.0\r\n eval_runtime = 0:00:07.71\r\n eval_samples = 10\r\n eval_samples_per_second = 1.295\r\n init_mem_cpu_alloc_delta = 0MB\r\n init_mem_cpu_peaked_delta = 0MB\r\n init_mem_gpu_alloc_delta = 0MB\r\n init_mem_gpu_peaked_delta = 0MB\r\n```",
"Thank you for your response @stas00 ! Yeah I am able to resolve the issue without ```--fp16```, but I am still little confused why ```google/pegasus-xsum``` works well with ```---fp16``` argument, since they are from the same seq2seq model. Any ideas? Thank you!",
"For some reason I can't even run `google/pegasus-xsum` https://github.com/huggingface/transformers/issues/11344, so I'm not able to look inside.\r\n\r\nI can only guess that perhaps `google/pegasus-xsum` was trained in mixed precision fp16?"
] | 1,618 | 1,619 | 1,619 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.0 and 4.5.1
- Platform: linux
- Python version: 3.6
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): NA
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (and I also try to not use distributed but problem exists)
### Who can help
@patrickvonplaten, @patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): google/pegasus-cnn_dailymail
The problem arises when using:
* [x] the official example scripts: run_distributed_eval.py from https://github.com/huggingface/transformers/tree/master/examples/legacy/seq2seq
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: summarization with ROUGE
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am trying to generate the summaries from Pegasus on CNN/DM and XSUM datasets. I use the same dataset shared by HuggingFace (from README.md in https://github.com/huggingface/transformers/tree/master/examples/legacy/seq2seq). My experiments are run on 3 V100 GPUs. I use ```google/pegasus-cnn_dailymail``` for CNN/DM and ```google/pegasus-xsum``` for XSUM.
1. The results on XSUM are perfect. I ran the following code and received the ROUGE scores: ```{'rouge1': 47.0271, 'rouge2': 24.4924, 'rougeL': 39.2529, 'n_obs': 11333, 'seconds_per_sample': 0.035, 'n_gpus': 3}```
```bash
python -m torch.distributed.launch --nproc_per_node=3 run_distributed_eval.py \
--model_name google/pegasus-xsum \
--save_dir $OUTPUT_DIR \
--data_dir $DATA_DIR \
--bs 64 \
--fp16
```
2. I was expecting similar SOTA performance on CNN/DM, so I ran the following code and received: ```{"n_gpus": 3, "n_obs": 11490, "rouge1": 0.1602, "rouge2": 0.084, "rougeL": 0.1134, "seconds_per_sample": 0.1282}```.
(Note: the batch size is different here due to memory limitations. Although the experiments are performed on the same devices, CNN/DM requires more memory given the nature of the dataset itself.)
```bash
python -m torch.distributed.launch --nproc_per_node=3 run_distributed_eval.py \
--model_name google/pegasus-cnn_dailymail \
--save_dir $OUTPUT_DIR \
--data_dir $DATA_DIR \
--bs 32 \
--fp16
```
3. I looked at the generated ```test_generations.txt``` file to try to figure out why ```google/pegasus-cnn_dailymail``` doesn't work, and found that most of the lines in ```test_generations.txt``` are blank. (Please see the attached image for an example.)
<img width="682" alt="image" src="https://user-images.githubusercontent.com/26696253/115087890-1b6cac80-9edd-11eb-8289-d45cbcf4f6dc.png">
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
It is so weird that ```google/pegasus-xsum``` works perfectly while ```google/pegasus-cnn_dailymail``` does not generate summaries successfully. I was confused, so I switched the transformers version (4.2.0 and 4.5.1) and re-ran the experiments on different GPUs, but the problem persists. Could you please give me any suggestions? Thank you!
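For what it's worth, here is a quick check that could be run (a sketch based on the assumption that fp16 overflow is the culprit, not a verified diagnosis; it needs a GPU and sentencepiece installed):
```python
# Run one fp16 forward pass and look for NaNs in the logits, which would explain
# the blank generations under --fp16.
import torch
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

tok = PegasusTokenizer.from_pretrained("google/pegasus-cnn_dailymail")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-cnn_dailymail").half().cuda()
inputs = tok("A short test article about the weather.", return_tensors="pt").to("cuda")
out = model(**inputs, decoder_input_ids=inputs["input_ids"][:, :1])
print("any NaN in logits:", torch.isnan(out.logits).any().item())
```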
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11289/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11288 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11288/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11288/comments | https://api.github.com/repos/huggingface/transformers/issues/11288/events | https://github.com/huggingface/transformers/issues/11288 | 860,172,377 | MDU6SXNzdWU4NjAxNzIzNzc= | 11,288 | Question about T5-11b model weights | {
"login": "lengstrom",
"id": 760865,
"node_id": "MDQ6VXNlcjc2MDg2NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/760865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lengstrom",
"html_url": "https://github.com/lengstrom",
"followers_url": "https://api.github.com/users/lengstrom/followers",
"following_url": "https://api.github.com/users/lengstrom/following{/other_user}",
"gists_url": "https://api.github.com/users/lengstrom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lengstrom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lengstrom/subscriptions",
"organizations_url": "https://api.github.com/users/lengstrom/orgs",
"repos_url": "https://api.github.com/users/lengstrom/repos",
"events_url": "https://api.github.com/users/lengstrom/events{/privacy}",
"received_events_url": "https://api.github.com/users/lengstrom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | NONE | null | Hi, where do the T5-11b model weights come from? Are they from the original paper or have they been trained on the community release version of C4 independently? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11288/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11287 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11287/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11287/comments | https://api.github.com/repos/huggingface/transformers/issues/11287/events | https://github.com/huggingface/transformers/issues/11287 | 860,076,552 | MDU6SXNzdWU4NjAwNzY1NTI= | 11,287 | Zero-shot pipeline feature extraction | {
"login": "rodrigoheck",
"id": 29047455,
"node_id": "MDQ6VXNlcjI5MDQ3NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/29047455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rodrigoheck",
"html_url": "https://github.com/rodrigoheck",
"followers_url": "https://api.github.com/users/rodrigoheck/followers",
"following_url": "https://api.github.com/users/rodrigoheck/following{/other_user}",
"gists_url": "https://api.github.com/users/rodrigoheck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rodrigoheck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rodrigoheck/subscriptions",
"organizations_url": "https://api.github.com/users/rodrigoheck/orgs",
"repos_url": "https://api.github.com/users/rodrigoheck/repos",
"events_url": "https://api.github.com/users/rodrigoheck/events{/privacy}",
"received_events_url": "https://api.github.com/users/rodrigoheck/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The ",
"> The\r\n\r\nYes?",
"Answering my question: pipeline returns the model as well. You just have to use it directly to extract the hidden states."
] | 1,618 | 1,618 | 1,618 | NONE | null | Is it possible to extract the hidden states representation from the zero-shot pipeline? I have these two tasks: feature extraction and zero-shot classification. But I don't want to load the same model twice, since it is a major burden on GPU memory. Any suggestions to how I can do both tasks without having to load it twice? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11287/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11286 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11286/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11286/comments | https://api.github.com/repos/huggingface/transformers/issues/11286/events | https://github.com/huggingface/transformers/pull/11286 | 859,985,842 | MDExOlB1bGxSZXF1ZXN0NjE2OTcyNjc0 | 11,286 | Trainer support for IterableDataset for evaluation and predict | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | COLLABORATOR | null | # What does this PR do?
This PR rewrites the entirety of the evaluation loop to add support for `IterableDataset`. The main problem with the current training loop is that, in distributed settings, it expects the indices of the evaluation set to come like this:
- `[0, 1, 2, 3, 4, 5, 6, 7, ...., 99]` for process 0
- `[100, 101, 102, 103, 104, 105, 106, 107, ...., 199]` for process 1
(if we have 200 samples)
In an `IterableDataset` we don't know the length at the beginning of the process, so we can't cleanly cut the indices in half like that. Therefore, the indices will come like this (with a batch size of 4):
- `[0, 1, 2, 3, 8, 9, 10, 11, ...., 192, 193, 194, 195]` for process 0
- `[4, 5, 6, 7, 12, 13, 14, 15, ...., 196, 197, 198, 199]` for process 1
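As an illustration (a standalone sketch, not the actual sampler code in this PR), the interleaved index pattern above can be generated like this:
```python
# Each process takes whole batches in turn: process i gets batch i, then batch i + num_processes, etc.
def shard_indices(num_samples, batch_size, num_processes, process_index):
    indices = []
    for start in range(process_index * batch_size, num_samples, batch_size * num_processes):
        indices.extend(range(start, min(start + batch_size, num_samples)))
    return indices

# With 200 samples, a batch size of 4 and 2 processes:
# shard_indices(200, 4, 2, 0) -> [0, 1, 2, 3, 8, 9, 10, 11, ...]
# shard_indices(200, 4, 2, 1) -> [4, 5, 6, 7, 12, 13, 14, 15, ...]
```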
The rewrite of the evaluation loop is done to:
- change the sampling indices in a normal `Dataset` to be the same as an `IterableDataset`
- change the way predictions and labels are gathered accordingly
- avoid having one evaluation loop for `Dataset` and one for `IterableDataset`
To avoid any breaking change:
- the old evaluation loop is still there with the same name (for people who subclass Trainer) and can be used if one passes the flag `--use_legacy_prediction_loop`.
- the old `DistributedSequentialSampler` and `DistributedTensorGatherer` are left and deprecated | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11286/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11286",
"html_url": "https://github.com/huggingface/transformers/pull/11286",
"diff_url": "https://github.com/huggingface/transformers/pull/11286.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11286.patch",
"merged_at": 1618603318000
} |
https://api.github.com/repos/huggingface/transformers/issues/11285 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11285/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11285/comments | https://api.github.com/repos/huggingface/transformers/issues/11285/events | https://github.com/huggingface/transformers/issues/11285 | 859,916,821 | MDU6SXNzdWU4NTk5MTY4MjE= | 11,285 | `resize_token_embeddings` not taken into account in `save_pretrained` for `EncoderDecoderModel` | {
"login": "rahular",
"id": 1104544,
"node_id": "MDQ6VXNlcjExMDQ1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1104544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahular",
"html_url": "https://github.com/rahular",
"followers_url": "https://api.github.com/users/rahular/followers",
"following_url": "https://api.github.com/users/rahular/following{/other_user}",
"gists_url": "https://api.github.com/users/rahular/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahular/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahular/subscriptions",
"organizations_url": "https://api.github.com/users/rahular/orgs",
"repos_url": "https://api.github.com/users/rahular/repos",
"events_url": "https://api.github.com/users/rahular/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahular/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is caused by the EncoderDecoderConfig which initializes independent objects ([link](https://github.com/huggingface/transformers/blob/d9c62047a8d75e18d2849d345ab3394875a712ef/src/transformers/models/encoder_decoder/configuration_encoder_decoder.py#L84)) instead of utilizing the already existing ones.\r\n\r\nYou can fix that for the moment by calling:\r\n```\r\nmodel.config.decoder = model.decoder.config\r\nmodel.config.encoder = model.encoder.config\r\n```\r\nPR will follow."
] | 1,618 | 1,619 | 1,619 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.5.0
- Platform: Darwin-17.7.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
I am extending the embeddings of the decoder of an `EncoderDecoderModel` model. When I save it, the config does not reflect the new size. However, it works fine when I try doing the same for non `EncoderDecoderModel` models.
## To reproduce
```
In [1]: model = t.EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')
In [2]: model.decoder.bert.embeddings.word_embeddings
Out[2]: Embedding(30522, 768, padding_idx=0)
In [3]: model.decoder.resize_token_embeddings(30522+100)
Out[3]: Embedding(30622, 768)
In [4]: model.save_pretrained('test-bert')
```
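A sketch of the workaround suggested in the comments (untested here beyond the snippet above): point the top-level `EncoderDecoderConfig` back at the live sub-model configs before saving, so the resized vocabulary size ends up in `config.json`.
```
model.config.encoder = model.encoder.config
model.config.decoder = model.decoder.config
model.save_pretrained('test-bert')
```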
## Expected behavior
The updated embedding size should be saved in `config.json`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11285/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11284 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11284/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11284/comments | https://api.github.com/repos/huggingface/transformers/issues/11284/events | https://github.com/huggingface/transformers/issues/11284 | 859,914,569 | MDU6SXNzdWU4NTk5MTQ1Njk= | 11,284 | Loading from checkpoint seems to hang indefinitely for Roberta | {
"login": "elie-h",
"id": 10990339,
"node_id": "MDQ6VXNlcjEwOTkwMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/10990339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elie-h",
"html_url": "https://github.com/elie-h",
"followers_url": "https://api.github.com/users/elie-h/followers",
"following_url": "https://api.github.com/users/elie-h/following{/other_user}",
"gists_url": "https://api.github.com/users/elie-h/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elie-h/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elie-h/subscriptions",
"organizations_url": "https://api.github.com/users/elie-h/orgs",
"repos_url": "https://api.github.com/users/elie-h/repos",
"events_url": "https://api.github.com/users/elie-h/events{/privacy}",
"received_events_url": "https://api.github.com/users/elie-h/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The script is not hanging, it is skipping the first 660,000 batches since you are resuming training from there which takes a lot of time. If you don't mind continuing training with the same data, you can use the option `ignore_data_skip=True` in your training arguments.",
"@eh-93 Do you remember how much time it took to train that checkpoint? \r\n@sgugger How about we add a progress bar to make the trainer more user-friendly? ",
"@sgugger got it, thanks\r\n@cronoik - around 5 days to get to that checkpoint\r\n\r\nAfter 28 hours the training resumed",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"That's not a good solution. A better solution that should be implemented is to pass some boolean to the CustomDataset, telling it that we are now in a 'skip' mode, so that the CustomDataset could prevent from doing expensive and unneeded steps in the skipping phase, like for example tokenize words. Does HF work on such solution?"
] | 1,618 | 1,676 | 1,622 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.0
- Platform: Linux-5.8.0-48-generic-x86_64-with-glibc2.29
- Python version: 3.8.7
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Yes - RTX 3090
- Using distributed or parallel set-up in script?: No
Models:
- albert, bert, xlm: @LysandreJik
- Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): Roberta
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset
## To reproduce
I'm trying to resume training my Roberta model from a checkpoint. When the training initialises it seems to pick up the last checkpoint:
```
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 0
Continuing training from global step 660000
Will skip the first 0 epochs then the first 660000 batches in the first epoch.
```
After that it just hangs: training does not start, there is no further logging, and GPU utilisation stays at 0. I've left it for over 6 hours and there is still no progress.
I've tried both loading directly from a checkpoint and initialising the trainer with checkpoint=True: trainer.train("ml/models/araberto/checkpoint-660000") and trainer.train(checkpoint=True)
Code below:
```
from datasets import load_dataset
from datasets import ClassLabel, Value, Sequence
from transformers import RobertaTokenizerFast  # import needed for the tokenizer below
tokenizer = RobertaTokenizerFast.from_pretrained(output_path)
dataset = load_dataset('text',
data_files={
"train":[str(x) for x in Path(f"{dataset_path}/train").glob("*.txt")],
"test": str(Path(f"{dataset_path}/test.txt"))
})
def encode(batch):
tokenized = tokenizer(batch.get("text", ""), padding="max_length", truncation=True, max_length=max_length)
return tokenized
dataset.set_transform(encode)
import torch
if not torch.cuda.is_available():
raise Exception("GPU not available")
from transformers import RobertaForMaskedLM
from transformers import RobertaConfig
from transformers import DataCollatorForLanguageModeling
config = RobertaConfig(
vocab_size=vocab_size,
max_position_embeddings=514,
num_attention_heads=12,
num_hidden_layers=6,
type_vocab_size=1,
)
model = RobertaForMaskedLM(config=config)
model.num_parameters()
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
from transformers import Trainer, TrainingArguments
training_args = TrainingArguments(
output_dir=output_path,
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=48,
save_steps=10_000,
save_total_limit=2,
remove_unused_columns=False,
fp16=True,
fp16_backend="amp"
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=dataset["train"],
eval_dataset=dataset["test"]
)
%%time
torch.cuda.is_available()
torch.cuda.empty_cache()
trainer.train("ml/models/roberto/checkpoint-660000")
```
Debug logs:
```
Loading model from ml/models/roberto/checkpoint-660000).
loading configuration file ml/models/roberto/checkpoint-660000/config.json
Model config RobertaConfig {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.5.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 51000
}
loading weights file ml/models/roberto/checkpoint-660000/pytorch_model.bin
All model checkpoint weights were used when initializing RobertaForMaskedLM.
All the weights of RobertaForMaskedLM were initialized from the model checkpoint at ml/models/roberto/checkpoint-660000.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForMaskedLM for predictions without further training.
***** Running training *****
Num examples = 808405026
Num Epochs = 1
Instantaneous batch size per device = 48
Total train batch size (w. parallel, distributed & accumulation) = 48
Gradient Accumulation steps = 1
Total optimization steps = 16841772
Continuing training from checkpoint, will skip to saved global_step
Continuing training from epoch 0
Continuing training from global step 660000
Will skip the first 0 epochs then the first 660000 batches in the first epoch.
```
Iteration over the dataset seems fine:
```
x = dataset["train"][808405025]
print(x)
{'input_ids': [0, 14527, 606, 606, 503, 616, 13117, 1319, 7537, 93, 2506, 7712, 4897, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
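For reference, a sketch of the workaround suggested in the comments (only applicable if reusing the same data order is acceptable): pass `ignore_data_skip=True` so the Trainer does not fast-forward through the first 660,000 batches.
```
training_args = TrainingArguments(
    output_dir=output_path,
    overwrite_output_dir=True,
    num_train_epochs=1,
    per_device_train_batch_size=48,
    save_steps=10_000,
    save_total_limit=2,
    remove_unused_columns=False,
    fp16=True,
    fp16_backend="amp",
    ignore_data_skip=True,  # skip the slow replay of already-seen batches
)
```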
## Expected behavior
Training to resume from the checkpoint | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11284/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11283 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11283/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11283/comments | https://api.github.com/repos/huggingface/transformers/issues/11283/events | https://github.com/huggingface/transformers/issues/11283 | 859,879,552 | MDU6SXNzdWU4NTk4Nzk1NTI= | 11,283 | Beam search decoding and language model integration for Wav2Vec2ForCTC models | {
"login": "tanujjain",
"id": 9531254,
"node_id": "MDQ6VXNlcjk1MzEyNTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9531254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanujjain",
"html_url": "https://github.com/tanujjain",
"followers_url": "https://api.github.com/users/tanujjain/followers",
"following_url": "https://api.github.com/users/tanujjain/following{/other_user}",
"gists_url": "https://api.github.com/users/tanujjain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanujjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanujjain/subscriptions",
"organizations_url": "https://api.github.com/users/tanujjain/orgs",
"repos_url": "https://api.github.com/users/tanujjain/repos",
"events_url": "https://api.github.com/users/tanujjain/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanujjain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey @tanujjain, \r\n\r\nWe are very interested in adding beam search for Wav2Vec2 + LM support in general, but sadly don't find the time to do so at the moment. We would be really happy about a contribution if you want to give it a try.\r\n\r\nAs a start we could add the logic to `examples/research_projects/wav2vec2` and if it's clean then move to upstream to `src/transformers`",
"@patrickvonplaten Sure, I'll give it a go.",
"Hello @patrickvonplaten and @tanujjain,\r\n\r\nI have already worked with prefix beam search decoding with language models for wav2vec2 and would like to implement it for huggingface, if you guys are okay with it.",
"PRs are very much welcome!",
"Any update on this? Specifically any transformer based lm that one can use with wav2vec 2.0?",
"As a quick solution, I used the code by original author of the algo which can be found [here](https://gist.github.com/awni/56369a90d03953e370f3964c826ed4b0).\r\n\r\n``` python\r\nimport numpy as np\r\nimport math\r\nimport collections\r\n\r\nNEG_INF = -float(\"inf\")\r\n\r\ndef make_new_beam():\r\n fn = lambda : (NEG_INF, NEG_INF)\r\n return collections.defaultdict(fn)\r\n\r\ndef logsumexp(*args):\r\n \"\"\"\r\n Stable log sum exp.\r\n \"\"\"\r\n if all(a == NEG_INF for a in args):\r\n return NEG_INF\r\n a_max = max(args)\r\n lsp = math.log(sum(math.exp(a - a_max)\r\n for a in args))\r\n return a_max + lsp\r\n\r\ndef decode(probs, beam_size=100, blank=0):\r\n \"\"\"\r\n Performs inference for the given output probabilities.\r\n Arguments:\r\n probs: The output probabilities (e.g. post-softmax) for each\r\n time step. Should be an array of shape (time x output dim).\r\n beam_size (int): Size of the beam to use during inference.\r\n blank (int): Index of the CTC blank label.\r\n Returns the output label sequence and the corresponding negative\r\n log-likelihood estimated by the decoder.\r\n \"\"\"\r\n T, S = probs.shape\r\n probs = np.log(probs)\r\n \r\n # Elements in the beam are (prefix, (p_blank, p_no_blank))\r\n # Initialize the beam with the empty sequence, a probability of\r\n # 1 for ending in blank and zero for ending in non-blank\r\n # (in log space).\r\n beam = [(tuple(), (0.0, NEG_INF))]\r\n \r\n for t in range(T): # Loop over time\r\n next_beam = make_new_beam() # A default dictionary to store the next step candidates.\r\n for s in range(S): # Loop over vocab\r\n p = probs[t, s]\r\n # The variables p_b and p_nb are respectively the\r\n # probabilities for the prefix given that it ends in a\r\n # blank and does not end in a blank at this time step.\r\n for prefix, (p_b, p_nb) in beam: # Loop over beam\r\n # If we propose a blank the prefix doesn't change.\r\n # Only the probability of ending in blank gets updated\r\n if s == blank:\r\n n_p_b, n_p_nb = next_beam[prefix]\r\n n_p_b = logsumexp(n_p_b, p_b + p, p_nb + p)\r\n next_beam[prefix] = (n_p_b, n_p_nb)\r\n continue\r\n # Extend the prefix by the new character s and add it to\r\n # the beam. Only the probability of not ending in blank\r\n # gets updated.\r\n end_t = prefix[-1] if prefix else None\r\n n_prefix = prefix + (s,)\r\n n_p_b, n_p_nb = next_beam[n_prefix]\r\n if s != end_t:\r\n n_p_nb = logsumexp(n_p_nb, p_b + p, p_nb + p)\r\n else:\r\n # We don't include the previous probability of not ending\r\n # in blank (p_nb) if s is repeated at the end. The CTC\r\n # algorithm merges characters not separated by a blank.\r\n n_p_nb = logsumexp(n_p_nb, p_b + p)\r\n \r\n # *NB* this would be a good place to include an LM score.\r\n next_beam[n_prefix] = (n_p_b, n_p_nb) ## add lm here\r\n # If s is repeated at the end we also update the unchanged\r\n # prefix. 
This is the merging case.\r\n if s == end_t:\r\n n_p_b, n_p_nb = next_beam[prefix]\r\n n_p_nb = logsumexp(n_p_nb, p_nb + p)\r\n next_beam[prefix] = (n_p_b, n_p_nb)\r\n # Sort and trim the beam before moving on to the\r\n # next time-step.\r\n beam = sorted(next_beam.items(),\r\n key=lambda x : logsumexp(*x[1]),\r\n reverse=True)\r\n beam = beam[:beam_size]\r\n best = beam[0]\r\n return best[0], -logsumexp(*best[1])\r\n\r\n# Try the algo on an example\r\ntime = 50\r\noutput_dim = 20\r\nbatch_size = 16\r\n\r\nbatch_probs = np.random.rand(batch_size, time, output_dim)\r\ndecoded_batch = []\r\nfor b in batch_probs:\r\n norm_b = b/np.sum(b, axis=1, keepdims=True)\r\n decoded_batch.append(decode(norm_b, beam_size=3)[0])\r\n```\r\n\r\nTrying to add a language model (for german) like so:\r\n``` python\r\n\r\nfrom transformers import AutoTokenizer, AutoModelWithLMHead\r\ntokenizer_de = AutoTokenizer.from_pretrained(\"dbmdz/german-gpt2\")\r\nmodel_de = AutoModelWithLMHead.from_pretrained(\"dbmdz/german-gpt2\", return_dict_in_generate=True)\r\n\r\n\r\ndef lm_prob(sentence):\r\n last_word_token = tokenizer_de.encode(sentence.split(' ')[-1])\r\n earlier_sentence = ' '.join(sentence.split(' ')[:-1])\r\n input_ids_earlier_sent = tokenizer_de.encode(earlier_sentence, return_tensors=\"pt\") # tokenize rest of the sentence\r\n generated_outputs_lm = model_de.generate(input_ids_earlier_sent,\r\n max_length=len(input_ids_earlier_sent[0]) + 1,\r\n do_sample=True, \r\n num_return_sequences=1,\r\n output_scores=True)\r\n sftmax_prob_lm = generated_outputs_lm.scores[0].softmax(-1)\r\n prob = sftmax_prob_lm[0, last_word_token]\r\n return prob\r\n```\r\n\r\nThe lm snippet should give the prob of having the last word in a beam given all the other preceding characters, but the probabilities for the words I expect are almost always close to zero, so still working on figuring out how better to use the LM. Hence, haven't integrated the LM with the above snippet.\r\n\r\nAs for a decent implementation for beamsearchforctc, I'm thinking on the lines of running the above algo (not the same code obviously) with each sequence in the batch running an independent beamsearch on a different thread/process. \r\n\r\n**Anyone with less complex implementational ideas?** \r\n\r\nFound another implementation [here](https://github.com/githubharald/CTCDecoder/blob/master/src/BeamSearch.py) (without consideration for batch inference).\r\n",
"> As for a decent implementation for beamsearchforctc, I'm thinking on the lines of running the above algo (not the same code obviously) with each sequence in the batch running an independent beamsearch on a different thread/process.\r\n\r\nThere you go: https://github.com/mozilla/DeepSpeech/blob/master/native_client/ctcdecode/ctc_beam_search_decoder.cpp#L287\r\n\r\nI'd highly encourage to also consider returning the frames where the probability of the token spikes as it can be used for alignment. Mozilla did it in their implementation and it works quite nicely.\r\n\r\nIs there any restriction on the programming language? The computational complexity of the algorithm is quite high and ctc beam search decoding often the bottleneck.",
"I think we can try to add a dependency to wav2letter: https://github.com/flashlight/wav2letter and add LM decoding as explained here on fairseq: https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md#evaluating-a-ctc-model . It would be awesome if we manage to create a nice `run_wav2vec2_eval_with_lm.py` script that people can use out of the box with every wav2vec2 model. We can also make a nice blog post out of this and publish it on our blog :-) \r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"ping",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"For future developers, you may find this implementation useful. I used the simplest code possible to develop it\r\nhttps://github.com/farisalasmary/wav2vec2-kenlm\r\n",
"I'm now working on this topic full time. \r\n\r\nWe will most likely foster a closer collaboration between [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and Transformers. [Here](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode) is a github repo that shows how to use `pyctcdecode` with Wav2Vec2 for LM supported decoding. It works quite well with KenLM."
] | 1,618 | 1,636 | 1,625 | NONE | null | 1. AFAIK, `Wav2Vec2ForCTCTokenizer.decode` method only provides greedy decoding. Is there a beam search implementation for CTC available yet?
2. Also, as is common practice in ASR modelling, language models are generally added on top of the acoustic model. It would also be nice to have the possibility of attaching a pretrained language model that gets taken into account at beam-search decoding time. Not sure if there's an out-of-the-box solution implemented for that yet?
I'm also aware of efforts to integrate a language model in #10794 and have had a look at the notebook [here](https://github.com/voidful/huggingface_notebook/blob/main/xlsr_gpt.ipynb). Although it is a nice, simple way to integrate an LM, it is suboptimal when considering CTC semantics. A more appropriate approach would be the one described in [this](https://arxiv.org/pdf/1408.2873.pdf) paper and explained in [this](https://distill.pub/2017/ctc/) Distill blog post. It would be great to have these features added (if they are not already there and I somehow missed them). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11283/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11283/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11282 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11282/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11282/comments | https://api.github.com/repos/huggingface/transformers/issues/11282/events | https://github.com/huggingface/transformers/issues/11282 | 859,853,549 | MDU6SXNzdWU4NTk4NTM1NDk= | 11,282 | tf.function and half precision fails with Roberta models | {
"login": "AWilcke",
"id": 11478679,
"node_id": "MDQ6VXNlcjExNDc4Njc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11478679?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AWilcke",
"html_url": "https://github.com/AWilcke",
"followers_url": "https://api.github.com/users/AWilcke/followers",
"following_url": "https://api.github.com/users/AWilcke/following{/other_user}",
"gists_url": "https://api.github.com/users/AWilcke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AWilcke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AWilcke/subscriptions",
"organizations_url": "https://api.github.com/users/AWilcke/orgs",
"repos_url": "https://api.github.com/users/AWilcke/repos",
"events_url": "https://api.github.com/users/AWilcke/events{/privacy}",
"received_events_url": "https://api.github.com/users/AWilcke/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for sharing this issue!\r\n\r\nThe issue here indeed comes from 1.0 that is not from the same dtype than `extended_attention_mask`. A better fix would be to align this line such as the other models by replacing it with:\r\n```\r\nextended_attention_mask = tf.cast(extended_attention_mask, dtype=embedding_output.dtype)\r\none_cst = tf.constant(1.0, dtype=embedding_output.dtype)\r\nten_thousand_cst = tf.constant(-10000.0, dtype=embedding_output.dtype)\r\nextended_attention_mask = tf.multiply(tf.subtract(one_cst, extended_attention_mask), ten_thousand_cst)\r\n```\r\nExtracted from BERT. I will do a fix.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-71-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No (but also fails with GPU)
- Using distributed or parallel set-up in script?: No
### Who can help
As far as I can tell, this worked before #9788, so maybe @jplu can help. Also this is a TF issue so @Rocketknight1 .
## Information
Model I am using (Bert, XLNet ...): TFRoberta, this also happens with TFXLMRoberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```py3
import tensorflow as tf
from transformers.models.roberta import RobertaTokenizerFast, TFRobertaModel
@tf.function
def get_embeddings(
tokenizer: RobertaTokenizerFast, model: TFRobertaModel, text: str
) -> tf.Tensor:
return model(**tokenizer(text, return_tensors="tf")).last_hidden_state
if __name__ == "__main__":
tf.keras.mixed_precision.set_global_policy("float16")
name = "roberta-base"
tokenizer = RobertaTokenizerFast.from_pretrained(name)
model = TFRobertaModel.from_pretrained(name)
embeddings = get_embeddings(
tokenizer=tokenizer,
model=model,
text="tf.function and mixed precision",
)
print(embeddings)
```
Traceback:
```
File "roberta_bug.py", line 17, in <module>
embeddings = get_embeddings(
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 725, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3196, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
roberta_bug.py:9 get_embeddings *
return model(**tokenizer(text, return_tensors="tf")).last_hidden_state
/home/arthur/reinfer/env/lib/python3.8/site-packages/transformers/models/roberta/modeling_tf_roberta.py:744 call *
outputs = self.roberta(
/home/arthur/reinfer/env/lib/python3.8/site-packages/transformers/models/roberta/modeling_tf_roberta.py:544 call *
extended_attention_mask = tf.multiply(tf.subtract(1.0, extended_attention_mask), -10000.0)
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **
return target(*args, **kwargs)
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py:561 subtract
return gen_math_ops.sub(x, y, name)
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py:10316 sub
_, _, _op, _outputs = _op_def_library._apply_op_helper(
/home/arthur/reinfer/env/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py:555 _apply_op_helper
raise TypeError(
TypeError: Input 'y' of 'Sub' Op has type float16 that does not match type float32 of argument 'x'.
```
## Expected behavior
The model should calculate embeddings correctly. This is due to `tf.subtract(1.0, extended_attention_mask)` checking that `1.0` and `extended_attention_mask` have the same type, but in `float16` mode they do not. Reverting to `1.0 - extended_attention_mask` fixes the issue.
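To isolate the dtype clash from the model entirely, here is a minimal sketch (the exact exception type may differ between eager and graph mode, so treat the error handling as illustrative):

```python
# Minimal illustration of the failure mode: a Python float passed to tf.subtract
# is converted to float32, which then clashes with a float16 tensor, while the
# overloaded operator casts the constant to the tensor's dtype.
import tensorflow as tf

mask = tf.ones((1, 4), dtype=tf.float16)

try:
    tf.subtract(1.0, mask)
except Exception as e:  # TypeError in graph mode, InvalidArgumentError in eager mode
    print(type(e).__name__, e)

print(1.0 - mask)  # works: the constant is cast to float16 by the operator overload
```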
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11282/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11281 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11281/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11281/comments | https://api.github.com/repos/huggingface/transformers/issues/11281/events | https://github.com/huggingface/transformers/issues/11281 | 859,831,794 | MDU6SXNzdWU4NTk4MzE3OTQ= | 11,281 | Adding and consequently removing tokens leads to incorrect number of input embeddings | {
"login": "bjoernhommel",
"id": 34039172,
"node_id": "MDQ6VXNlcjM0MDM5MTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/34039172?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bjoernhommel",
"html_url": "https://github.com/bjoernhommel",
"followers_url": "https://api.github.com/users/bjoernhommel/followers",
"following_url": "https://api.github.com/users/bjoernhommel/following{/other_user}",
"gists_url": "https://api.github.com/users/bjoernhommel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bjoernhommel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bjoernhommel/subscriptions",
"organizations_url": "https://api.github.com/users/bjoernhommel/orgs",
"repos_url": "https://api.github.com/users/bjoernhommel/repos",
"events_url": "https://api.github.com/users/bjoernhommel/events{/privacy}",
"received_events_url": "https://api.github.com/users/bjoernhommel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @doubleplusnice,\r\n\r\nI can't really reproduce the bug. \r\n\r\nWhen running your code the output I get is:\r\n\r\n```\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\n/home/patrick/python_bin/transformers/generation_utils.py:963: UserWarning: `max_length` is deprecated in this function, use `stopping_criteria=StoppingCriteriaList(MaxLengthCriteria(max_length=max_length))` instead.\r\n warnings.warn(\r\nI like cheese, but I don't like cheese. I like cheese because\r\n50257 50257\r\nEmbedding(50257, 768)\r\n\r\n\r\n\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nI like cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese\r\n50258 50257\r\nEmbedding(50258, 768)\r\n\r\n\r\n\r\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\r\nI like<|endoftext|>The first time I saw the new \"The Walking Dead\"\r\n50257 50257\r\nEmbedding(50257, 768)\r\n```\r\n\r\nwhich seems correct to me.\r\n\r\nCan you maybe try to update on master? Also, I'm testing with the `gpt2` checkpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Using `gpt2-medium`.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I am attempting to undo `add_tokens()` and `resize_token_embeddings()` for a given, fine-tuned gpt2-medium model. I deleted the token `del tokenizer.added_tokens_encoder[token]` and `model.resize_token_embeddings(len(tokenizer))`, but there remain too many embeddings in the model and consequently, the output is corrupted.
Steps to reproduce the behavior:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
model_path = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_path)
model = GPT2LMHeadModel.from_pretrained(model_path)
def speak(model, tokenizer, prefix):
input_ids = tokenizer.encode(prefix, return_tensors='pt')
output_ids = model.generate(input_ids, max_length=15, return_dict_in_generate=True, do_sample=False).sequences
print(tokenizer.decode(output_ids[0]))
print(len(tokenizer), len(tokenizer.encoder))
print(model.get_input_embeddings())
print('\n\n')
# out-of-the-box
speak(model, tokenizer, 'I like cheese')
# added token
tokenizer.add_tokens('cheese')
model.resize_token_embeddings(len(tokenizer))
speak(model, tokenizer, 'I like cheese')
# removed token
del tokenizer.added_tokens_encoder['cheese']
model.resize_token_embeddings(len(tokenizer))
speak(model, tokenizer, 'I like cheese')
```
This results in
```
I like cheese.<|endoftext|>
50258 50257
Embedding(50257, 1024)
I like cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese cheese
50259 50257
Embedding(50259, 1024)
I like<|endoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|> <|startoftext|>
50258 50257
Embedding(50258, 1024)
```
## Expected behavior
`model.get_input_embeddings()` of the third output should be equal to the first output `Embedding(50257, 1024)`. Note that I used a fine-tuned version of `gpt2-medium` and I wasn't able to recreate the issue entirely with a pretrained model, but even the deterministic output of a pretrained model will change after deleting a previously added token.
Is this expected behavior?
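In case it helps triage, here is a small sketch of the extra clean-up I would try before resizing back down (not from the original report; the added-token attribute names follow the slow tokenizer internals of this era and are an assumption, not a public API):

```python
# Hypothetical clean-up: remove the token from every added-token structure the
# slow tokenizer keeps, then shrink the embedding matrix again.
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

tokenizer.add_tokens("cheese")
model.resize_token_embeddings(len(tokenizer))

token = "cheese"
token_id = tokenizer.added_tokens_encoder.pop(token)
tokenizer.added_tokens_decoder.pop(token_id, None)
if token in tokenizer.unique_no_split_tokens:  # attribute name is an assumption
    tokenizer.unique_no_split_tokens.remove(token)

model.resize_token_embeddings(len(tokenizer))
print(len(tokenizer), model.get_input_embeddings().num_embeddings)  # should both be 50257
```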
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11281/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11280 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11280/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11280/comments | https://api.github.com/repos/huggingface/transformers/issues/11280/events | https://github.com/huggingface/transformers/issues/11280 | 859,800,647 | MDU6SXNzdWU4NTk4MDA2NDc= | 11,280 | failed to import BertModel | {
"login": "masterbo98",
"id": 73686357,
"node_id": "MDQ6VXNlcjczNjg2MzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/73686357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/masterbo98",
"html_url": "https://github.com/masterbo98",
"followers_url": "https://api.github.com/users/masterbo98/followers",
"following_url": "https://api.github.com/users/masterbo98/following{/other_user}",
"gists_url": "https://api.github.com/users/masterbo98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/masterbo98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/masterbo98/subscriptions",
"organizations_url": "https://api.github.com/users/masterbo98/orgs",
"repos_url": "https://api.github.com/users/masterbo98/repos",
"events_url": "https://api.github.com/users/masterbo98/events{/privacy}",
"received_events_url": "https://api.github.com/users/masterbo98/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"I don't know what your transformers version is, but from your error message, you should install `sentencepiece`: `pip install sentencepiece` or `conda install sentencepiece` with the appropriate channel (probably `conda-forge`)",
"Thank u very much, I have solved this problem. Your work is really remarkable. \r\n\r\n\r\n\r\n---Original---\r\nFrom: \"Lysandre ***@***.***>\r\nDate: Fri, Apr 16, 2021 21:17 PM\r\nTo: ***@***.***>;\r\nCc: ***@***.******@***.***>;\r\nSubject: Re: [huggingface/transformers] failed to import BertModel (#11280)\r\n\r\n\r\n\r\n\r\n \r\nI don't know what your transformers version is, but from your error message, you should install sentencepiece: pip install sentencepiece or conda install sentencepiece with the appropriate channel (probably conda-forge)\r\n \r\n—\r\nYou are receiving this because you authored the thread.\r\nReply to this email directly, view it on GitHub, or unsubscribe.",
"Happy to help!",
"The "
] | 1,618 | 1,618 | 1,618 | NONE | null | # 📚 Migration
## Information
My torch version is 1.6.0. When I try to import BertModel from transformers, it raises an error: ModuleNotFoundError: No module named '_sentencepiece'
I first activated my environment and used 'conda install transformers'.
Please help me: how can I address this problem? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11280/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11279 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11279/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11279/comments | https://api.github.com/repos/huggingface/transformers/issues/11279/events | https://github.com/huggingface/transformers/issues/11279 | 859,779,938 | MDU6SXNzdWU4NTk3Nzk5Mzg= | 11,279 | fp16 compatibility | {
"login": "JiTingyu",
"id": 67445472,
"node_id": "MDQ6VXNlcjY3NDQ1NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/67445472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JiTingyu",
"html_url": "https://github.com/JiTingyu",
"followers_url": "https://api.github.com/users/JiTingyu/followers",
"following_url": "https://api.github.com/users/JiTingyu/following{/other_user}",
"gists_url": "https://api.github.com/users/JiTingyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JiTingyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiTingyu/subscriptions",
"organizations_url": "https://api.github.com/users/JiTingyu/orgs",
"repos_url": "https://api.github.com/users/JiTingyu/repos",
"events_url": "https://api.github.com/users/JiTingyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/JiTingyu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please fill in the issue template for us to help you. You seem to be on an older transformers version, this was fixed in recent versions.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | I am using an RTX 3090 with CUDA 11.0, and the system is Ubuntu 18.04.
I am now running into the problem below and would appreciate advice on how to solve it.
^MEpoch: 0%| | 0/2 [00:00<?, ?it/s]
^MIteration: 0%| | 0/10860 [00:00<?, ?it/s]^[[A^MIteration: 0%| | 0/10860 [00:03<?, ?it/s]
^MEpoch: 0%| | 0/2 [00:03<?, ?it/s]
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",)
Traceback (most recent call last):
File "./examples/run_cls.py", line 645, in <module>
main()
File "./examples/run_cls.py", line 533, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "./examples/run_cls.py", line 159, in train
outputs = model(**inputs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/_utils.py", line 428, in reraise
raise self.exc_type(msg)
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jitingyu/AwesomeMRC-master/transformer-mrc/transformers/modeling_albert.py", line 688, in forward
inputs_embeds=inputs_embeds
File "/home/jitingyu/.conda/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jitingyu/AwesomeMRC-master/transformer-mrc/transformers/modeling_albert.py", line 524, in forward
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
StopIteration | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11279/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11278 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11278/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11278/comments | https://api.github.com/repos/huggingface/transformers/issues/11278/events | https://github.com/huggingface/transformers/issues/11278 | 859,752,846 | MDU6SXNzdWU4NTk3NTI4NDY= | 11,278 | [Benchmark] | {
"login": "folk45ky",
"id": 75996082,
"node_id": "MDQ6VXNlcjc1OTk2MDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/75996082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/folk45ky",
"html_url": "https://github.com/folk45ky",
"followers_url": "https://api.github.com/users/folk45ky/followers",
"following_url": "https://api.github.com/users/folk45ky/following{/other_user}",
"gists_url": "https://api.github.com/users/folk45ky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/folk45ky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/folk45ky/subscriptions",
"organizations_url": "https://api.github.com/users/folk45ky/orgs",
"repos_url": "https://api.github.com/users/folk45ky/repos",
"events_url": "https://api.github.com/users/folk45ky/events{/privacy}",
"received_events_url": "https://api.github.com/users/folk45ky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11278/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11277 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11277/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11277/comments | https://api.github.com/repos/huggingface/transformers/issues/11277/events | https://github.com/huggingface/transformers/issues/11277 | 859,735,432 | MDU6SXNzdWU4NTk3MzU0MzI= | 11,277 | We should make an eco freindly phone and it should be affordable for everyone | {
"login": "Divyansh100",
"id": 82653108,
"node_id": "MDQ6VXNlcjgyNjUzMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/82653108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Divyansh100",
"html_url": "https://github.com/Divyansh100",
"followers_url": "https://api.github.com/users/Divyansh100/followers",
"following_url": "https://api.github.com/users/Divyansh100/following{/other_user}",
"gists_url": "https://api.github.com/users/Divyansh100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Divyansh100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Divyansh100/subscriptions",
"organizations_url": "https://api.github.com/users/Divyansh100/orgs",
"repos_url": "https://api.github.com/users/Divyansh100/repos",
"events_url": "https://api.github.com/users/Divyansh100/events{/privacy}",
"received_events_url": "https://api.github.com/users/Divyansh100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems out of scope."
] | 1,618 | 1,618 | 1,618 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11277/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/11276 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11276/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11276/comments | https://api.github.com/repos/huggingface/transformers/issues/11276/events | https://github.com/huggingface/transformers/issues/11276 | 859,696,327 | MDU6SXNzdWU4NTk2OTYzMjc= | 11,276 | Running gpt-neo 2.7B with less than 13GB of system memory like Colab | {
"login": "finetunej",
"id": 82650881,
"node_id": "MDQ6VXNlcjgyNjUwODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/82650881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finetunej",
"html_url": "https://github.com/finetunej",
"followers_url": "https://api.github.com/users/finetunej/followers",
"following_url": "https://api.github.com/users/finetunej/following{/other_user}",
"gists_url": "https://api.github.com/users/finetunej/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finetunej/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finetunej/subscriptions",
"organizations_url": "https://api.github.com/users/finetunej/orgs",
"repos_url": "https://api.github.com/users/finetunej/repos",
"events_url": "https://api.github.com/users/finetunej/events{/privacy}",
"received_events_url": "https://api.github.com/users/finetunej/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,618 | 1,618 | null | NONE | null | # 🚀 Feature request
A way to conserve regular system memory while loading large models. On systems without much system memory, the process crashes because it tries to load both the weight checkpoint and the model into system memory. In the case of gpt-neo 2.7B, this is even worse because the checkpoint is offered only in float32 format, taking up twice as much space.
## Motivation
Free Colab often offers GPUs with 16GB of VRAM but only about 13GB of RAM. This increases the barrier of entry for people to play with these kinds of models. I have found a way to load models in this kind of situation, but it is not general or well integrated. By posting it here, I hope that a more general implementation can be built at some point.
This would also help #11271.
## Your contribution
It is possible to work around this by loading the checkpoint directly into VRAM, casting it to float16, instantiating the model in VRAM and only then applying the weights from the checkpoint.
To do this, first a patch has to be applied to src/transformers/models/gpt_neo/modeling_gpt_neo.py. This is based on the 4.5.1 release.
703c703
< self.h = nn.ModuleList([GPTNeoBlock(config, layer_id=i) for i in range(config.num_layers)])
---
> self.h = nn.ModuleList([GPTNeoBlock(config, layer_id=i).half().cuda() for i in range(config.num_layers)])
890,891c890,891
< self.transformer = GPTNeoModel(config)
< self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
---
> self.transformer = GPTNeoModel(config).half().cuda()
> self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False).half().cuda()
This causes space for the biggest part of the model to be allocated directly on the GPU, which has more space in the free Colab scenario. It also moves the other parts of the model to GPU. Now the model can be instantiated like this:
from transformers.file_utils import cached_path, WEIGHTS_NAME, hf_bucket_url
model_name = "EleutherAI/gpt-neo-2.7B"
archive_file = hf_bucket_url(model_name, filename=WEIGHTS_NAME)
resolved_archive_file = cached_path(archive_file)
checkpoint = torch.load(resolved_archive_file, map_location="cuda:0")
for k in checkpoint.keys():
checkpoint[k] = checkpoint[k].half()
model = GPTNeoForCausalLM.from_pretrained(model_name, state_dict=checkpoint).half().to("cuda")
for k in list(checkpoint.keys()):
del checkpoint[k] | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11276/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11276/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11275 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11275/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11275/comments | https://api.github.com/repos/huggingface/transformers/issues/11275/events | https://github.com/huggingface/transformers/pull/11275 | 859,483,838 | MDExOlB1bGxSZXF1ZXN0NjE2NTU3NjM3 | 11,275 | modify double considering special tokens in `language_modeling.py` | {
"login": "taepd",
"id": 49802647,
"node_id": "MDQ6VXNlcjQ5ODAyNjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/49802647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taepd",
"html_url": "https://github.com/taepd",
"followers_url": "https://api.github.com/users/taepd/followers",
"following_url": "https://api.github.com/users/taepd/following{/other_user}",
"gists_url": "https://api.github.com/users/taepd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taepd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taepd/subscriptions",
"organizations_url": "https://api.github.com/users/taepd/orgs",
"repos_url": "https://api.github.com/users/taepd/repos",
"events_url": "https://api.github.com/users/taepd/events{/privacy}",
"received_events_url": "https://api.github.com/users/taepd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
In `class TextDatasetForNextSentencePrediction`, `self.tokenizer.num_special_tokens_to_add(pair=True)` is taken into account twice when computing the usable block size.
So I remove `self.block_size` and instead pass the value as a parameter to `def create_examples_from_document`, the same way `class LineByLineWithSOPTextDataset` does.
Fixes # (issue): special tokens counted twice in `language_modeling.py`
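For reviewers, a tiny runnable illustration of the double counting (numbers are just an example; the attribute flow in the real class is paraphrased rather than quoted):

```python
# Why subtracting the special-token budget twice shrinks the usable length.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
block_size = 512
special = tokenizer.num_special_tokens_to_add(pair=True)  # 3 for BERT: [CLS], [SEP], [SEP]

before_fix = (block_size - special) - special  # budget subtracted in __init__ and again per document
after_fix = block_size - special               # what this PR computes, in one place

print(special, before_fix, after_fix)          # 3 506 509
```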
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11275/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11275",
"html_url": "https://github.com/huggingface/transformers/pull/11275",
"diff_url": "https://github.com/huggingface/transformers/pull/11275.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11275.patch",
"merged_at": 1618845883000
} |
https://api.github.com/repos/huggingface/transformers/issues/11274 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11274/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11274/comments | https://api.github.com/repos/huggingface/transformers/issues/11274/events | https://github.com/huggingface/transformers/pull/11274 | 859,400,794 | MDExOlB1bGxSZXF1ZXN0NjE2NDg4ODA5 | 11,274 | [debug utils] activation/weights underflow/overflow detector | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I haven't commented on each print statement, but they should use the logger maybe? \r\n\r\nI started with it first and then replaced with print, because we need all the horizontal space and the really busy long pre-amble is just getting in the way, IMHO. I also don't see what useful information it'd contribute because the tool raises an exception when it detects the problem. Finally, what if someone disabled the logger - it'd not be able to do its work then. Please correct me if I'm missing something.\r\n\r\n> Also I think the `DebugActivationOverflow` should be documented in our internals API doc (since we tell people to use it in their own Trainers). `internal/trainer_utils` is probably the place for that.\r\n\r\nWill do. Thank you for suggesting where to put it.\r\n\r\nConverted to .rst - found this new tool https://github.com/miyakogi/m2r that did it well, just needed to clean up a weird quirk.\r\n",
"All links have been added and tweaked the doc some more to improve readability.",
"OK, the original functionality has been expanded to include a lot more useful information. Please see the updated copious documentation both in the docstring and the user docs.\r\n\r\nThe main changes are that:\r\n1. we now print each frame separately and include inputs/outputs/weights\r\n2. there is a tracing mode which can easily trace any number of batches at will\r\n\r\nI wasn't sure how I could integrate the new features into the limited `--debug underflow_overflow` interface as it now has 3 optional parameters. So for now these can be activated directly from the script. If you can think how I could make these work with ``TrainingArguments`` I'm all ears.\r\n\r\nAs this has changed a lot inside I'd appreciate another look from @sgugger and @LysandreJik - no rush please. And thank you!"
] | 1,618 | 1,619 | 1,619 | CONTRIBUTOR | null | This PR came to be out of the overflow issue we have been dealing with in t5/mt5/gpt-neo due to bf16 pretrained models. This PR:
* adds a new file `debug_utils.py`
* adds a new helper debug class `DebugUnderOverflow` and a function `detect_overflow` that performs the same check on any tensor variable (useful for detailed debugging)
* extends `Trainer` to support `--debug underflow_overflow` which automatically activates this detector - no changes to the code required
* overloads the old `--debug`, which was previously used only for very specific TPU debug prints, folding that feature into the now multi-option `--debug` (similar to `--sharded_ddp`). The old unqualified `--debug` becomes the explicit `--debug tpu_metrics_debug`. I know this somewhat breaks backward compatibility for `--debug`, but since it is a debug option it is hopefully OK.
* creates a new doc `debugging.rst` - will add some more useful debug recipes into it later
I'm open to suggestions of different namings to all of the new things...
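A quick usage sketch for reviewers (the class is called `DebugUnderOverflow` in this description and `DebugUnderflowOverflow` in current `transformers.debug_utils`; adjust the import to whichever revision you are on):

```python
# Sketch: attach the detector to any model and run a normal forward/backward pass.
# It registers forward hooks and raises as soon as an inf/nan shows up, printing
# information about the offending module.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers.debug_utils import DebugUnderflowOverflow

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
debug_overflow = DebugUnderflowOverflow(model)

batch = tokenizer("translate English to German: hello", return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()

# With the Trainer-based example scripts the same thing is a single flag:
#   python run_translation.py ... --debug underflow_overflow
```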
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11274/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11274",
"html_url": "https://github.com/huggingface/transformers/pull/11274",
"diff_url": "https://github.com/huggingface/transformers/pull/11274.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11274.patch",
"merged_at": 1619806547000
} |
https://api.github.com/repos/huggingface/transformers/issues/11273 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11273/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11273/comments | https://api.github.com/repos/huggingface/transformers/issues/11273/events | https://github.com/huggingface/transformers/pull/11273 | 859,391,145 | MDExOlB1bGxSZXF1ZXN0NjE2NDgxMDA2 | 11,273 | update dependency_versions_table | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | Missed updating this when the version was bumped.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11273/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11273",
"html_url": "https://github.com/huggingface/transformers/pull/11273",
"diff_url": "https://github.com/huggingface/transformers/pull/11273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11273.patch",
"merged_at": 1618539029000
} |
https://api.github.com/repos/huggingface/transformers/issues/11272 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11272/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11272/comments | https://api.github.com/repos/huggingface/transformers/issues/11272/events | https://github.com/huggingface/transformers/issues/11272 | 859,343,781 | MDU6SXNzdWU4NTkzNDM3ODE= | 11,272 | squad_convert_example_to_features is broken | {
"login": "brian8128",
"id": 10691563,
"node_id": "MDQ6VXNlcjEwNjkxNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/10691563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brian8128",
"html_url": "https://github.com/brian8128",
"followers_url": "https://api.github.com/users/brian8128/followers",
"following_url": "https://api.github.com/users/brian8128/following{/other_user}",
"gists_url": "https://api.github.com/users/brian8128/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brian8128/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brian8128/subscriptions",
"organizations_url": "https://api.github.com/users/brian8128/orgs",
"repos_url": "https://api.github.com/users/brian8128/repos",
"events_url": "https://api.github.com/users/brian8128/events{/privacy}",
"received_events_url": "https://api.github.com/users/brian8128/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please use `squad_convert_example_to_features_init(yourtokenizer)` to set the tokenizer."
] | 1,618 | 1,618 | 1,618 | NONE | null | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-132-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
Models:
NA
## Information
The squad_convert_example_to_features function requires a tokenizer but there is no way to give it one so you always get `NameError: name 'tokenizer' is not defined`
## To reproduce
Call `squad_convert_example_to_features` with any input.
## Expected behavior
Convert a squad example to features.
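A workaround sketch based on the maintainer's comment above (not part of the original report; the keyword names are from memory and should be checked against the installed version):

```python
# Initialise the module-level tokenizer first, then convert a single example.
from transformers import AutoTokenizer
from transformers.data.processors.squad import (
    SquadExample,
    squad_convert_example_to_features,
    squad_convert_example_to_features_init,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
squad_convert_example_to_features_init(tokenizer)

example = SquadExample(
    qas_id="1",
    question_text="Where do I live?",
    context_text="I live in Paris.",
    answer_text="Paris",
    start_position_character=10,
    title="example",
)

features = squad_convert_example_to_features(
    example,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    padding_strategy="max_length",
    is_training=False,
)
print(len(features))
```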
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11272/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11271 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11271/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11271/comments | https://api.github.com/repos/huggingface/transformers/issues/11271/events | https://github.com/huggingface/transformers/issues/11271 | 859,315,034 | MDU6SXNzdWU4NTkzMTUwMzQ= | 11,271 | gpt-neo 2.7 crashes, 1.3 runs fine | {
"login": "caseybasichis",
"id": 1331371,
"node_id": "MDQ6VXNlcjEzMzEzNzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1331371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caseybasichis",
"html_url": "https://github.com/caseybasichis",
"followers_url": "https://api.github.com/users/caseybasichis/followers",
"following_url": "https://api.github.com/users/caseybasichis/following{/other_user}",
"gists_url": "https://api.github.com/users/caseybasichis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caseybasichis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caseybasichis/subscriptions",
"organizations_url": "https://api.github.com/users/caseybasichis/orgs",
"repos_url": "https://api.github.com/users/caseybasichis/repos",
"events_url": "https://api.github.com/users/caseybasichis/events{/privacy}",
"received_events_url": "https://api.github.com/users/caseybasichis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would guess that's a memory issue indeed! That should be the only difference between the two checkpoints.",
"Oof makes sense. I'll check back if that doesn't fix it. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | Loading the generator crashes Python
```
python3
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B', device=0)
[2]+ Killed python3
Killed
```
I have an A6000 with 48 GB.
... it looks like I'm running on 16 GB of RAM? Maybe a stick is dead; it's usually 32 GB.
Is it a system RAM issue?
## Environment info
- `transformers` version: 4.5.0
- Platform: Linux-5.4.0-70-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.0.dev20210217+cu112 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11271/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11270 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11270/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11270/comments | https://api.github.com/repos/huggingface/transformers/issues/11270/events | https://github.com/huggingface/transformers/pull/11270 | 859,281,467 | MDExOlB1bGxSZXF1ZXN0NjE2Mzg4NjQ1 | 11,270 | Workflow fixes | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's one advantage of using the docker images! We control the exact CUDA and torch versions by controlling the images directly, so it won't break until we manually update it, at which point we should remember to be extra careful about this dependency."
] | 1,618 | 1,618 | 1,618 | MEMBER | null | Fixes some workflow issues:
- Installs torch scatter in the CI with the appropriate pre-compiled version
- Removes DeepSpeed and Fairscale from the non-cuda-extension workflows
- Adds the forgotten reports for cuda-extension workflows
- Adds the result of the cuda-extension workflows to be sent to Slack
Also it updates the `deepspeed` dependency in the dependency table, because they seem mismatched on `master` (running `make fixup` fixed it for me) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11270/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11270/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11270",
"html_url": "https://github.com/huggingface/transformers/pull/11270",
"diff_url": "https://github.com/huggingface/transformers/pull/11270.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11270.patch",
"merged_at": 1618543278000
} |
https://api.github.com/repos/huggingface/transformers/issues/11268 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11268/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11268/comments | https://api.github.com/repos/huggingface/transformers/issues/11268/events | https://github.com/huggingface/transformers/issues/11268 | 858,905,324 | MDU6SXNzdWU4NTg5MDUzMjQ= | 11,268 | DataCollatorForSOP marked as deprecated but DataCollatorForLanguageModeling does not offer the same functionality | {
"login": "xamm",
"id": 39380924,
"node_id": "MDQ6VXNlcjM5MzgwOTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/39380924?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xamm",
"html_url": "https://github.com/xamm",
"followers_url": "https://api.github.com/users/xamm/followers",
"following_url": "https://api.github.com/users/xamm/following{/other_user}",
"gists_url": "https://api.github.com/users/xamm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xamm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xamm/subscriptions",
"organizations_url": "https://api.github.com/users/xamm/orgs",
"repos_url": "https://api.github.com/users/xamm/repos",
"events_url": "https://api.github.com/users/xamm/events{/privacy}",
"received_events_url": "https://api.github.com/users/xamm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"The `sentence_order_label` will be left as is and collated if your dataset provides them. This is tested [here](https://github.com/huggingface/transformers/blob/2550b41aa2ec34f05ddfd3ec5875ddb32ad78d58/tests/test_data_collator.py#L268) which is adapted from the old test of `DataCollatorForSOP`.",
"Oh, now I see it too. I was kind of under the assumption that the DataCollator could also be used on non-preprocessed data to tokenize and preprocess batches on demand."
] | 1,618 | 1,618 | 1,618 | NONE | null | The `DataCollatorForSOP` is marked as deprecated and it is recommended to use the `DataCollatorForLanguageModeling` instead. [Link to the data_collator.py](https://github.com/huggingface/transformers/blob/4bae96ec2bee265f938fc262201538819419089a/src/transformers/data/data_collator.py)
As far as I can tell, the labels for the sentence order prediction task (`sentence_order_label`) are not set in `DataCollatorForLanguageModeling`.
Will this be added to `DataCollatorForLanguageModeling` in a future release, or what is the correct procedure when a data collator is needed for both the MLM and SOP tasks at the same time, as in ALBERT training?
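For concreteness, this is roughly the usage I have in mind — a minimal sketch that assumes the dataset already provides a `sentence_order_label` per example and that `DataCollatorForLanguageModeling` passes unknown keys through to the batch (the names and texts below are made up):
```python
from transformers import AlbertTokenizerFast, DataCollatorForLanguageModeling

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

# Each example carries its token ids plus a sentence-order label
# (0 = segments in original order, 1 = segments swapped) from my own preprocessing.
examples = [
    {"input_ids": tokenizer("First segment. Second segment.")["input_ids"], "sentence_order_label": 0},
    {"input_ids": tokenizer("Second segment. First segment.")["input_ids"], "sentence_order_label": 1},
]

batch = collator(examples)
# Expecting masked input_ids / labels for MLM plus an untouched sentence_order_label column.
print(batch.keys())
```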
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11268/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11267 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11267/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11267/comments | https://api.github.com/repos/huggingface/transformers/issues/11267/events | https://github.com/huggingface/transformers/issues/11267 | 858,891,748 | MDU6SXNzdWU4NTg4OTE3NDg= | 11,267 | inf/nan in generate (beam_sample) with small temperature values | {
"login": "elsanns",
"id": 3648991,
"node_id": "MDQ6VXNlcjM2NDg5OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3648991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elsanns",
"html_url": "https://github.com/elsanns",
"followers_url": "https://api.github.com/users/elsanns/followers",
"following_url": "https://api.github.com/users/elsanns/following{/other_user}",
"gists_url": "https://api.github.com/users/elsanns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elsanns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elsanns/subscriptions",
"organizations_url": "https://api.github.com/users/elsanns/orgs",
"repos_url": "https://api.github.com/users/elsanns/repos",
"events_url": "https://api.github.com/users/elsanns/events{/privacy}",
"received_events_url": "https://api.github.com/users/elsanns/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @elsanns \r\n\r\nGreat catch, thanks for the detailed explanation!\r\n\r\nYour observation is right and I can re-produce this error.\r\n\r\nRe, your question\r\n> should one just implement their own logits_warper handling float overflow?\r\n\r\nActually there's an `InfNanRemoveLogitsProcessor ` (#10769) which does just that, and can be enabled by passing `remove_invalid_values=True` to `generate`. But the problem is that it replaces the `inf` values by the maximum value for the current `dtype` which is still quite large and ends up becoming `inf` again after adding the `beam_scores`.\r\n\r\nAlso if you use `InfNanRemoveLogitsProcessor` as `logits_warper` (so that it gets applied after adding the `beam_scores`) then it no longer gives this error but seems to be shifting the distribution and the generated output doesn't make sense. \r\n\r\nI tried your fix of normalizing `beam_scores` and it seems to be working.\r\n\r\nOne possible solution would be to add `normalize_beam_scores` argument and when it is `True`, `BeamScorer` would return the normalized `beam_scores`.\r\n\r\nWhat do you think @patrickvonplaten?",
"Hi @patil-suraj,\r\n\r\nThanks for replying!\r\nI think there are several ways of scaling `beam_scores`, e.g. using `beam_sample` with a custom `beam_scorer` scaling the input before processing. Pros: no changes to the code, cons: not available through `generate`.\r\nAnother approach would be applying logits processors and warper before softmax but it could be a breaking change for users writing custom processors.",
"Hey @elsanns,\r\n\r\nSorry for answering so late! My answer here: https://github.com/huggingface/transformers/issues/14993#issuecomment-1003945387 might also be of relevance. \r\n\r\nIn short, I think there are a couple of things here:\r\n- `beam_sample()` is quite an edge-case because encoder-decoder models are usually evaluated with `beam_search` instead? Could I ask why you chose to use `beam_sample()` here? Did it give better results for you? \r\n- Distilled models, like `distilbart` tend to have more extreme output logits as lots of knowledge is compressed into comparably little capacity\r\n- Lastly, as said in the linked answer above, I don't know of an \"official\" beam sample algorithm which is the reason `transformers` `beam_sample()` algorithm is not implemented according to an official paper or any mathematically sound algorithm. \r\n\r\nIMO a better solution than having the beam score normalize it's outputs would be to maybe add a `Normalizer` to the logits warper so that before the logits are sampled they are being normalized.\r\n\r\nIn case we see more and more of issues like this one or https://github.com/huggingface/transformers/issues/14993 we might also consider change the `beam_sample()` algorithm to follow the approach proposed in Algorithm 2 in https://github.com/huggingface/transformers/issues/14993 . This would however be a big breaking change and I am currently really not sure that it is worth it\r\n",
"Hi @patrickvonplaten,\r\n\r\nThank you for a detailed answer. \r\n\r\nI noticed this behaviour testing various decoding methods, and I don't recall seeing a significant advantage of `beam_sample` in any particular use case.\r\n\r\nSince the new approach would be a breaking change, it seems a right solution to keep it the way it is for now.\r\n\r\nThanks again for your answer",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,644 | 1,644 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux
- Python version: 3.6.9
- PyTorch version (GPU?): '1.8.0' (yes)
## Information
The `generate` function (`beam_sample`) throws an error when small temperature values are passed.
## To reproduce
```
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer
)
model_name = "sshleifer/distilbart-xsum-12-3"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "New York City (NYC), often simply called New York, is the most populous city in the United States"
input_ids = tokenizer.encode(text, return_tensors='pt')
sample_outputs = model.generate(input_ids,
num_beams=3,
do_sample=True,
temperature=0.2
)
```
```
Traceback (most recent call last):
File "test.py", line 16, in <module>
temperature=0.2
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/generation_utils.py", line 1113, in generate
**model_kwargs,
File "/opt/anaconda3/envs/tensorflow2/lib/python3.6/site-packages/transformers/generation_utils.py", line 2134, in beam_sample
next_tokens = torch.multinomial(probs, num_samples=2 * num_beams)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
Another way to reproduce this error is to use higher temperatures and more iterations (i.e. generate a longer output).
It looks like this error is caused by `next_token_scores` growing to -inf and `probs` becoming nan.
Apparently, large absolute values accumulate over iterations because `next_token_scores` are no longer normalized after adding unnormalized `beam_scores`.
`beam_scores` are calculated from the output of `logits_warper(input_ids, next_token_scores)`,
and can grow fast with low temperatures (warper does: `scores = scores / self.temperature`).
## Expected behavior
Is the growth of the unscaled values a desired behaviour, and should one just implement their own `logits_warper` that handles float overflow?
If not, a quick fix, just for demonstration, is scaling the values of `beam_scores` added to `next_token_scores` by replacing:
`next_token_scores = next_token_scores + beam_scores[:, None].expand_as(next_token_scores)`
with:
`beam_scores_softmax = F.softmax(beam_scores, dim=-1) `
`next_token_scores = next_token_scores + beam_scores_softmax[:, None].expand_as(next_token_scores)`
It works fine but changes the absolute values of the scores, which users may rely on.
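For illustration, the kind of `logits_warper` I had in mind — a minimal, unvetted sketch that re-normalizes the scores to log-probabilities so that repeatedly adding `beam_scores` cannot drive them towards -inf; it could be appended to a custom `logits_warper` list when calling `beam_sample` directly:
```python
import torch
from transformers import LogitsWarper


class RenormalizeLogitsWarper(LogitsWarper):
    """Maps scores back to normalized log-probabilities before sampling."""

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        return torch.nn.functional.log_softmax(scores, dim=-1)
```
I have not checked how this interacts with the top-k/top-p warpers or with the returned `sequences_scores`, so it is only meant to show the shape of the workaround.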
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11267/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11266 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11266/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11266/comments | https://api.github.com/repos/huggingface/transformers/issues/11266/events | https://github.com/huggingface/transformers/issues/11266 | 858,785,033 | MDU6SXNzdWU4NTg3ODUwMzM= | 11,266 | chunk of words for input token | {
"login": "jihwanp",
"id": 70801434,
"node_id": "MDQ6VXNlcjcwODAxNDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/70801434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jihwanp",
"html_url": "https://github.com/jihwanp",
"followers_url": "https://api.github.com/users/jihwanp/followers",
"following_url": "https://api.github.com/users/jihwanp/following{/other_user}",
"gists_url": "https://api.github.com/users/jihwanp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jihwanp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jihwanp/subscriptions",
"organizations_url": "https://api.github.com/users/jihwanp/orgs",
"repos_url": "https://api.github.com/users/jihwanp/repos",
"events_url": "https://api.github.com/users/jihwanp/events{/privacy}",
"received_events_url": "https://api.github.com/users/jihwanp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | Hi I have some questions about using pretrained bert.
Can I put a chunk of words into one input token? For example, split "hi my name is Linda and today i will~" into "hi my name is Linda" and "and today i will", turn each split into one embedding vector (e.g. by averaging word2vec vectors), and treat each split vector as one input token. Is it okay to apply this to the existing pre-trained models?
Actually, I'm forced to use phrase-wise tokens in my task, so models for long sequences are not an option.
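Something like the following is what I mean — a rough sketch where the phrase vectors are random placeholders and are simply assumed to already have BERT's hidden size (768 for `bert-base-uncased`):
```python
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# One example with two "phrase tokens": e.g. an averaged word2vec vector per phrase,
# projected to the model's hidden size. Shape: (batch, num_phrases, hidden_size).
phrase_embeddings = torch.randn(1, 2, 768)

outputs = model(inputs_embeds=phrase_embeddings)
print(outputs.last_hidden_state.shape)  # torch.Size([1, 2, 768])
```
My question is essentially whether the pre-trained weights, which were learned on sub-word token embeddings, can be expected to behave sensibly with phrase-level vectors like these.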
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11266/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11265 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11265/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11265/comments | https://api.github.com/repos/huggingface/transformers/issues/11265/events | https://github.com/huggingface/transformers/issues/11265 | 858,756,641 | MDU6SXNzdWU4NTg3NTY2NDE= | 11,265 | TensorFlow "predict" returns empty output with MirroredStrategy | {
"login": "ZJaume",
"id": 11339330,
"node_id": "MDQ6VXNlcjExMzM5MzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/11339330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZJaume",
"html_url": "https://github.com/ZJaume",
"followers_url": "https://api.github.com/users/ZJaume/followers",
"following_url": "https://api.github.com/users/ZJaume/following{/other_user}",
"gists_url": "https://api.github.com/users/ZJaume/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZJaume/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZJaume/subscriptions",
"organizations_url": "https://api.github.com/users/ZJaume/orgs",
"repos_url": "https://api.github.com/users/ZJaume/repos",
"events_url": "https://api.github.com/users/ZJaume/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZJaume/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "jmwoloso",
"id": 7530947,
"node_id": "MDQ6VXNlcjc1MzA5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmwoloso",
"html_url": "https://github.com/jmwoloso",
"followers_url": "https://api.github.com/users/jmwoloso/followers",
"following_url": "https://api.github.com/users/jmwoloso/following{/other_user}",
"gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions",
"organizations_url": "https://api.github.com/users/jmwoloso/orgs",
"repos_url": "https://api.github.com/users/jmwoloso/repos",
"events_url": "https://api.github.com/users/jmwoloso/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmwoloso/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "jmwoloso",
"id": 7530947,
"node_id": "MDQ6VXNlcjc1MzA5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7530947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmwoloso",
"html_url": "https://github.com/jmwoloso",
"followers_url": "https://api.github.com/users/jmwoloso/followers",
"following_url": "https://api.github.com/users/jmwoloso/following{/other_user}",
"gists_url": "https://api.github.com/users/jmwoloso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmwoloso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmwoloso/subscriptions",
"organizations_url": "https://api.github.com/users/jmwoloso/orgs",
"repos_url": "https://api.github.com/users/jmwoloso/repos",
"events_url": "https://api.github.com/users/jmwoloso/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmwoloso/received_events",
"type": "User",
"site_admin": false
}
] | [
"Tested on latest release and still present.",
"Pinging our TensorFlow expert, @Rocketknight1 ",
"I've managed to reproduce this but I'm very confused about the cause, especially because I'm pretty sure I've used model.predict with MirroredStrategy in our codebase before.\r\n\r\nI've tested your code snippet with a standard RNN instead of BERT and confirmed that it works fine, and I tried distilbert instead of BERT and the problem remained, so the problem does seem to be the combination of MirroredStrategy and our models.\r\n\r\nI'm going to keep poking around at this, but if you discover anything else that might help me figure out what's going on, please let me know!",
"Update: This bug appears in our `run_text_classification.py` script too, again only when using predict(). I'm investigating.",
"Update 2: `fit()` and `evaluate()` seemed to work correctly in a MirroredStrategy context (which is good because I have a whole example that uses them). The issue is specific to `predict()`",
"Hi, just keeping this issue alive! I've traced the issue to the way we return our values from the `call()` methods - I think Keras doesn't like the thing we do with a subclassed OrderedDict. We're going to reach out to our contacts at Google in the next couple of days and figure out what the best approach is - whether we need to refactor that totally, or if there's an easy workaround.",
"Putting this here as a writeup of what we know so far:\r\n\r\nThe issue is not caused by returning an `OrderedDict`, but instead because we return a `TFBaseModelOutput`, which is a subclass of `OrderedDict` decorated with dataclass. Refer to the code [here](https://github.com/huggingface/transformers/blob/38a716cd41f22f6a7d5ff3dc081903090198803a/src/transformers/modeling_tf_outputs.py#L24-L46).\r\n\r\nIf we just return a dict, `OrderedDict` or `ModelOutput` (the parent class for `TFBaseModelOutput`, subclassed from `OrderedDict`), everything works okay. Therefore the central issue is this data class, which will probably need to be removed. We're looking at how we can do that now!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi,\r\nAny updates about this issue? ",
"definitely looking forward to a fix for this. how can we help @Rocketknight1?",
"@jmwoloso @ayalaall \r\n\r\nHey all! I'm going to reopen this issue, even though we're short on bandwidth for it right now. The current situation is that we know where the problem lies - it's in the fact that we're returning a `@dataclass` decorated object from our models, and that doesn't play nicely with Keras. We get away with it when we're not in a `Strategy` context, but it breaks inside of one, even though `fit()` still usually works correctly.\r\n\r\nThe problem is that even though the change needed is relatively small, it's finicky because we place a lot of value on maintaining a very consistent API for users, and changing the return class for every TF model on the entire hub is a big deal. So we need to find some way to make sure existing code is as unaffected as possible in the process, and that requires some engineering exploration.\r\n\r\nThe good news is the `@dataclass` decorator is really just for convenience rather than a critical part of the class - we just use it to ensure that certain keys are always present in the output dict, and set with default values, and it got ported over from the original PyTorch code. We could probably make some other subclass of `Dict` or `OrderedDict` and return that, and maybe that would play nicer with Keras, but I have a few other major things on my to do list, so I don't know if I'll be able to get to that for a month or two. If anyone wants to experiment and file a PR, feel free to ask any questions you want here. If not, I'll do my best to get to it as soon as I can.",
"Say no more @Rocketknight1! I'll take a look and get familiar with the components involved and see if I can devise a minimally-invasive solution. Thanks for re-opening!",
"You can assign this to me if you like as well.",
"@jmwoloso Sure, if you'd like! If you have any questions along the way, feel free to ask.",
"@ZJaume @ayalaall @Rocketknight1 \r\nAn update for the group. I'm still doing some testing, but this is fixed in both `master` and `transformers==4.10.0`!\r\n\r\nUsing a single VM (4 v100 GPUs) with `MirroredStrategy` works out of the box. `transformers==4.9.2` (the version I happened to be using) it does not work.\r\n\r\n```\r\nfrom transformers import TFDistilBertForSequenceClassification, DistilBertTokenizerFast\r\nimport tensorflow as tf\r\n\r\nstrategy = tf.distribute.MirroredStrategy()\r\nwith strategy.scope():\r\n tf_model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')\r\n\r\n tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')\r\n inputs = tokenizer('This is a test', 'Esto es una prueba',\r\n return_tensors='tf', max_length=200,\r\n padding='max_length', truncation=True,\r\n return_attention_mask=True,\r\n return_token_type_ids=False)\r\n\r\n print(tf_model.predict([inputs[\"input_ids\"], inputs[\"attention_mask\"]], verbose=1))\r\n print(tf_model([inputs[\"input_ids\"], inputs[\"attention_mask\"]]))\r\n\r\n```\r\n\r\n```\r\nWARNING:tensorflow:Collective ops is not configured at program startup. Some performance features may not be enabled.\r\nINFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')\r\nDownloading: 100%|██████████| 483/483 [00:00<00:00, 551kB/s]\r\nDownloading: 100%|██████████| 363M/363M [00:04<00:00, 79.6MB/s] \r\nSome layers from the model checkpoint at distilbert-base-uncased were not used when initializing TFDistilBertForSequenceClassification: ['vocab_projector', 'vocab_layer_norm', 'vocab_transform', 'activation_13']\r\n- This IS expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing TFDistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome layers of TFDistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['dropout_19', 'classifier', 'pre_classifier']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nDownloading: 100%|██████████| 232k/232k [00:00<00:00, 1.03MB/s]\r\nDownloading: 100%|██████████| 466k/466k [00:00<00:00, 1.52MB/s]\r\nDownloading: 100%|██████████| 28.0/28.0 [00:00<00:00, 28.1kB/s]\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:From /databricks/python/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version.\r\nInstructions for updating:\r\nThe `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\r\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\r\n1/1 [==============================] - 10s 10s/step\r\nTFSequenceClassifierOutput(loss=None, logits=array([[ 0.03777119, -0.12381434]], dtype=float32), hidden_states=None, attentions=None)\r\nTFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[ 0.0377712 , -0.12381432]], dtype=float32)>, hidden_states=None, attentions=None)\r\n```",
"@jmwoloso That's really fascinating! I didn't think I touched any relevant code between those releases, but possibly one of the other engineers did. Can you try a couple of other models, say BERT or RoBERTa, to see if you see the same pattern with both?",
"I tried with Roberta and DistilBert with the new version and it doesn't give empty output any more. Thank you!",
"Hi @jmwoloso @ZJaume this is great, thank you! Can you confirm it still works with an input array larger than the batch size? (to ensure the work is getting distributed to multiple GPUs and then merged correctly)",
"@Rocketknight1 yeah i'll take a look at doing that today and posting confirmation in here and then we can close this out!",
"Working with 1024 samples and 8 batch size per gpu.",
"I'm still trying to test it out but databricks is having issues spinning up gpu clusters today :roll_eyes: \r\n\r\nI think we're good to close this out @Rocketknight1 unless there are other scenarios you want us to check out.",
"So I noticed I had the same problem when I do this with basic Tensorflow. I found that the Tokenizer() function from tensorflow.keras.preprocessing.text seems to be an empty when you load the model. Which is understandable because you are not loading any sort of data to the Tokenizer. \r\nHow I was able to solve it was\r\n\r\n```\r\nimport pickle\r\n\r\n# saving\r\nwith open('tokenizer.pickle', 'wb') as handle:\r\n pickle.dump(tokenizer, handle, protocol=pickle.HIGHEST_PROTOCOL)\r\n\r\n# loading\r\nwith open('tokenizer.pickle', 'rb') as handle:\r\n tokenizer = pickle.load(handle)\r\n```",
"@jmwoloso @ZJaume Thank you for all your help! I'm gonna mark this as resolved now, since the problem doesn't seem to have recurred.\r\n\r\n@JithLord I think that's a separate problem, unless it's also unique to `MirroredStrategy` contexts, so I'm gonna close this issue anyway. If you think you've found a bugs in the repo, though, please feel free to file a separate issue (or file it with Tensorflow upstream if you think the bug is there)."
] | 1,618 | 1,631 | 1,631 | NONE | null | I'm trying to use the `predict` method of the Keras TensorFlow API, but it returns an empty output even though the input is being processed. Calling the model directly seems to work.
EDIT: the predict method works correctly if the model is loaded with a single-GPU strategy.
## Environment info
- `transformers` version: `4.5.1`
- Platform: Linux CentOS 8.1
- Python version: `3.7.10`
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): `2.3.2`(True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: multi-gpu on a single machine
### Who can help
## Information
Model I am using: Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BertTokenizerFast, TFBertForSequenceClassification
import tensorflow as tf
strategy = tf.distribute.MirroredStrategy()
#strategy = tf.distribute.OneDeviceStrategy("/gpu:0")
with strategy.scope():
tf_model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
inputs = tokenizer('This is a test', 'Esto es una prueba',
return_tensors='tf', max_length=200,
padding='max_length', truncation=True,
return_attention_mask=True,
return_token_type_ids=False)
print(tf_model.predict([inputs["input_ids"], inputs["attention_mask"]],
verbose=1))
print(tf_model([inputs["input_ids"], inputs["attention_mask"]]))
```
```
All model checkpoint layers were used when initializing TFBertForSequenceClassification.
Some layers of TFBertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
WARNING:tensorflow:From /venv/lib/python3.7/site-packages/tensorflow/python/data/ops/multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Iterator.get_next_as_optional()` instead.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
1/1 [==============================] - 0s 241us/step
TFSequenceClassifierOutput(loss=None, logits=None, hidden_states=None, attentions=None)
TFSequenceClassifierOutput(loss=None, logits=<tf.Tensor: shape=(1, 2), dtype=float32, numpy=array([[-0.47814545, 0.35146457]], dtype=float32)>, hidden_states=None, attentions=None)
```
## Expected behavior
Output should be the same as when the model is called directly.
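In the meantime I am working around it by calling the model directly on batches instead of `predict()` — a rough sketch (it does not replicate `predict`'s distributed batching):
```python
import numpy as np

def predict_logits(model, batches):
    # Run the model eagerly batch by batch and stack the logits.
    all_logits = []
    for batch in batches:
        outputs = model(batch, training=False)
        all_logits.append(outputs.logits.numpy())
    return np.concatenate(all_logits, axis=0)
```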
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11265/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/11265/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11264 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11264/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11264/comments | https://api.github.com/repos/huggingface/transformers/issues/11264/events | https://github.com/huggingface/transformers/issues/11264 | 858,735,802 | MDU6SXNzdWU4NTg3MzU4MDI= | 11,264 | Multi-Workers distributed training | {
"login": "Gforky",
"id": 4157614,
"node_id": "MDQ6VXNlcjQxNTc2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gforky",
"html_url": "https://github.com/Gforky",
"followers_url": "https://api.github.com/users/Gforky/followers",
"following_url": "https://api.github.com/users/Gforky/following{/other_user}",
"gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gforky/subscriptions",
"organizations_url": "https://api.github.com/users/Gforky/orgs",
"repos_url": "https://api.github.com/users/Gforky/repos",
"events_url": "https://api.github.com/users/Gforky/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gforky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, here are a few resources to get you started:\r\n- [Examples](https://github.com/huggingface/transformers/tree/master/examples)\r\n- [Docs on distributed training](https://huggingface.co/transformers/examples.html#distributed-training-and-mixed-precision)",
"> Hi, here are a few resources to get you started:\r\n> \r\n> * [Examples](https://github.com/huggingface/transformers/tree/master/examples)\r\n> * [Docs on distributed training](https://huggingface.co/transformers/examples.html#distributed-training-and-mixed-precision)\r\n\r\nGot it, after I get familiar with pytorch, this problem solved😀"
] | 1,618 | 1,619 | 1,619 | NONE | null | Hi, does transformers support multi-worker distributed training for BERT fine-tuning? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11264/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11263 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11263/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11263/comments | https://api.github.com/repos/huggingface/transformers/issues/11263/events | https://github.com/huggingface/transformers/issues/11263 | 858,595,826 | MDU6SXNzdWU4NTg1OTU4MjY= | 11,263 | TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect | {
"login": "deadsoul44",
"id": 31016182,
"node_id": "MDQ6VXNlcjMxMDE2MTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/31016182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/deadsoul44",
"html_url": "https://github.com/deadsoul44",
"followers_url": "https://api.github.com/users/deadsoul44/followers",
"following_url": "https://api.github.com/users/deadsoul44/following{/other_user}",
"gists_url": "https://api.github.com/users/deadsoul44/gists{/gist_id}",
"starred_url": "https://api.github.com/users/deadsoul44/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/deadsoul44/subscriptions",
"organizations_url": "https://api.github.com/users/deadsoul44/orgs",
"repos_url": "https://api.github.com/users/deadsoul44/repos",
"events_url": "https://api.github.com/users/deadsoul44/events{/privacy}",
"received_events_url": "https://api.github.com/users/deadsoul44/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, please provide all the information required in the template so that we may help you. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | Probably due to assertion, traceback is lost and I cannot debug the code.
C:\Users\m00596504\.virtualenvs\porn_tr\lib\site-packages\transformers\modeling_utils.py:1759: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert all(
Process finished with exit code -1073741819 (0xC0000005) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11263/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11262 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11262/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11262/comments | https://api.github.com/repos/huggingface/transformers/issues/11262/events | https://github.com/huggingface/transformers/issues/11262 | 858,529,625 | MDU6SXNzdWU4NTg1Mjk2MjU= | 11,262 | Failed to import transformers | {
"login": "notooth1",
"id": 61880277,
"node_id": "MDQ6VXNlcjYxODgwMjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/61880277?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/notooth1",
"html_url": "https://github.com/notooth1",
"followers_url": "https://api.github.com/users/notooth1/followers",
"following_url": "https://api.github.com/users/notooth1/following{/other_user}",
"gists_url": "https://api.github.com/users/notooth1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/notooth1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/notooth1/subscriptions",
"organizations_url": "https://api.github.com/users/notooth1/orgs",
"repos_url": "https://api.github.com/users/notooth1/repos",
"events_url": "https://api.github.com/users/notooth1/events{/privacy}",
"received_events_url": "https://api.github.com/users/notooth1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this is the same issue as https://github.com/huggingface/tokenizers/issues/585",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am struggling with the same issue. Have you solved the problem?",
"> \r\n> \r\n> I am struggling with the same issue. Have you solved the problem?\r\n\r\nuse pip instead of conda:\r\n```\r\nconda uninstall tokenizers, transformers\r\npip install transformers\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Solved this issue by downgrading to python 3.6 and conda 4.6.14",
"Solved this by downgrading from python 3.8 to 3.7",
"Solved this by doing `pip install pytorch-transformers` and then reload the notebook/application. I keep my python version 3.7.",
"> Solved this by doing `pip install pytorch-transformers` and then reload the notebook/application. I keep my python version 3.7.\r\n\r\ndidn't work for me :(, details: https://github.com/huggingface/transformers/issues/15062",
"Maybe your numpy version is too low, try again after updating",
"> Maybe your numpy version is too low, try again after updating\r\n\r\npip install numpy==1.24.2 works",
"@kyxyxn how did you download python 3.6 . I am using colab and unable to downgrade the version. Any help is very appreciated.",
"> @kyxyxn how did you download python 3.6 . I am using colab and unable to downgrade the version. Any help is very appreciated.\r\n\r\nconda install python==3.6"
] | 1,618 | 1,704 | 1,624 | NONE | null | I got this error when importing transformers. Please help.
My system is Debian 10, Anaconda3.
```
$ python
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import pipeline
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/__init__.py", line 2487, in __getattr__
return super().__getattr__(name)
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/file_utils.py", line 1699, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/__init__.py", line 2481, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/notooth/anaconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 24, in <module>
from ..modelcard import ModelCard
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/modelcard.py", line 31, in <module>
from .models.auto.configuration_auto import ALL_PRETRAINED_CONFIG_ARCHIVE_MAP
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module>
from .tokenization_layoutlm import LayoutLMTokenizer
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module>
from ..bert.tokenization_bert import BertTokenizer
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module>
from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/home/notooth/anaconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 68, in <module>
from tokenizers import AddedToken
File "/home/notooth/anaconda3/lib/python3.8/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/notooth/anaconda3/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-x86_64-linux-gnu.so)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11262/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11261 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11261/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11261/comments | https://api.github.com/repos/huggingface/transformers/issues/11261/events | https://github.com/huggingface/transformers/issues/11261 | 858,510,743 | MDU6SXNzdWU4NTg1MTA3NDM= | 11,261 | --sharded_ddp "zero_dp_3 offload" fails with AssertionError | {
"login": "chitkwan",
"id": 22551285,
"node_id": "MDQ6VXNlcjIyNTUxMjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/22551285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chitkwan",
"html_url": "https://github.com/chitkwan",
"followers_url": "https://api.github.com/users/chitkwan/followers",
"following_url": "https://api.github.com/users/chitkwan/following{/other_user}",
"gists_url": "https://api.github.com/users/chitkwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chitkwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chitkwan/subscriptions",
"organizations_url": "https://api.github.com/users/chitkwan/orgs",
"repos_url": "https://api.github.com/users/chitkwan/repos",
"events_url": "https://api.github.com/users/chitkwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/chitkwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As replied on the forums, you should rather use `--deepspeed` for Zero-offload. We will investigate this bug, but there is another one for the gradient scaler that will block you either way.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-5.4.0-1043-aws-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: 8 x A100 (AWS p4d.24xlarge)
- Using distributed or parallel set-up in script?: python -m torch.distributed.launch
### Who can help
Library:
- deepspeed: @stas00
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): roberta-base
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I want to perform distributed training using the example `run_mlm.py` script on the wikitext dataset. Specifically, I'm trying to use sharded_ddp zero_dp_3 (i.e., fairscale) **with offloading enabled**. When I run _without_ offloading, it works. But if I use the "offload" option, an AssertionError is thrown, as shown in the stack trace below.
Steps to reproduce the behavior:
1. Install fairscale
pip install fairscale==0.3.4
2. Run the example run_mlm.py as follows:
export OMP_NUM_THREADS=11;
export TOKENIZERS_PARALLELISM=true;
python -m torch.distributed.launch --nproc_per_node=8 run_mlm.py --model_name_or_path roberta-base \
--use_fast_tokenizer \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--do_train --do_eval --num_train_epochs 5 \
--output_dir ./experiments/wikitext --sharded_ddp "zero_dp_3 offload" --fp16
```
Traceback (most recent call last):
  File "run_mlm.py", line 492, in <module>
    main()
  File "run_mlm.py", line 458, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
  File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1120, in train
    tr_loss += self.training_step(model, inputs)
  File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1522, in training_step
    loss = self.compute_loss(model, inputs)
  File "/home/me/ve/lib/python3.6/site-packages/transformers/trainer.py", line 1556, in compute_loss
    outputs = model(**inputs)
  File "/home/me/ve/lib/python3.6/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 902, in forward
    self._lazy_init()
  File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 739, in _lazy_init
    self._init_param_attributes(p)
  File "/home/me/ve/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/me/ve/lib/python3.6/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 796, in _init_param_attributes
    assert p._fp32_shard.device == torch.device("cpu")
AssertionError
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
It should proceed to train.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11261/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11260 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11260/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11260/comments | https://api.github.com/repos/huggingface/transformers/issues/11260/events | https://github.com/huggingface/transformers/issues/11260 | 858,420,355 | MDU6SXNzdWU4NTg0MjAzNTU= | 11,260 | About pre-trained model : facebook/wav2vec2-large-xlsr-53 & facebook/wav2vec2-base | {
"login": "LifaSun",
"id": 6188893,
"node_id": "MDQ6VXNlcjYxODg4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LifaSun",
"html_url": "https://github.com/LifaSun",
"followers_url": "https://api.github.com/users/LifaSun/followers",
"following_url": "https://api.github.com/users/LifaSun/following{/other_user}",
"gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions",
"organizations_url": "https://api.github.com/users/LifaSun/orgs",
"repos_url": "https://api.github.com/users/LifaSun/repos",
"events_url": "https://api.github.com/users/LifaSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/LifaSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The xlsr model has no vocab, you need to build the processor yourself",
"https://huggingface.co/blog/fine-tune-xlsr-wav2vec2\r\nYou could check out this blogpost, there is it explained very well",
"@flozi00 Thank you very much! Processor is built. \r\n\r\nActually, I want to visualize the output of hidden layer in the wav2vec model before fine-funing. It seems the output of wav2vec2-base is normal, but the output of wav2vec2-large-xlsr-53 is not. The results are attached (x axis: time, y axis: hidden units). \r\n\r\nThe output of hidden layer using wav2vec2-base\r\n\r\n\r\nThe output of hidden layer using wav2vec2-large-xlsr-53\r\n\r\n\r\nCould you explain it? Thank you!\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | Hi,
I am trying to load the pre-trained, not fine-tuned, wav2vec2 models to extract some features. The models 'wav2vec2-base' and 'wav2vec2-large-xlsr-53' are both un-fine-tuned, so why are the files in these two repositories not exactly the same?
https://huggingface.co/facebook/wav2vec2-base/tree/main
https://huggingface.co/facebook/wav2vec2-large-xlsr-53/tree/main
'wav2vec2-base' can be loaded smoothly with the snippet below, but the same calls don't work for 'wav2vec2-large-xlsr-53':
```py
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").to('cuda')
```
@patrickvonplaten
Thank you very much!
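As a follow-up note for readers: since the xlsr-53 checkpoint ships no vocabulary (see the first comment above), hidden states can be extracted with just the feature extractor and the bare model. A minimal sketch, assuming the checkpoint provides a feature-extractor config (otherwise construct `Wav2Vec2FeatureExtractor()` with its defaults):
```python
# Sketch only: extract hidden states from the un-fine-tuned xlsr-53 checkpoint
# without a tokenizer/vocab, using only the feature extractor and the bare model.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53").eval()

speech = [0.0] * 16000  # placeholder: one second of 16 kHz audio samples
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(inputs.input_values).last_hidden_state  # (1, frames, 1024)
```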
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11260/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11259 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11259/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11259/comments | https://api.github.com/repos/huggingface/transformers/issues/11259/events | https://github.com/huggingface/transformers/issues/11259 | 858,395,431 | MDU6SXNzdWU4NTgzOTU0MzE= | 11,259 | [Benchmark] | {
"login": "juhyeok123",
"id": 76256649,
"node_id": "MDQ6VXNlcjc2MjU2NjQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/76256649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juhyeok123",
"html_url": "https://github.com/juhyeok123",
"followers_url": "https://api.github.com/users/juhyeok123/followers",
"following_url": "https://api.github.com/users/juhyeok123/following{/other_user}",
"gists_url": "https://api.github.com/users/juhyeok123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juhyeok123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juhyeok123/subscriptions",
"organizations_url": "https://api.github.com/users/juhyeok123/orgs",
"repos_url": "https://api.github.com/users/juhyeok123/repos",
"events_url": "https://api.github.com/users/juhyeok123/events{/privacy}",
"received_events_url": "https://api.github.com/users/juhyeok123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11259/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11258 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11258/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11258/comments | https://api.github.com/repos/huggingface/transformers/issues/11258/events | https://github.com/huggingface/transformers/pull/11258 | 858,271,915 | MDExOlB1bGxSZXF1ZXN0NjE1NTUzMzI5 | 11,258 | Support for set_epoch in IterableDataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | COLLABORATOR | null | # What does this PR do?
I merged #11254 a bit too fast and forgot to actually call the `set_epoch` method in the main training loop at the beginning of each epoch.
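To illustrate what this enables (an editorial sketch, not code from the PR), an iterable training set can expose a `set_epoch` hook so that each epoch reshuffles deterministically when the Trainer calls it:
```python
# Hypothetical example dataset whose shuffling depends on the current epoch.
import torch
from torch.utils.data import IterableDataset

class ShuffledIterableDataset(IterableDataset):
    def __init__(self, data):
        self.data = list(data)
        self.epoch = 0

    def set_epoch(self, epoch):
        # Called once per epoch so every epoch (and every process) reseeds consistently.
        self.epoch = epoch

    def __iter__(self):
        g = torch.Generator()
        g.manual_seed(self.epoch)
        for i in torch.randperm(len(self.data), generator=g).tolist():
            yield self.data[i]
```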
Also, it looks like the Datasets library will deal internally with the RNG logic by having a `set_epoch` method; this PR adds support for that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11258/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11258",
"html_url": "https://github.com/huggingface/transformers/pull/11258",
"diff_url": "https://github.com/huggingface/transformers/pull/11258.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11258.patch",
"merged_at": 1618486593000
} |
https://api.github.com/repos/huggingface/transformers/issues/11257 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11257/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11257/comments | https://api.github.com/repos/huggingface/transformers/issues/11257/events | https://github.com/huggingface/transformers/issues/11257 | 858,270,550 | MDU6SXNzdWU4NTgyNzA1NTA= | 11,257 | [Benchmark] | {
"login": "autumnX1515",
"id": 79848250,
"node_id": "MDQ6VXNlcjc5ODQ4MjUw",
"avatar_url": "https://avatars.githubusercontent.com/u/79848250?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/autumnX1515",
"html_url": "https://github.com/autumnX1515",
"followers_url": "https://api.github.com/users/autumnX1515/followers",
"following_url": "https://api.github.com/users/autumnX1515/following{/other_user}",
"gists_url": "https://api.github.com/users/autumnX1515/gists{/gist_id}",
"starred_url": "https://api.github.com/users/autumnX1515/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/autumnX1515/subscriptions",
"organizations_url": "https://api.github.com/users/autumnX1515/orgs",
"repos_url": "https://api.github.com/users/autumnX1515/repos",
"events_url": "https://api.github.com/users/autumnX1515/events{/privacy}",
"received_events_url": "https://api.github.com/users/autumnX1515/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"\n```",
"> # 🖥 Benchmarking `transformers`\n> \n> ## Benchmark\n> \n> Which part of `transformers` did you benchmark?\n> \n> ## Set-up\n> \n> What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?\n> \n> ## Results\n> \n> Put your results here!\n\n"
] | 1,618 | 1,618 | 1,618 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11257/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11256 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11256/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11256/comments | https://api.github.com/repos/huggingface/transformers/issues/11256/events | https://github.com/huggingface/transformers/issues/11256 | 858,223,942 | MDU6SXNzdWU4NTgyMjM5NDI= | 11,256 | Getting KeyError: 'loss' when fine-tuning model on a pre-trained MLM | {
"login": "neel04",
"id": 11617870,
"node_id": "MDQ6VXNlcjExNjE3ODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neel04",
"html_url": "https://github.com/neel04",
"followers_url": "https://api.github.com/users/neel04/followers",
"following_url": "https://api.github.com/users/neel04/following{/other_user}",
"gists_url": "https://api.github.com/users/neel04/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neel04/subscriptions",
"organizations_url": "https://api.github.com/users/neel04/orgs",
"repos_url": "https://api.github.com/users/neel04/repos",
"events_url": "https://api.github.com/users/neel04/events{/privacy}",
"received_events_url": "https://api.github.com/users/neel04/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @neel04. Regarding your code sample with simple reproducible data. I believe there are two errors here:\r\n- First, you've created your labels as a range from 1 to 20. You've set the model's `num_labels` to 20, but that, unfortunately, means it has a range of `[0, 19]`, therefore unable to satisfy the label 20. I would start the labels at 0.\r\n- Secondly, and more importantly, in your encoding method you're only tokenizing the source input, and doing nothing to your target input:\r\n\r\n```py\r\ndef tok(example):\r\n encodings = tokenizer(example['src'], truncation=True, padding=\"max_length\", max_length=10)\r\n return encodings\r\n\r\ntrain_encoded_dataset = train_dataset.map(tok, batched=True)\r\nval_encoded_dataset = val_dataset.map(tok, batched=True)\r\n```\r\n\r\nTherefore, here, your `train_encoded_dataset` and `val_encoded_dataset` contain dictionaries with the following keys: `input_ids` and `attention_mask`. There are no labels.\r\n\r\nYou could manually add your labels to your encoding by tweaking your `tok` method:\r\n\r\n```py\r\ndef tok(example):\r\n encodings = tokenizer(example['src'], truncation=True, padding=\"max_length\", max_length=10)\r\n encodings[\"labels\"] = example[\"tgt\"]\r\n return encodings\r\n\r\ntrain_encoded_dataset = train_dataset.map(tok, batched=True)\r\nval_encoded_dataset = val_dataset.map(tok, batched=True)\r\n```\r\n\r\nOtherwise, instead of naming your variable `tgt` inside the dataset, you could name it `labels` so that it's adequately named in your `Dataset` right away.\r\n\r\nI've been running your colab, but didn't run into an issue yet, I'm at step 2000. If I run into an issue, I'll try to see what's going on.\r\n\r\nHope that helps.",
"Thanx a ton for replying @LysandreJik !!! :hugs: :+1: \r\nAbout the labels, I think you may be right - I totally missed that point.\r\n\r\nsecondly, are you not seeing `tgt` in the `train_encoded_dataset` inside the repro? I do see it when printing it out :thinking: \r\n\r\n> I've been running your colab\r\n\r\ndo you mean you are re-training the LM? I already have it on model hub BTW - can you fine-tune that pre-trained model successfully? ",
"I'm sorry, you are correct, the `dataset` has the following attributes: `['attention_mask', 'input_ids', 'src', 'tgt']`. However, the model only cares about the `attention_mask` and `input_ids`. It also cares about the `labels`, which are absent in this case, hence why your code was failing.\r\n\r\nIf you want to have a look at what inputs the model needs, I encourage you to take a look at the docs; you're using `LongformerForSequenceClassification`, see the parameters it acepts [here](https://huggingface.co/transformers/model_doc/longformer.html#transformers.LongformerForSequenceClassification.forward).\r\n\r\nI did manage to run your code example, but I thought the colab would fail in a similar fashion. It seems it trained correctly, so that is not an issue.\r\n\r\nIs there anything else you need help with?",
"Thanx for the help! I am surprised why we need to add a `labels` attribute since we specify it when constructing the `Dataset` object - so it must be easy for HF to guess the numerical value and use it as labels accordingly. \r\n\r\nit does work for repro, so the issue does not remain now - but I would greatly appreciate if you can help me out! I am trying to train it on my normal data.\r\nI have used the `train_text_split` -er to split into NumPy arrays and am trying to pass it - but it still gives me the index error.\r\n\r\nRepro:\r\n```py\r\nDataset({\r\n features: ['attention_mask', 'input_ids', 'labels', 'src', 'tgt'],\r\n num_rows: 20\r\n})\r\n```\r\nMain dataset:\r\n```py\r\nDataset({\r\n features: ['attention_mask', 'input_ids', 'labels', 'src', 'tgt'],\r\n num_rows: 4572\r\n})\r\n```\r\nThere doesn't seem any surface difference; I checked the length of the mask and ids - they are as expected. checked 'labels', is numeric - doesn't cross 20.\r\n\r\nClearly, there is some problem with my input data. Could you give an idea about what the error might indicate is wrong with my data?\r\nMy input data is basically documents - long strings. they are cleaned thoroughly and are purely text. the only thing is that they are pretty long (sometimes longer than 2000 tokens). \r\nAny opinions on what the issue could be?",
"> I am surprised why we need to add a labels attribute since we specify it when constructing the Dataset object - so it must be easy for HF to guess the numerical value and use it as labels accordingly.\r\n\r\nAre you referencing the fact that we're passing the `Dataset` a `tgt` value during initialization? If so, then yes those are labels but since the model looks for the field `labels`, it will not look at `tgt`. If you define it as `labels` right off the bat, it should work!\r\n\r\nRegarding your second question, I fear I'm out of the loop. If you have an example which fail, that would be great, with a minimal reproducible code example, that would be even better!\r\n\r\nDo you have an idea of which document especially might cause an issue? Thank you.",
"I was able to repro the issue with this dummy dataset:\r\n```py\r\nimport numpy as np\r\ntrain_text = np.array(['lorem ipsum'*499]*20)\r\ntrain_label = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]\r\nval_text = np.array(['lorem ipsum'*499]*2)\r\nval_label = [1, 2]\r\n```\r\nWhat's your take on it? looks like a bug for high length strings - Even though they do seem padded and truncated via `datasets`. \r\n\r\n---\r\n\r\n**EDIT:-** This is the full error, in case you want something to refer to, instead of running your own code\r\n```py\r\n\r\nDownloading: 100%\r\n199M/199M [00:09<00:00, 21.2MB/s]\r\n\r\n\r\nSome weights of the model checkpoint at MalawiUniST/ISO6392.nya.ny were not used when initializing LongformerForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']\r\n- This IS expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of LongformerForSequenceClassification were not initialized from the model checkpoint at MalawiUniST/ISO6392.nya.ny and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n\r\n---------------------------------------------------------------------------\r\n\r\nIndexError Traceback (most recent call last)\r\n\r\n<ipython-input-9-d3bd01a1a0a7> in <module>()\r\n 46 )\r\n 47 \r\n---> 48 train_results = trainer.train()\r\n\r\n11 frames\r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)\r\n 1118 tr_loss += self.training_step(model, inputs)\r\n 1119 else:\r\n-> 1120 tr_loss += self.training_step(model, inputs)\r\n 1121 self._total_flos += float(self.floating_point_ops(inputs))\r\n 1122 \r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)\r\n 1522 loss = self.compute_loss(model, inputs)\r\n 1523 else:\r\n-> 1524 loss = self.compute_loss(model, inputs)\r\n 1525 \r\n 1526 if self.args.n_gpu > 1:\r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)\r\n 1554 else:\r\n 1555 labels = None\r\n-> 1556 outputs = model(**inputs)\r\n 1557 # Save past state if it exists\r\n 1558 # TODO: this needs to be fixed and made cleaner later.\r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 887 result = self._slow_forward(*input, **kwargs)\r\n 888 else:\r\n--> 889 result = self.forward(*input, **kwargs)\r\n 890 for hook in itertools.chain(\r\n 891 _global_forward_hooks.values(),\r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, head_mask, token_type_ids, position_ids, inputs_embeds, labels, output_attentions, 
output_hidden_states, return_dict)\r\n 1855 output_attentions=output_attentions,\r\n 1856 output_hidden_states=output_hidden_states,\r\n-> 1857 return_dict=return_dict,\r\n 1858 )\r\n 1859 sequence_output = outputs[0]\r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 887 result = self._slow_forward(*input, **kwargs)\r\n 888 else:\r\n--> 889 result = self.forward(*input, **kwargs)\r\n 890 for hook in itertools.chain(\r\n 891 _global_forward_hooks.values(),\r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, head_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict)\r\n 1662 \r\n 1663 embedding_output = self.embeddings(\r\n-> 1664 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds\r\n 1665 )\r\n 1666 \r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 887 result = self._slow_forward(*input, **kwargs)\r\n 888 else:\r\n--> 889 result = self.forward(*input, **kwargs)\r\n 890 for hook in itertools.chain(\r\n 891 _global_forward_hooks.values(),\r\n\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)\r\n 491 if inputs_embeds is None:\r\n 492 inputs_embeds = self.word_embeddings(input_ids)\r\n--> 493 position_embeddings = self.position_embeddings(position_ids)\r\n 494 token_type_embeddings = self.token_type_embeddings(token_type_ids)\r\n 495 \r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)\r\n 887 result = self._slow_forward(*input, **kwargs)\r\n 888 else:\r\n--> 889 result = self.forward(*input, **kwargs)\r\n 890 for hook in itertools.chain(\r\n 891 _global_forward_hooks.values(),\r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)\r\n 156 return F.embedding(\r\n 157 input, self.weight, self.padding_idx, self.max_norm,\r\n--> 158 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n 159 \r\n 160 def extra_repr(self) -> str:\r\n\r\n/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 1914 # remove once script supports set_grad_enabled\r\n 1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 1917 \r\n 1918 \r\n\r\nIndexError: index out of range in self\r\n\r\n\r\n```",
"Thank you for the clear reproducible example and the error stack trace.\r\n\r\nThat's curious, it does not break on my machine; running the following code, which is a concatenation of the sample you've just given me regarding the dataset and the training code of your initial issue description:\r\n\r\n```py\r\nimport numpy as np\r\ntrain_text = np.array(['lorem ipsum'*499]*20)\r\ntrain_label = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]\r\nval_text = np.array(['lorem ipsum'*499]*2)\r\nval_label = [1, 2]\r\n\r\nfrom datasets import Dataset\r\n\r\ntrain_dataset = Dataset.from_dict({'src': train_text, 'tgt': train_label})\r\nval_dataset = Dataset.from_dict({'src': val_text, 'tgt': val_label})\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"MalawiUniST/ISO6392.nya.ny\", use_fast=True, truncation=True, padding=True,\r\n max_length=10) # try fast=False\r\n\r\n\r\ndef tok(example):\r\n encodings = tokenizer(example['src'], truncation=True, padding=\"max_length\", max_length=10)\r\n encodings[\"labels\"] = example[\"tgt\"]\r\n return encodings\r\n\r\n\r\ntrain_encoded_dataset = train_dataset.map(tok, batched=True)\r\nval_encoded_dataset = val_dataset.map(tok, batched=True)\r\n\r\nprint(train_encoded_dataset)\r\n\r\nfrom transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification\r\n\r\nfrom sklearn.metrics import accuracy_score, precision_recall_fscore_support\r\n\r\ndef compute_metrics(pred):\r\n labels = pred.label_ids\r\n preds = pred.predictions.argmax(-1)\r\n precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted',zero_division=1) #none gives score for each class\r\n acc = accuracy_score(labels, preds)\r\n return {\r\n 'accuracy': acc,\r\n 'f1': f1,\r\n 'precision': precision,\r\n 'recall': recall\r\n }\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='content/results/', # output directory\r\n overwrite_output_dir = True,\r\n num_train_epochs=16, # total number of training epochs\r\n per_device_train_batch_size=32, # batch size per device during training\r\n per_device_eval_batch_size=32, # batch size for evaluation\r\n warmup_steps=600, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='content/logs', # directory for storing logs\r\n logging_steps=10,\r\n evaluation_strategy='epoch',\r\n learning_rate=1e-6,\r\n #fp16 = True,\r\n load_best_model_at_end = True,\r\n metric_for_best_model = 'eval_loss',\r\n greater_is_better = False,\r\n seed = 101,\r\n save_total_limit=5,\r\n)\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"MalawiUniST/ISO6392.nya.ny\", num_labels=20)\r\n\r\ntrainer = Trainer(\r\n model=model, # the instantiated Transformers model to be trained\r\n args=training_args, # training arguments, defined above\r\n train_dataset=train_encoded_dataset, # training dataset\r\n eval_dataset=val_encoded_dataset, # evaluation dataset\r\n compute_metrics=compute_metrics,\r\n tokenizer=tokenizer\r\n )\r\n\r\ntrain_results = trainer.train()\r\n```\r\n\r\nHowever, in your error, I understand that there's an `IndexError` happening with the position embeddings, the interesting line being this one:\r\n```\r\n--> 493 position_embeddings = self.position_embeddings(position_ids)\r\n```\r\nThis is very frequently an issue with padding/truncation, as you have correctly identified. 
I can indeed reproduce if I remove the notion of padding/truncation from your tokenizer call in your `tok` method:\r\n\r\n```diff\r\ndef tok(example):\r\n- encodings = tokenizer(example['src'], truncation=True, padding=\"max_length\", max_length=10)\r\n+ encodings = tokenizer(example['src'])\r\n encodings[\"labels\"] = example[\"tgt\"]\r\n return encodings\r\n```",
"I think it is my fault :sweat_smile: I had changed `max_length=10` to ` max_length=2000` which is the appropriate length it was intended for and pre-trained on. Maybe that's why it ran on your machine, but failed on Colab?\r\n\r\nAbout the padding/truncation indeed, I am using the way it's marked in red - and can confirm that the length of each `attention mask` is 2000, along with the `input_ids`. since that's the case for samples, the only conclusion is that I am indeed padding and truncating the sequences.\r\n\r\nSo the last point for the error is the `max_length` - which can't be 2000. in the LM (accessible via the colab link in OP, fully reproducible end-to-end example) the tokenizer construction is like this:-\r\n```py\r\nmodel_checkpoint = \"allenai/longformer-base-4096\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True, max_length=2000)\r\n```\r\nwhich does specify 2000 to be the maximum length. I can probably try re-training the model, but it doesn't make sense If I can't find why (and how) the error originates and what changes to make. \r\n\r\nAny suspicions?\r\n",
"With a little trial and error, I got `max_length=500` to be the maximum I can use - losing out a lot of information :face_with_head_bandage: This seems like a very weird bug. Everything seems normal, but its always errors after crossing 500",
"Is it failing after crossing 500 or 512? It's possible that there's a rogue max length of 512 (which obviously shouldn't be here!)\r\n\r\nSurprisingly I'm having issues reproducing the error with your maximum length of 2000, which doesn't crash on my side either (as it shouldn't!)\r\n\r\nDo you have an example I can run locally which fails with length > 500?",
"yep, it def gets an error in Colab with CPU. for making sure repro changes, I will put the whole thing in here (with the dummy data):-\r\n```py\r\nimport numpy as np\r\ntrain_text = np.array(['lorem ipsum'*499]*20)\r\ntrain_label = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]\r\nval_text = np.array(['lorem ipsum'*499]*2)\r\nval_label = [1, 2]\r\n\r\nMAX_LENGTH = 2000\r\n\r\n!pip install -q transformers\r\n!pip install -q datasets\r\nimport transformers\r\ntransformers.__version__\r\n\r\nfrom datasets import Dataset\r\ntrain_dataset = Dataset.from_dict({'src':train_text, 'tgt':train_label})\r\nval_dataset = Dataset.from_dict({'src':val_text, 'tgt':val_label})\r\nfrom transformers import AutoTokenizer\r\n \r\ntokenizer = AutoTokenizer.from_pretrained(\"MalawiUniST/ISO6392.nya.ny\", use_fast=True, truncation=True, padding=True, max_length=MAX_LENGTH) #try fast=False\r\n\r\ndef tok(example):\r\n encodings = tokenizer(example['src'], truncation=True, padding=True, max_length=MAX_LENGTH)\r\n encodings[\"labels\"] = example[\"tgt\"] #Try removing this line\r\n return encodings\r\n\r\nlen(train_encoded_dataset['attention_mask'][0])\r\n\r\nfrom transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification\r\n\r\nfrom sklearn.metrics import accuracy_score, precision_recall_fscore_support\r\n\r\ndef compute_metrics(pred):\r\n labels = pred.label_ids\r\n preds = pred.predictions.argmax(-1)\r\n precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted',zero_division=1) #none gives score for each class\r\n acc = accuracy_score(labels, preds)\r\n return {\r\n 'accuracy': acc,\r\n 'f1': f1,\r\n 'precision': precision,\r\n 'recall': recall\r\n }\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir='/content/results/', # output directory\r\n overwrite_output_dir = True,\r\n num_train_epochs=16, # total number of training epochs\r\n per_device_train_batch_size=32, # batch size per device during training\r\n per_device_eval_batch_size=32, # batch size for evaluation\r\n warmup_steps=600, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir='/content/logs', # directory for storing logs\r\n logging_steps=10,\r\n evaluation_strategy='epoch',\r\n learning_rate=1e-6,\r\n #fp16 = True,\r\n load_best_model_at_end = True,\r\n metric_for_best_model = 'eval_loss',\r\n greater_is_better = False,\r\n seed = 101,\r\n save_total_limit=5,\r\n)\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\"MalawiUniST/ISO6392.nya.ny\", num_labels=20)\r\n\r\ntrainer = Trainer(\r\n model=model, # the instantiated Transformers model to be trained\r\n args=training_args, # training arguments, defined above\r\n train_dataset=train_encoded_dataset, # training dataset\r\n eval_dataset=val_encoded_dataset, # evaluation dataset\r\n compute_metrics=compute_metrics,\r\n tokenizer=tokenizer\r\n )\r\n\r\ntrain_results = trainer.train()\r\n```\r\nshould reproduce the error on pasting straightaway! :+1: \r\n\r\n---\r\n\r\n**EDIT:-** yep, you are right - using `max_length` as `513` gets an error, and 512 doesn't. 
I am using Longformer here - so the whole situation becomes tricky :thinking: By default, `Longformer-base-4096` should get `4096` as the max_length.\r\n\r\nWhen pre-training the LM, this is the snippet for initializing the model from scratch:\r\n```py\r\nfrom transformers import LongformerForMaskedLM\r\nfrom transformers import LongformerConfig\r\n\r\nconfig = LongformerConfig(\r\n vocab_size=52_000,\r\n max_position_embeddings=514,\r\n num_attention_heads=2,\r\n num_hidden_layers=1,\r\n type_vocab_size=1,\r\n)\r\n\r\nmodel = LongformerForMaskedLM(config=config)\r\n```\r\ntokenizer too is gotten properly:\r\n```py\r\nmodel_checkpoint = \"allenai/longformer-base-4096\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True, max_length=2000)\r\n```\r\nVery strange.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | ## Environment info
- `transformers` version: 4.5.1
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: Depends - CPU for debugging
- Using distributed or parallel set-up in script?: False
### Who can help
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- tokenizers: @LysandreJik
## Information
Model I am using (Bert, XLNet ...): Longformer (custom upload on Model Hub)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I created a LM model and uploaded it to Huggingface Model Hub via [this colab notebook](https://colab.research.google.com/drive/153754DbFXRhKdHvjdSUUp9VSB5JqtZwX?usp=sharing)
But when fine-tuning the model on simple reproducible data, I get:-
```py
%%bash
pip install -q transformers
pip install -q datasets
import numpy as np
train_text = np.array(['a foxy', 'b ball', 'c cats r bad', 'as das', 'sagha','asdfsd','asd','ad','aets','hsdg','reya','arey','areyareh','yui','aEWY','DSH','ASUYH','ASFH','ASDFHG','OOO'], dtype='<U5280')
train_label = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
val_text = np.array(['a foxy', 'r c cats'], dtype='<U5280')
val_label = [1, 2]
from datasets import Dataset
train_dataset = Dataset.from_dict({'src':train_text, 'tgt':train_label})
val_dataset = Dataset.from_dict({'src':val_text, 'tgt':val_label})
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MalawiUniST/ISO6392.nya.ny", use_fast=True, truncation=True, padding=True, max_length=10) #try fast=False
def tok(example):
encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=10)
return encodings
train_encoded_dataset = train_dataset.map(tok, batched=True)
val_encoded_dataset = val_dataset.map(tok, batched=True)
from transformers import Trainer, TrainingArguments, AutoModelForSequenceClassification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted',zero_division=1) #none gives score for each class
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
training_args = TrainingArguments(
output_dir='/content/results/', # output directory
overwrite_output_dir = True,
num_train_epochs=16, # total number of training epochs
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
warmup_steps=600, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='/content/logs', # directory for storing logs
logging_steps=10,
evaluation_strategy='epoch',
learning_rate=1e-6,
#fp16 = True,
load_best_model_at_end = True,
metric_for_best_model = 'eval_loss',
greater_is_better = False,
seed = 101,
save_total_limit=5,
)
model = AutoModelForSequenceClassification.from_pretrained("MalawiUniST/ISO6392.nya.ny", num_labels=20)
trainer = Trainer(
model=model, # the instantiated Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_encoded_dataset, # training dataset
eval_dataset=val_encoded_dataset, # evaluation dataset
compute_metrics=compute_metrics,
tokenizer=tokenizer
)
train_results = trainer.train()
```
This error:-
```
Some weights of the model checkpoint at MalawiUniST/ISO6392.nya.ny were not used when initializing LongformerForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
- This IS expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LongformerForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LongformerForSequenceClassification were not initialized from the model checkpoint at MalawiUniST/ISO6392.nya.ny and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-113-a2ff149dfd3d> in <module>()
46 )
47
---> 48 train_results = trainer.train()
3 frames
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1118 tr_loss += self.training_step(model, inputs)
1119 else:
-> 1120 tr_loss += self.training_step(model, inputs)
1121 self._total_flos += float(self.floating_point_ops(inputs))
1122
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
1522 loss = self.compute_loss(model, inputs)
1523 else:
-> 1524 loss = self.compute_loss(model, inputs)
1525
1526 if self.args.n_gpu > 1:
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1564 else:
1565 # We don't use .loss here since the model may return tuples instead of ModelOutput.
-> 1566 loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
1567
1568 return (loss, outputs) if return_outputs else loss
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k)
1614 if isinstance(k, str):
1615 inner_dict = {k: v for (k, v) in self.items()}
-> 1616 return inner_dict[k]
1617 else:
1618 return self.to_tuple()[k]
KeyError: 'loss'
```
From sgugger's reply [here on forums](https://discuss.huggingface.co/t/key-error-loss-while-fine-tuning-gpt-2-with-the-trainer-utility/2861/4?u=neel-gupta) it seems that one strong cause is when the labels aren't present (even though they certainly are upon printing it out)
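For readers landing here, the resolution that surfaced in the comments above was to expose the targets under the `labels` key the model expects. A minimal variant of the `tok` function from the snippet above (names follow that snippet):
```py
def tok(example):
    encodings = tokenizer(example['src'], truncation=True, padding="max_length", max_length=10)
    encodings["labels"] = example["tgt"]  # the Trainer/model look for "labels", not "tgt"
    return encodings
```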
This seems like a bug and the code is reproducible on Colab. Any ideas for possible workarounds? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11256/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11255 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11255/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11255/comments | https://api.github.com/repos/huggingface/transformers/issues/11255/events | https://github.com/huggingface/transformers/issues/11255 | 858,145,493 | MDU6SXNzdWU4NTgxNDU0OTM= | 11,255 | Big Bird generate() "local variable 'next_tokens' referenced before assignment" | {
"login": "OscarGarciaF",
"id": 38294826,
"node_id": "MDQ6VXNlcjM4Mjk0ODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/38294826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OscarGarciaF",
"html_url": "https://github.com/OscarGarciaF",
"followers_url": "https://api.github.com/users/OscarGarciaF/followers",
"following_url": "https://api.github.com/users/OscarGarciaF/following{/other_user}",
"gists_url": "https://api.github.com/users/OscarGarciaF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OscarGarciaF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OscarGarciaF/subscriptions",
"organizations_url": "https://api.github.com/users/OscarGarciaF/orgs",
"repos_url": "https://api.github.com/users/OscarGarciaF/repos",
"events_url": "https://api.github.com/users/OscarGarciaF/events{/privacy}",
"received_events_url": "https://api.github.com/users/OscarGarciaF/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @vasudevgupta7 ",
"@OscarGarciaF, i will need more details before i could see your issue. Are you using google/bigbird-roberta-base with EncoderDecoderModel for summarization? It would be great if you can share your code. ",
"@vasudevgupta7 I was using AutoModelForSeq2SeqLM (this is what you use for summarization right?)\r\n\r\nI have now changed to EncoderDecoderModel but now I face a new error\r\n\r\n```\r\n 1 input = tokens[0:1, :].to(device)\r\n----> 2 generated = model_sum.generate(input, decoder_start_token_id = model_sum.config.decoder.pad_token_id, max_length = 512, num_beams = 4, early_stopping = True)\r\n\r\n10 frames\r\n/usr/local/lib/python3.7/dist-packages/transformers/models/big_bird/modeling_big_bird.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)\r\n 293 \r\n 294 position_embeddings = self.position_embeddings(position_ids)\r\n--> 295 embeddings += position_embeddings\r\n 296 \r\n 297 embeddings = self.dropout(embeddings)\r\n\r\nRuntimeError: output with shape [4, 1, 768] doesn't match the broadcast shape [4, 0, 768]\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Any updates on this? I get the exact same error when running generate on EncoderDecoderModel. \r\n\r\n`RuntimeError: output with shape [1, 1, 768] doesn't match the broadcast shape [1, 0, 768]`\r\n\r\nWhen I remove padding from the input_ids the error goes away but I think this is a bug of some sort.",
"I also got exact same error (`output with shape...`) when i generate on **custom** BigBird model. I fixed it by reducing `model_max_length` value from 4096 to 4094 and afterwards i can use pipeline for inference without any problem.\r\n\r\n```\r\n>>> tokenizer.max_len_single_sentence\r\n4094\r\n>>> tokenizer.model_max_length\r\n4096\r\n>>> tokenizer.model_max_length = 4094\r\n```"
] | 1,618 | 1,668 | 1,621 | NONE | null | I am facing this problem when doing text summarization. I am using google/bigbird-roberta-base and I get the following error when calling model.generate(input, max_length = 4096, num_beams=4, early_stopping=True, length_penalty = 0.8):
```
Input length of input_ids is 4096, but ``max_length`` is set to 4096.This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``.
---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-13-90a633800ba7> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', ' \ni = 0\nsize = 1\nout = []\nend = False\nprint_iters = 100\nsave_iters = 5\n \nwhile True:\n if (i+size) >= n:\n last = n\n end = True\n else:\n last = i + size \n \n result = make_gen( model_sum, tokens[i:last, :].detach().clone() )\n \n for j in range(result.shape[0]):\n out.append(result[j])\n \n if last % (print_iters*size) == 0:\n print(last)\n gc.collect()\n torch.cuda.empty_cache()\n torch.cuda.synchronize()\n if last % (print_iters*size*save_iters) == 0:\n with open(path_output + name + ".pkl", \'wb\') as f:\n pickle.dump(out, f)\n print("Saved to disk")\n \n if end:\n break\n i = last')
6 frames
<decorator-gen-53> in time(self, line, cell, local_ns)
<timed exec> in <module>()
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in beam_search(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, **model_kwargs)
1808
1809 sequence_outputs = beam_scorer.finalize(
-> 1810 input_ids, beam_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id
1811 )
1812
UnboundLocalError: local variable 'next_tokens' referenced before assignment
```
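As context for later readers, the route suggested in the comments above is an `EncoderDecoderModel` warm-started from the BigBird checkpoint. A rough sketch follows; such a model still has to be fine-tuned for summarization before `generate()` produces useful output:
```python
# Sketch of the EncoderDecoderModel route mentioned in the comments (not from the
# original report). The warm-started model is untrained as a summarizer.
from transformers import BigBirdTokenizer, EncoderDecoderModel

tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/bigbird-roberta-base", "google/bigbird-roberta-base"
)
# generate() needs these set explicitly for an encoder-decoder built this way.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("a long document ...", return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(
    inputs.input_ids,
    max_length=256,   # length budget for the generated summary, not the input
    num_beams=4,
    early_stopping=True,
)
```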
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11255/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11254 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11254/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11254/comments | https://api.github.com/repos/huggingface/transformers/issues/11254/events | https://github.com/huggingface/transformers/pull/11254 | 858,144,735 | MDExOlB1bGxSZXF1ZXN0NjE1NDQ5MTI4 | 11,254 | Trainer iterable dataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | COLLABORATOR | null | # What does this PR do?
This PR adds full support for an `IterableDataset` training set in the main Trainer (just the training set; evaluation/prediction will require way more work). Up until now, the Trainer kind of supported training datasets that are instances of `IterableDataset`, but in a distributed setting, the training would be on the same data on all processes, which is... not ideal. This PR fixes that and adds some tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11254/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11254",
"html_url": "https://github.com/huggingface/transformers/pull/11254",
"diff_url": "https://github.com/huggingface/transformers/pull/11254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11254.patch",
"merged_at": 1618434147000
} |
https://api.github.com/repos/huggingface/transformers/issues/11253 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11253/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11253/comments | https://api.github.com/repos/huggingface/transformers/issues/11253/events | https://github.com/huggingface/transformers/pull/11253 | 858,142,240 | MDExOlB1bGxSZXF1ZXN0NjE1NDQ3MTI3 | 11,253 | New TF examples | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
},
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
},
{
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
}
] | [
"I opened a new branch and PR at #11360 to avoid dealing with rebasing after the folder structure was changed around"
] | 1,618 | 1,651 | 1,619 | MEMBER | null | Opening a PR to get some feedback on the new TF example style before I write the rest.
Don't merge it yet, I haven't even finalized the filenames! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11253/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11253",
"html_url": "https://github.com/huggingface/transformers/pull/11253",
"diff_url": "https://github.com/huggingface/transformers/pull/11253.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11253.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11252 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11252/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11252/comments | https://api.github.com/repos/huggingface/transformers/issues/11252/events | https://github.com/huggingface/transformers/pull/11252 | 858,122,084 | MDExOlB1bGxSZXF1ZXN0NjE1NDI5MzY3 | 11,252 | Fix for the issue of device-id getting hardcoded for token_type_ids during Tracing [WIP] | {
"login": "HamidShojanazeri",
"id": 9162336,
"node_id": "MDQ6VXNlcjkxNjIzMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9162336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HamidShojanazeri",
"html_url": "https://github.com/HamidShojanazeri",
"followers_url": "https://api.github.com/users/HamidShojanazeri/followers",
"following_url": "https://api.github.com/users/HamidShojanazeri/following{/other_user}",
"gists_url": "https://api.github.com/users/HamidShojanazeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HamidShojanazeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HamidShojanazeri/subscriptions",
"organizations_url": "https://api.github.com/users/HamidShojanazeri/orgs",
"repos_url": "https://api.github.com/users/HamidShojanazeri/repos",
"events_url": "https://api.github.com/users/HamidShojanazeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/HamidShojanazeri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@LysandreJik as discussed off-line, I would appreciate if you could help with re-opening the PR, thanks.",
"> Looks very cool! We can deploy the same fix on the other models.\r\n\r\nThanks @sgugger, once this can be merged, I will start the other models and submit separate PRs for them.",
"Great, thanks a lot @HamidShojanazeri. Before we merge; could you apply the same changes to all models affected by the `fix-copies` script in your PR? If we merge this as is, we'll have the implementation be partially supported for these models, which is unwanted.\r\n\r\nThank you!",
"@LysandreJik sure, I will update the affected models as well. ",
"> Great, thanks a lot @HamidShojanazeri. Before we merge; could you apply the same changes to all models affected by the `fix-copies` script in your PR? If we merge this as is, we'll have the implementation be partially supported for these models, which is unwanted.\r\n> \r\n> Thank you!\r\n\r\n@LysandreJik Updated!",
"Ran the GPU tests, works like a charm. As this gets implemented in other models, let's think of a test we can add similar to your snippet @HamidShojanazeri to ensure there is no regression.\r\n\r\nMerging!",
"Thanks @LysandreJik sure, will sync off-line.",
"This introduced an issue in our slow tests that I'm patching in https://github.com/huggingface/transformers/pull/12336",
"> This introduced an issue in our slow tests that I'm patching in #12336\r\n\r\nThank a lot! @LysandreJik."
] | 1,618 | 1,624 | 1,624 | CONTRIBUTOR | null | # What does this PR do?
Using the TorchScript trace API to convert HF models creates an issue where, during tracing, the device name/id gets hardcoded into some tensors. As a result, the position embedding or token type embedding fails when the model is loaded for inference on another device, because those tensors are tied to the device they were traced on (e.g. CPU, a specific GPU id). The issue arises whenever one needs to switch between devices, and especially for multi-GPU inference where the model is TorchScripted/traced.
For the BERT model, the device name gets hardcoded for token_type_ids; this was previously addressed for position_ids in [the merged PR](https://github.com/huggingface/transformers/pull/5773). This PR fixes the issue by registering a buffer for token_type_ids. Similar changes are required for other models as well; separate PRs will be submitted for them.
The following code snippet can be used to reproduce the issue and test the suggested fix.
```
import transformers
from pathlib import Path
import os
import json
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer, AutoModelForQuestionAnswering,
AutoModelForTokenClassification, AutoConfig)
device1 = torch.device('cuda') # this can be changed to cuda:0 in multi-gpu use-case
device2 = torch.device('cpu')# this can be changed to cuda:1 in multi-gpu use-case
model_name = 'bert-base-uncased'
config = AutoConfig.from_pretrained(model_name,num_labels=2,torchscript=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, config=config)
tokenizer = AutoTokenizer.from_pretrained(model_name,do_lower_case=True)
dummy_input = "This is a dummy input for torch jit trace"
max_length = 20
inputs = tokenizer.encode_plus(dummy_input,max_length = int(max_length),pad_to_max_length = True, add_special_tokens = True, return_tensors = 'pt')
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]
print('device1 {}, device2 {}'.format(device1,device2))
outputs = model(**inputs)
model.to(device1).eval()
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
traced_model = torch.jit.trace(model,(input_ids.to(device1),attention_mask.to(device1)))
torch.jit.save(traced_model, "bert.pt")
print("*************************** traced model graph on device 1 ***************************")
print(traced_model.graph)
print("\n")
loaded = torch.jit.load("bert.pt", map_location=device2)
print("\n")
print("*************************** model graph on loaded on device 2 ***************************")
print(loaded.graph)
outputs = loaded(input_ids.to(device2),attention_mask.to(device2))
print(outputs)
```
Error log :
[bert_gpu_to_cpu.logs.txt](https://github.com/huggingface/transformers/files/6312550/bert_gpu_to_cpu.logs.txt)
Fix log:
[bert_gpu_to_cpu_fixed.logs.txt](https://github.com/huggingface/transformers/files/6312560/bert_gpu_to_cpu_fixed.logs.txt)
Fixes # (issue)
The fix registers a buffer for token_type_ids in the constructor and then slices/expands it in the forward method based on the input shape.
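For illustration, a minimal, self-contained sketch of that pattern (the class and argument names below are placeholders, not the exact diff in this PR):
```python
import torch
from torch import nn


class EmbeddingsWithBufferedTokenTypeIds(nn.Module):
    """Minimal stand-in for BertEmbeddings that shows only the buffer trick."""

    def __init__(self, hidden_size=768, max_position_embeddings=512, type_vocab_size=2):
        super().__init__()
        self.token_type_embeddings = nn.Embedding(type_vocab_size, hidden_size)
        # Registering the default ids as a buffer means tracing records a tensor that moves
        # with the module (model.to(device)) instead of baking in a device-specific constant.
        self.register_buffer(
            "token_type_ids", torch.zeros((1, max_position_embeddings), dtype=torch.long)
        )

    def forward(self, input_ids, token_type_ids=None):
        batch_size, seq_length = input_ids.shape
        if token_type_ids is None:
            # Slice the buffer to the current sequence length and expand it to the batch size.
            token_type_ids = self.token_type_ids[:, :seq_length].expand(batch_size, seq_length)
        return self.token_type_embeddings(token_type_ids)
```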
## Before submitting
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
issues #5664 and #976
## Who can review?
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11252/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11252",
"html_url": "https://github.com/huggingface/transformers/pull/11252",
"diff_url": "https://github.com/huggingface/transformers/pull/11252.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11252.patch",
"merged_at": 1624353691000
} |
https://api.github.com/repos/huggingface/transformers/issues/11251 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11251/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11251/comments | https://api.github.com/repos/huggingface/transformers/issues/11251/events | https://github.com/huggingface/transformers/pull/11251 | 858,082,214 | MDExOlB1bGxSZXF1ZXN0NjE1Mzk2MTkw | 11,251 | Add batching in TokenClassificationPipeline | {
"login": "parakalan",
"id": 17947485,
"node_id": "MDQ6VXNlcjE3OTQ3NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/17947485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parakalan",
"html_url": "https://github.com/parakalan",
"followers_url": "https://api.github.com/users/parakalan/followers",
"following_url": "https://api.github.com/users/parakalan/following{/other_user}",
"gists_url": "https://api.github.com/users/parakalan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parakalan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parakalan/subscriptions",
"organizations_url": "https://api.github.com/users/parakalan/orgs",
"repos_url": "https://api.github.com/users/parakalan/repos",
"events_url": "https://api.github.com/users/parakalan/events{/privacy}",
"received_events_url": "https://api.github.com/users/parakalan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"FYI there is also work done on this pipeline in https://github.com/huggingface/transformers/pull/10568 if you want to give it a look! It doesn't concern batching, however.",
"Thanks, let me check that out. ",
"Please review this @LysandreJik , @Narsil , @joshdevins",
"Closing this PR based on @Narsil's review. Thanks"
] | 1,618 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
Currently, the NER pipeline in transformers iterates through the list of input sentences and processes them sequentially.
This PR adds batching support in the pipeline to decrease latency and use the GPU more efficiently.
Relevant Issue :- #11244
## Benchmark Report
### Without Batching
Device: CPU
No. examples: 1000
Time taken: 283.27826976776123
Device: GPU
No. examples: 1000
Time taken: 17.89318561553955
Please check the benchmark gist [here](https://gist.github.com/parakalan/88b613ed4ca0001afb60448996f6b62a)
### With Batching
Device: CPU
No. examples: 1000
Batch Size: 512
Time taken: 121.81582999229431
Device: GPU
No. examples: 1000
Batch Size: 512
Time taken: 2.780881404876709
Please check the benchmark gist [here](https://gist.github.com/parakalan/f1fa25f25b8a70125145afbcbbeac85f)
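For reference, a rough sketch of the kind of timing harness behind these numbers (the example sentence and the `batch_size` call argument are placeholders for the proposed API; the exact code is in the linked gists):
```python
import time

from transformers import pipeline

ner = pipeline("ner", device=0)  # device=-1 for the CPU runs
sentences = ["Hugging Face Inc. is a company based in New York City."] * 1000

start = time.time()
unbatched = [ner(sentence) for sentence in sentences]  # one forward pass per sentence
print(f"Without batching: {time.time() - start:.2f}s")

start = time.time()
batched = ner(sentences, batch_size=512)  # batched call proposed in this PR
print(f"With batching: {time.time() - start:.2f}s")
```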
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. - https://github.com/huggingface/transformers/issues/11244
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11251/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11251",
"html_url": "https://github.com/huggingface/transformers/pull/11251",
"diff_url": "https://github.com/huggingface/transformers/pull/11251.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11251.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/11250 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11250/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11250/comments | https://api.github.com/repos/huggingface/transformers/issues/11250/events | https://github.com/huggingface/transformers/issues/11250 | 858,048,661 | MDU6SXNzdWU4NTgwNDg2NjE= | 11,250 | [Benchmark] | {
"login": "soapland-master69",
"id": 72711984,
"node_id": "MDQ6VXNlcjcyNzExOTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/72711984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/soapland-master69",
"html_url": "https://github.com/soapland-master69",
"followers_url": "https://api.github.com/users/soapland-master69/followers",
"following_url": "https://api.github.com/users/soapland-master69/following{/other_user}",
"gists_url": "https://api.github.com/users/soapland-master69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/soapland-master69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/soapland-master69/subscriptions",
"organizations_url": "https://api.github.com/users/soapland-master69/orgs",
"repos_url": "https://api.github.com/users/soapland-master69/repos",
"events_url": "https://api.github.com/users/soapland-master69/events{/privacy}",
"received_events_url": "https://api.github.com/users/soapland-master69/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11250/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11249 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11249/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11249/comments | https://api.github.com/repos/huggingface/transformers/issues/11249/events | https://github.com/huggingface/transformers/issues/11249 | 858,033,140 | MDU6SXNzdWU4NTgwMzMxNDA= | 11,249 | TypeError: can't pickle _thread.RLock objects hyperparameter_search raytune | {
"login": "maxzzze",
"id": 24981282,
"node_id": "MDQ6VXNlcjI0OTgxMjgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24981282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxzzze",
"html_url": "https://github.com/maxzzze",
"followers_url": "https://api.github.com/users/maxzzze/followers",
"following_url": "https://api.github.com/users/maxzzze/following{/other_user}",
"gists_url": "https://api.github.com/users/maxzzze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxzzze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxzzze/subscriptions",
"organizations_url": "https://api.github.com/users/maxzzze/orgs",
"repos_url": "https://api.github.com/users/maxzzze/repos",
"events_url": "https://api.github.com/users/maxzzze/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxzzze/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also have this issue (bump)",
"Pinging @richardliaw, @amogkam ",
"@maxzzze looks like a serialization error with the Trainer. We will take a look at this, but in the meantime can you downgrade your transformers version to 4.4. Also see https://github.com/ray-project/ray/issues/15439.",
"So it looks like this seems to work as soon as we disable the memory tracker:\r\n\r\n```\r\ntrainer._memory_tracker = None\r\n```\r\n\r\nWill it be possible to expose an API to temporarily disable this? \r\n\r\nThe other issue is https://github.com/huggingface/transformers/issues/11565, but we can resolve this there.\r\n\r\nWe should have tests that catch these regressions right?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am having the same problem.\r\nDisabling the memory tracker worked for me.\r\nBUT, then I ran into #11565 as well",
"Yes, if you disable the memory tracker (pass in `skip_memory_metrics=True` into your `TrainingArguments`) then you will no longer get the pickling error.\r\n\r\nIn the next transformers release, the Ray Tune integration will automatically disable memory tracking if it's currently being enabled.",
"Hi, with transformers 4.26.1 on Sage maker I am still having this error: TypeError: cannot pickle '_thread.lock' object.\r\n\r\ndef hp_space(trial):\r\n return {\r\n \"learning_rate\": trial.suggest_float(\"learning_rate\", 1e-5, 1e-3, log=True),\r\n \"num_train_epochs\": trial.suggest_int(\"num_train_epochs\", 1, 10),\r\n \"seed\": trial.suggest_int(\"seed\", 1, 40),\r\n \"per_device_train_batch_size\": trial.suggest_categorical(\"per_device_train_batch_size\", [16, 32, 64]),\r\n \"weight_decay\": trial.suggest_float(\"weight_decay\", 1e-3, 1e-1, log=True),\r\n }\r\n\r\nbest_run = trainer.hyperparameter_search(n_trials=20, direction=\"minimize\", hp_space=hp_space)"
] | 1,618 | 1,677 | 1,623 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: v4.5.1
- Platform: Linux
- Python version: 3.7.8
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): bert-base-uncased
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run hyperparameter tuning with raytune
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
2021-04-14 15:44:01,389 INFO services.py:1264 -- View the Ray dashboard at http://127.0.0.1:8265
Traceback (most recent call last):
File "pipeline_training.py", line 311, in <module>
keep_checkpoints_num=0
File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1459, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/integrations.py", line 235, in run_hp_search_ray
**kwargs,
File "/opt/conda/lib/python3.7/site-packages/ray/tune/tune.py", line 297, in run
_ray_auto_init()
File "/opt/conda/lib/python3.7/site-packages/ray/tune/tune.py", line 664, in _ray_auto_init
ray.init()
File "/opt/conda/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/ray/worker.py", line 785, in init
hook()
File "/opt/conda/lib/python3.7/site-packages/ray/tune/registry.py", line 171, in flush
self.references[k] = ray.put(v)
File "/opt/conda/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/ray/worker.py", line 1481, in put
object_ref = worker.put_object(value)
File "/opt/conda/lib/python3.7/site-packages/ray/worker.py", line 266, in put_object
serialized_value = self.get_serialization_context().serialize(value)
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 324, in serialize
return self._serialize_to_msgpack(value)
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 304, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 264, in _serialize_to_pickle5
raise e
File "/opt/conda/lib/python3.7/site-packages/ray/serialization.py", line 261, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/opt/conda/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/opt/conda/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
TypeError: can't pickle _thread.RLock objects
```
The code chunk to start the `hyperparameter_search`:
```python
def my_hp_space(trial):
from ray import tune
return {
"learning_rate": tune.uniform(1e-5, 5e-5),
"num_train_epochs": tune.choice(range(1, 6)),
"per_device_train_batch_size": tune.choice([2,4]),
"weight_decay": tune.uniform(0.0, 0.3),
"adam_epsilon": tune.loguniform(1e-10, 1e-6),
"per_device_eval_batch_size": 32
}
best_run = trainer.hyperparameter_search(
backend="ray",
n_trials=15,
hp_space=my_hp_space,
stop=None,
checkpoint_score_attr="training_iteration",
    keep_checkpoints_num=0,
compute_objective=lambda x: my_objective(x, metric='eval_' + used_metric)
)
```
## Expected behavior
Expect that it will not throw an error. Note that this script does work on `4.2.0`.
<!-- A clear and concise description of what you would expect to happen. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11249/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11248 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11248/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11248/comments | https://api.github.com/repos/huggingface/transformers/issues/11248/events | https://github.com/huggingface/transformers/pull/11248 | 857,997,620 | MDExOlB1bGxSZXF1ZXN0NjE1MzI1MTgx | 11,248 | Fix #10128 | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | COLLABORATOR | null | # What does this PR do?
Small bug fix in numpy_pad_and_concatenate, as reported in #10128
Fixes #10128 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11248/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11248",
"html_url": "https://github.com/huggingface/transformers/pull/11248",
"diff_url": "https://github.com/huggingface/transformers/pull/11248.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11248.patch",
"merged_at": 1618415274000
} |
https://api.github.com/repos/huggingface/transformers/issues/11247 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11247/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11247/comments | https://api.github.com/repos/huggingface/transformers/issues/11247/events | https://github.com/huggingface/transformers/pull/11247 | 857,961,506 | MDExOlB1bGxSZXF1ZXN0NjE1Mjk0NzQ5 | 11,247 | Adding pipeline task aliases. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
Two tasks were sort of not aligned with the pipeline names.
`sentiment-analysis` -> `TextClassificationPipeline`
`ner` -> `TokenClassificationPipeline`
In order to make this change backward compatible, yet make the code more consistent, this PR
introduces a TASK_ALIASES dictionary, which remaps a task name to its *canonical* task name.
Previously working code keeps working; we are simply making the `text-classification` and `token-classification` task names
available to the `pipeline(...)` function as well.
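Roughly, the alias resolution behaves like the following sketch (illustrative only; the actual lookup lives inside `pipeline(...)`):
```python
TASK_ALIASES = {
    "sentiment-analysis": "text-classification",
    "ner": "token-classification",
}


def resolve_task(task: str) -> str:
    # Old task names keep working; canonical names pass through unchanged.
    return TASK_ALIASES.get(task, task)


assert resolve_task("ner") == "token-classification"
assert resolve_task("sentiment-analysis") == "text-classification"
assert resolve_task("fill-mask") == "fill-mask"
```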
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@philschmid
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11247/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11247",
"html_url": "https://github.com/huggingface/transformers/pull/11247",
"diff_url": "https://github.com/huggingface/transformers/pull/11247.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11247.patch",
"merged_at": 1618473084000
} |
https://api.github.com/repos/huggingface/transformers/issues/11246 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11246/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11246/comments | https://api.github.com/repos/huggingface/transformers/issues/11246/events | https://github.com/huggingface/transformers/issues/11246 | 857,909,057 | MDU6SXNzdWU4NTc5MDkwNTc= | 11,246 | Enable Wav2Vec2 Pretraining | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
}
] | closed | false | null | [] | [] | 1,618 | 1,623 | 1,623 | MEMBER | null | # 🚀 Feature request
This is a feature request to add Wav2Vec2 Pretraining functionality to the transformers library. This is a "Good Second Issue" feature request, which means that interested contributors should have some experience with the transformers library and ideally also with training/fine-tuning Wav2Vec2.
## Motivation
The popular [Wav2Vec2](https://huggingface.co/models?filter=wav2vec2) model cannot be pretrained using the Hugging Face library yet. During the fine-tuning week, multiple people have reported improved results by pretraining wav2vec2 directly on the target language before fine-tuning it.
## Your contribution
I am happy to give an interested contributor guidance throughout the PR and answer all relevant questions.
## How to start
1) To begin with, one should run a pretraining forward pass using the official Wav2Vec2 repository. The forward pass can be found here: https://github.com/pytorch/fairseq/blob/436166a00c2ecd1215df258f022608947cca2aa8/fairseq/models/wav2vec/wav2vec2.py#L474.
It is important that the argument `features_only` is set to `False` in the [`forward`](https://github.com/pytorch/fairseq/blob/436166a00c2ecd1215df258f022608947cca2aa8/fairseq/models/wav2vec/wav2vec2.py#L474) function.
Successfully running a forward pass with fairseq is important to ensure the correctness of the Hugging Face implementation by comparing the two outputs.
This is probably the most difficult part of the PR.
**Note:** this also means that the loaded fairseq wav2vec2 checkpoint should include weights for the `GumbelVectorQuantizer` quantizer, see: https://github.com/pytorch/fairseq/blob/436166a00c2ecd1215df258f022608947cca2aa8/fairseq/models/wav2vec/wav2vec2.py#L277
The easiest checkpoint to try out pretraining with is probably the wav2vec2 2.0 Base - No fine-tuning [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt)
[Here](https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#train-a-wav2vec-20-base-model) is the official Fairseq recipe on how to do so.
2) Having run a forward pass successfully, the methods can now be implemented into transformers [here](https://github.com/huggingface/transformers/blob/653076ca307520ee85fd5f5de6918019f8521bb5/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L966) as a new class that could roughly look as follows:
```python
class Wav2Vec2ForPretraining:
    def __init__(self, config):
        self.wav2vec2 = Wav2Vec2Model(config)
        self.quantizer = ...
        self.project_q = ...

    def forward(...):
        outputs = self.wav2vec2(
            input_values,
            attention_mask=attention_mask,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        # ... all the pretraining logic here
```
Having implemented the class, make sure that a forward pass of `Wav2Vec2ForPretraining` works.
3) Convert the pretrained checkpoints correctly
After `Wav2Vec2ForPretraining` has been successfully added, a "non-fine-tuned" checkpoint, e.g., the wav2vec 2.0 Base - No fine-tuning checkpoint [here](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt), should be converted to the Hugging Face format. One will probably have to slightly adapt the conversion script as well: https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py
Having converted the checkpoint, it can be uploaded to the hub, and one can check whether it yields the same outputs as the official wav2vec2 pretraining functionality.
4) Add tests
Next, a couple of tests should be implemented to make sure that the behavior stays correct in the future. This includes both fast and "slow" integration tests (fast tests are "normal" tests); "slow" integration tests load a "real" checkpoint and test its output against a hardcoded expected output tensor slice, as it's done, *e.g.*, [here](https://github.com/huggingface/transformers/blob/653076ca307520ee85fd5f5de6918019f8521bb5/tests/test_modeling_big_bird.py#L823).
## Ask for help
Questions about how to finish 1) can be asked directly on this issue; I will try to answer them as best as I can. Also gently pinging @cceyda here, as I think she has already successfully pretrained a wav2vec2 model using fairseq (hope it's fine to ping you here 😅) - in case you have some good tips on how to pretrain wav2vec2 with fairseq, it would be amazing if you could share them here.
For questions while doing 2), 3) & 4), please ask directly on the PR you have opened to implement the model.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11246/reactions",
"total_count": 16,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 16,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11246/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11245 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11245/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11245/comments | https://api.github.com/repos/huggingface/transformers/issues/11245/events | https://github.com/huggingface/transformers/issues/11245 | 857,859,045 | MDU6SXNzdWU4NTc4NTkwNDU= | 11,245 | RuntimeError: leaf variable has been moved into the graph interior | {
"login": "bing0037",
"id": 11786011,
"node_id": "MDQ6VXNlcjExNzg2MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/11786011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bing0037",
"html_url": "https://github.com/bing0037",
"followers_url": "https://api.github.com/users/bing0037/followers",
"following_url": "https://api.github.com/users/bing0037/following{/other_user}",
"gists_url": "https://api.github.com/users/bing0037/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bing0037/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bing0037/subscriptions",
"organizations_url": "https://api.github.com/users/bing0037/orgs",
"repos_url": "https://api.github.com/users/bing0037/repos",
"events_url": "https://api.github.com/users/bing0037/events{/privacy}",
"received_events_url": "https://api.github.com/users/bing0037/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@bing0037 I'm running into the same problem. Did you find a fix for it?"
] | 1,618 | 1,635 | 1,621 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.5.1
- Platform: GPU
- Python version: python 3.6.9
- PyTorch version (GPU): torch==1.4.0
### Who can help
@TobiasLee @julien-c
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
- Model that I am using: Bert
- Code Modification:
Actually I didn't change any code, I just used the newest code in master branch.
I am trying to use run_bertology.py to carry out head pruning by using the following script:
- The script that I used:
```
export GLUE_DIR=glue_data
export TASK_NAME=RTE
CUDA_VISIBLE_DEVICES=3 python run_bertology.py \
--model_name_or_path bert-base-uncased \
--try_masking \
--task_name $TASK_NAME \
--data_dir $GLUE_DIR/$TASK_NAME/ \
--max_seq_length 128 \
--output_dir output_headpruning_bert/${TASK_NAME} \
--overwrite_output_dir
```
But I got the error:
```
Traceback (most recent call last):
File "run_bertology.py", line 449, in <module>
main()
File "run_bertology.py", line 445, in main
prune_heads(args, model, eval_dataloader, head_mask)
File "run_bertology.py", line 213, in prune_heads
args, model, eval_dataloader, compute_entropy=False, compute_importance=False, head_mask=head_mask
File "run_bertology.py", line 103, in compute_heads_importance
loss.backward() # Backpropagate to populate the gradients in the head mask
File "/home/bil19003/anaconda3/envs/pytorch_huggingface/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/bil19003/anaconda3/envs/pytorch_huggingface/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: leaf variable has been moved into the graph interior
```
I got similar results to #3895, seemingly the problem still exists. Could you please help?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11245/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11244 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11244/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11244/comments | https://api.github.com/repos/huggingface/transformers/issues/11244/events | https://github.com/huggingface/transformers/issues/11244 | 857,800,017 | MDU6SXNzdWU4NTc4MDAwMTc= | 11,244 | Batching in NER pipeline | {
"login": "parakalan",
"id": 17947485,
"node_id": "MDQ6VXNlcjE3OTQ3NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/17947485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parakalan",
"html_url": "https://github.com/parakalan",
"followers_url": "https://api.github.com/users/parakalan/followers",
"following_url": "https://api.github.com/users/parakalan/following{/other_user}",
"gists_url": "https://api.github.com/users/parakalan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parakalan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parakalan/subscriptions",
"organizations_url": "https://api.github.com/users/parakalan/orgs",
"repos_url": "https://api.github.com/users/parakalan/repos",
"events_url": "https://api.github.com/users/parakalan/events{/privacy}",
"received_events_url": "https://api.github.com/users/parakalan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | CONTRIBUTOR | null | # 🚀 Feature request
Currently, the NER pipeline in transformers iterates through the list of input sentences and processes them sequentially.
It would be beneficial to add batching support in the pipeline to decrease latency and use the GPU more efficiently.
## Motivation
Batching will help use the GPU more efficiently and reduce latency by a lot. The NER pipeline is amazing with its post-processing and could be a production-ready construct if batching is added.
This has been discussed at the end of https://github.com/huggingface/transformers/issues/8942, but it looks like no one is actively working on it.
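As a rough illustration of what batching buys over the per-sentence loop (a hand-rolled sketch, not the pipeline's internals; the checkpoint name is just an example):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_name = "dslim/bert-base-NER"  # example NER checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name).eval()

sentences = ["My name is Wolfgang and I live in Berlin."] * 32

# One padded batch -> a single forward pass instead of 32 sequential ones.
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch_size, seq_len, num_labels)
predictions = logits.argmax(dim=-1)
```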
## Your contribution
Working on this issue in this PR - https://github.com/huggingface/transformers/pull/11251
Please flag if this is already on the radar and I will close the issue.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11244/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11243 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11243/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11243/comments | https://api.github.com/repos/huggingface/transformers/issues/11243/events | https://github.com/huggingface/transformers/issues/11243 | 857,783,481 | MDU6SXNzdWU4NTc3ODM0ODE= | 11,243 | Cant load tokenizer locally after downloading it | {
"login": "jiwidi",
"id": 10882086,
"node_id": "MDQ6VXNlcjEwODgyMDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/10882086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiwidi",
"html_url": "https://github.com/jiwidi",
"followers_url": "https://api.github.com/users/jiwidi/followers",
"following_url": "https://api.github.com/users/jiwidi/following{/other_user}",
"gists_url": "https://api.github.com/users/jiwidi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiwidi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiwidi/subscriptions",
"organizations_url": "https://api.github.com/users/jiwidi/orgs",
"repos_url": "https://api.github.com/users/jiwidi/repos",
"events_url": "https://api.github.com/users/jiwidi/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiwidi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, that's because the tokenizer first looks to see if the path specified is a local path. Since you're saving your model on a path with the same identifier as the hub checkpoint, when you're re-running the script both the model and tokenizer will look into that folder.\r\n\r\nThe tokenizer doesn't find anything in there, as you've only saved the model, not the tokenizer. You should either save the tokenier as well, or change the path so that it isn't mistaken for a local path when it should be the hub.",
"> Hi, that's because the tokenizer first looks to see if the path specified is a local path. Since you're saving your model on a path with the same identifier as the hub checkpoint, when you're re-running the script both the model and tokenizer will look into that folder.\r\n> \r\n> The tokenizer doesn't find anything in there, as you've only saved the model, not the tokenizer. You should either save the tokenier as well, or change the path so that it isn't mistaken for a local path when it should be the hub.\r\n\r\nHow could I also save the tokenizer? Im newbie with transformer library and I took that code from the webpage.",
"You can add `tokenizer.save_pretrained(MODEL)` right under the model's `save_pretrained`!",
"i love you Lysanderjik"
] | 1,618 | 1,674 | 1,619 | NONE | null | Hi!
I'm following the tutorial for this pretrained model https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment. It works the first time I run it (and download the tokenizer) but after that it will complain that I don't have any tokenizer on the path specified.
The code is the following
```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import csv
import urllib.request
# Preprocess text (username and link placeholders)
def preprocess(text):
new_text = []
for t in text.split(" "):
t = '@user' if t.startswith('@') and len(t) > 1 else t
t = 'http' if t.startswith('http') else t
new_text.append(t)
return " ".join(new_text)
# Tasks:
# emoji, emotion, hate, irony, offensive, sentiment
# stance/abortion, stance/atheism, stance/climate, stance/feminist, stance/hillary
task='sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# download label mapping
labels=[]
mapping_link = f"https://raw.githubusercontent.com/cardiffnlp/tweeteval/main/datasets/{task}/mapping.txt"
with urllib.request.urlopen(mapping_link) as f:
html = f.read().decode('utf-8').split("\n")
csvreader = csv.reader(html, delimiter='\t')
labels = [row[1] for row in csvreader if len(row) > 1]
# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.save_pretrained(MODEL)
text = "Good night 😊"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)
# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Good night 😊"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
l = labels[ranking[i]]
s = scores[ranking[i]]
print(f"{i+1}) {l} {np.round(float(s), 4)}")
```
And fails on `tokenizer = AutoTokenizer.from_pretrained(MODEL)` with output:
```bash
2021-04-13 21:43:03.723523: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
File "train.py", line 27, in <module>
tokenizer = AutoTokenizer.from_pretrained(MODEL)
File "/home/jiwidi/anaconda3/envs/cuda/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 423, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/jiwidi/anaconda3/envs/cuda/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1698, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for '/mnt/kingston/github/MIARFID/ALC/cardiffnlp/twitter-roberta-base-sentiment'. Make sure that:
- '/mnt/kingston/github/MIARFID/ALC/cardiffnlp/twitter-roberta-base-sentiment' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/mnt/kingston/github/MIARFID/ALC/cardiffnlp/twitter-roberta-base-sentiment' is the correct path to a directory containing relevant tokenizer files
```
After running the script `train.py`, the model is saved (via `save_pretrained`) to a directory next to the script. The directory structure is like this:
```bash
├── cardiffnlp
│ └── twitter-roberta-base-sentiment
│ ├── config.json
│ └── pytorch_model.bin
└── train.py
```
I have transformers version 4.5.1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11243/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11242 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11242/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11242/comments | https://api.github.com/repos/huggingface/transformers/issues/11242/events | https://github.com/huggingface/transformers/issues/11242 | 857,753,540 | MDU6SXNzdWU4NTc3NTM1NDA= | 11,242 | position_ids generated from Roberta | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"See https://github.com/huggingface/transformers/issues/10736#issuecomment-800175342\r\n\r\nTip: if you search on this Github repo \"position ids roberta\", you get a lot of answers.\r\n"
] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | Hi,
Roberta created `position_ids` from `input_ids` using [this function](https://github.com/huggingface/transformers/blob/3d339ee6595b9e42925559ae21a0f6e77f032873/src/transformers/models/roberta/modeling_roberta.py#L1494).
When the max sequence length is 512, I expect the `position_ids` to be [0, 1, ..., 512].
However, the function gives me [1, 2, ..., 513] which later results in an CUDA index error for position embedding.
I would appreciate if someone could tell me what I am doing wrong.
```python
def create_position_ids_from_input_ids(input_ids, padding_idx, past_key_values_length=0):
mask = input_ids.ne(padding_idx).int()
incremental_indices = (torch.cumsum(mask, dim=1).type_as(mask) + past_key_values_length) * mask
return incremental_indices.long() + padding_idx
ipdb> input_ids
tensor([[ 2, 20, 630, ..., 22, 20, 3],
[ 2, 168, 106, ..., 4, 31532, 3],
[ 2, 287, 14603, ..., 284, 1594, 3],
...,
[ 2, 4, 873, ..., 5549, 24276, 3],
[ 2, 12, 56, ..., 87, 419, 3],
[ 2, 30683, 419, ..., 761, 312, 3]], device='cuda:0')
ipdb> incremental_indices
tensor([[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512],
...,
[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512],
[ 1, 2, 3, ..., 510, 511, 512]], device='cuda:0',
dtype=torch.int32)
ipdb> padding_idx
0
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11242/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11242/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11241 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11241/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11241/comments | https://api.github.com/repos/huggingface/transformers/issues/11241/events | https://github.com/huggingface/transformers/issues/11241 | 857,516,713 | MDU6SXNzdWU4NTc1MTY3MTM= | 11,241 | add new token to Bert | {
"login": "ReySadeghi",
"id": 71632819,
"node_id": "MDQ6VXNlcjcxNjMyODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/71632819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ReySadeghi",
"html_url": "https://github.com/ReySadeghi",
"followers_url": "https://api.github.com/users/ReySadeghi/followers",
"following_url": "https://api.github.com/users/ReySadeghi/following{/other_user}",
"gists_url": "https://api.github.com/users/ReySadeghi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ReySadeghi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ReySadeghi/subscriptions",
"organizations_url": "https://api.github.com/users/ReySadeghi/orgs",
"repos_url": "https://api.github.com/users/ReySadeghi/repos",
"events_url": "https://api.github.com/users/ReySadeghi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ReySadeghi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you provide a reproducible code example alongside the error that happened?",
"my code is:\r\n................................................................\r\n```py\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\",max_len=256)\r\nvocab=[]\r\nwith open('vocab30k.txt', mode='r',encoding=\"utf8\",errors='ignore') as file2:\r\n for line2 in file2:\r\n line2=line2.split('\\n')[0]\r\n vocab.append(line2)\r\n\r\ntokenizer.add_tokens(vocab)\r\nmodel= BertForMaskedLM.from_pretrained(\"bert-base-uncased\")\r\nmodel.resize_token_embeddings(len(tokenizer)) \r\n\r\ndataset = LineByLineTextDataset(\r\n tokenizer=tokenizer,\r\n file_path=\"fa5M_shuffeled.txt\",\r\n block_size=128,\r\n)\r\n\r\n\r\ndata_collator = DataCollatorForLanguageModeling(\r\n tokenizer=tokenizer, mlm=True, mlm_probability=0.15\r\n)\r\n\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"fineTunedModel/\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_gpu_train_batch_size=16,\r\n save_steps=10_000,\r\n save_total_limit=2,\r\n prediction_loss_only=True,\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=dataset,\r\n)\r\n\r\ntrainer.train()\r\n```\r\n....................................................\r\nwhen I don't add_token, every thing is ok and training start but when I use add_token to add my new vocab, my code is still running and doesn't pass to go to start training , and nothing happen.",
"This is probably because it takes a very long time to add all tokens. Could you install from source:\r\n`pip install -U git+https://github.com/huggingface/transformers` and let me know if it fixes the issue? We recently merged a PR that should speed this up dramatically.",
"I installed via your link and try to add 5000 new vocab and it works.\r\nthanks so much.\r\nanother question is ,\r\n1.what is the limitation of number of tokens that we want to add? I tried to add 30k new token and got this error:\r\n return self._tokenizer.add_tokens(new_tokens)\r\npyo3_runtime.PanicException: called `Result::unwrap()` on an `Err` value: CompiledTooBig(10485760)\r\n\r\n2.when I want to add new token I uesd this : \"tokenizer.add_tokens(vocab) \"\r\nand not \"tokenizer.add_tokens(vocab,special_tokens=True)\"\r\nwhat is the differenet between these two in adding token and during fine-tune?\r\n\r\nthanks",
"1. It depends on the size of the tokens. Adding tokens to the tokenizer this way is not scalable, and should only be used to handle a very limited number of tokens. Under the hood, it actually uses some Regex to extract these tokens, and there is a limitation in the size of the regex we can create.\r\n2. Special tokens can be removed when decoding",
"Hi, in term of adding token, I tried to add 10k new token to my BERT model tokenizer and I saved the tokenizer with \"add_token.json\" file.\r\nSo when I want to use the tokenizer I got this error:\r\n\r\nAssertionError: Non-consecutive added token '#سلام' found. Should have index 100005 but has index 100006 in saved vocabulary.\r\n\r\nany help?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,622 | 1,622 | NONE | null | Hi,
I want to fine-tune BERT on tweets and add some new tokens. I tried the following code, but after adding the tokens, reading and tokenizing the sentences becomes so slow that it never gets past that step.
Any ideas, please?
I tried this:
```python
tokenizer.add_tokens(["NEW_TOKEN"])
model.resize_token_embeddings(len(tokenizer))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11241/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11240 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11240/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11240/comments | https://api.github.com/repos/huggingface/transformers/issues/11240/events | https://github.com/huggingface/transformers/pull/11240 | 857,483,969 | MDExOlB1bGxSZXF1ZXN0NjE0ODk1MzIx | 11,240 | Close open files to suppress ResourceWarning | {
"login": "parakalan",
"id": 17947485,
"node_id": "MDQ6VXNlcjE3OTQ3NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/17947485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/parakalan",
"html_url": "https://github.com/parakalan",
"followers_url": "https://api.github.com/users/parakalan/followers",
"following_url": "https://api.github.com/users/parakalan/following{/other_user}",
"gists_url": "https://api.github.com/users/parakalan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/parakalan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/parakalan/subscriptions",
"organizations_url": "https://api.github.com/users/parakalan/orgs",
"repos_url": "https://api.github.com/users/parakalan/repos",
"events_url": "https://api.github.com/users/parakalan/events{/privacy}",
"received_events_url": "https://api.github.com/users/parakalan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | # Close open files to suppress ResourceWarning
Across the repo, we are opening a bunch of files and not closing them. This causes issues when trying to programmatically access them, and also raises `ResourceWarning` in a few places, for instance `transformers/convert_slow_tokenizer.py:308: ResourceWarning`. This PR closes a few files that were left open after being accessed.
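A minimal sketch of the pattern being applied (the filename is illustrative, not a real call site):
```python
import json

# Before: the handle is never closed and can trigger ResourceWarning
# data = json.load(open("tokenizer.json"))

# After: a context manager closes the file deterministically
with open("tokenizer.json", "r", encoding="utf-8") as f:
    data = json.load(f)
```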
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11240/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11240",
"html_url": "https://github.com/huggingface/transformers/pull/11240",
"diff_url": "https://github.com/huggingface/transformers/pull/11240.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11240.patch",
"merged_at": 1618410664000
} |
https://api.github.com/repos/huggingface/transformers/issues/11239 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11239/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11239/comments | https://api.github.com/repos/huggingface/transformers/issues/11239/events | https://github.com/huggingface/transformers/issues/11239 | 857,452,309 | MDU6SXNzdWU4NTc0NTIzMDk= | 11,239 | Getting `NameError: name 'BertOnlyMLMHead' is not defined` error when upgrading to latest transformers | {
"login": "gsrivas4",
"id": 23170843,
"node_id": "MDQ6VXNlcjIzMTcwODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/23170843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsrivas4",
"html_url": "https://github.com/gsrivas4",
"followers_url": "https://api.github.com/users/gsrivas4/followers",
"following_url": "https://api.github.com/users/gsrivas4/following{/other_user}",
"gists_url": "https://api.github.com/users/gsrivas4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsrivas4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsrivas4/subscriptions",
"organizations_url": "https://api.github.com/users/gsrivas4/orgs",
"repos_url": "https://api.github.com/users/gsrivas4/repos",
"events_url": "https://api.github.com/users/gsrivas4/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsrivas4/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,618 | 1,621 | 1,621 | NONE | null | # 📚 Migration
## Information
<!-- Important information -->
I am getting a `NameError: name 'BertOnlyMLMHead' is not defined` error when I try to upgrade the transformers version used by [Oscar code](https://github.com/microsoft/Oscar) from pytorch-transformers to the latest version of huggingface transformers.
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below) not sure
* [ ] my own modified scripts: (give details below) yes
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name): no
* [ ] my own task or dataset: (give details below): no
## Details
<!-- A clear and concise description of the migration issue.
If you have code snippets, please provide it here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
-->
I am trying to upgrade the huggingface transformers version used by [Oscar code](https://github.com/microsoft/Oscar) from pytorch-transformers to the latest version of huggingface transformers. However, I am getting the error below:
```
Traceback (most recent call last):
File "oscar/run_captioning.py", line 1010, in <module>
main()
File "oscar/run_captioning.py", line 966, in main
from_tf=bool('.ckpt' in args.model_name_or_path), config=config)
File "/usr/local/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1058, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_13april/oscar/modeling/modeling_bert.py", line 624, in __init__
self.cls = BertOnlyMLMHead(config)
NameError: name 'BertOnlyMLMHead' is not defined
```
I have looked into the latest transformers and it seems the class is not defined there. However, the class was defined in an older version of transformers - https://github.com/huggingface/transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/modeling_bert.py#L506-L513.
What could be a replacement for the class `BertOnlyMLMHead` when using the latest version of transformers?
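(A minimal sketch of what might work, under the unverified assumption that the class still lives in the BERT modeling module and simply isn't re-exported from the top-level package anymore:)
```python
# Hypothesis: import the head directly from the module that defines it.
from transformers.models.bert.modeling_bert import BertOnlyMLMHead
```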
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: https://github.com/huggingface/transformers
- Platform: x86_64 GNU/Linux
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.0+cu101 (GPU)
- Tensorflow version (GPU?): 2.3.0 (GPU)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e
## Checklist
- [ yes] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ no] I checked if a related official extension example runs on my machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11239/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11238 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11238/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11238/comments | https://api.github.com/repos/huggingface/transformers/issues/11238/events | https://github.com/huggingface/transformers/pull/11238 | 857,402,739 | MDExOlB1bGxSZXF1ZXN0NjE0ODI5NjA5 | 11,238 | Fix dimention misspellings. | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"repos_url": "https://api.github.com/users/odellus/repos",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
Replaces the misspelled `dimention` with the proper spelling `dimension`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11238/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11238",
"html_url": "https://github.com/huggingface/transformers/pull/11238",
"diff_url": "https://github.com/huggingface/transformers/pull/11238.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11238.patch",
"merged_at": 1618411178000
} |
https://api.github.com/repos/huggingface/transformers/issues/11237 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11237/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11237/comments | https://api.github.com/repos/huggingface/transformers/issues/11237/events | https://github.com/huggingface/transformers/pull/11237 | 857,364,149 | MDExOlB1bGxSZXF1ZXN0NjE0Nzk3NTMw | 11,237 | [deepspeed] test on one node 2 gpus max | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"ok, it's all working in the Deepspeed team's test suite - yay! "
] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | Deepspeed devs, who will be running our deepspeed tests as part of their own test suite, discovered that we had left the number of nodes and GPUs in the tests unbounded, so the tests were firing on multiple nodes and many GPUs and weren't quite ready for that. This PR fixes it by capping the tests at one node and 2 GPUs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11237/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11237",
"html_url": "https://github.com/huggingface/transformers/pull/11237",
"diff_url": "https://github.com/huggingface/transformers/pull/11237.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11237.patch",
"merged_at": 1618423619000
} |
https://api.github.com/repos/huggingface/transformers/issues/11236 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11236/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11236/comments | https://api.github.com/repos/huggingface/transformers/issues/11236/events | https://github.com/huggingface/transformers/pull/11236 | 857,293,909 | MDExOlB1bGxSZXF1ZXN0NjE0NzM4MjQz | 11,236 | [troubleshooting] add 2 points of reference to the offline mode | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,618 | 1,618 | 1,618 | CONTRIBUTOR | null | As discussed at https://github.com/huggingface/transformers/issues/11231#issuecomment-818976986 the offline mode doc can be hard to find, so this PR:
- starts a new "troubleshooting" document
- adds a note xref to `from_pretrained`
Surely we can start populating the new "troubleshooting" document - the idea here is to have common problems with explicit error messages and pointers to solutions.
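For example, the offline mode the new doc points to can be exercised like this (sketch; the model name is illustrative and a populated local cache is assumed):
```python
import os

# Force fully local runs; these must be set before transformers is imported.
os.environ["TRANSFORMERS_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")  # served from cache, no network
```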
@LysandreJik, @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11236/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11236",
"html_url": "https://github.com/huggingface/transformers/pull/11236",
"diff_url": "https://github.com/huggingface/transformers/pull/11236.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11236.patch",
"merged_at": 1618414764000
} |