| Column | Type | Lengths / range / values |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0-234k |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |

The records below follow this column order, one value per line.
https://api.github.com/repos/huggingface/transformers/issues/13347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13347/comments
https://api.github.com/repos/huggingface/transformers/issues/13347/events
https://github.com/huggingface/transformers/issues/13347
983,496,053
MDU6SXNzdWU5ODM0OTYwNTM=
13,347
Predicted Start_index < Predicted End_index in BertForQuestionAnswering
{ "login": "JHH11", "id": 70930600, "node_id": "MDQ6VXNlcjcwOTMwNjAw", "avatar_url": "https://avatars.githubusercontent.com/u/70930600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JHH11", "html_url": "https://github.com/JHH11", "followers_url": "https://api.github.com/users/JHH11/followers", "following_url": "https://api.github.com/users/JHH11/following{/other_user}", "gists_url": "https://api.github.com/users/JHH11/gists{/gist_id}", "starred_url": "https://api.github.com/users/JHH11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JHH11/subscriptions", "organizations_url": "https://api.github.com/users/JHH11/orgs", "repos_url": "https://api.github.com/users/JHH11/repos", "events_url": "https://api.github.com/users/JHH11/events{/privacy}", "received_events_url": "https://api.github.com/users/JHH11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
We want to fine-tune a QA model based on BertForQuestionAnswering. After training, we can get span-start/end scores from input_ids/token_type_ids/attention_mask and choose the indices with the maximum span-start/end scores as the **predicted start_index** and **predicted end_index**. But sometimes the **predicted start_index** comes out greater than the **predicted end_index**. Is there any reasonable method to handle this situation? Thanks! Ex: `span-start scores = [-0.1, -2.1, 0.7, 1.3, 4.1]` `span-end scores = [-0.7, 3, 5, -0.7, 3.3]` `=>` `predicted start_index = 4` `predicted end_index = 2` This is not a valid span.
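A common workaround, sketched below under the assumption that the scores are plain per-token logits, is the usual SQuAD-style post-processing: score all (start, end) pairs jointly and keep only pairs with start <= end, rather than taking two independent argmaxes. This is not code from the thread, just an illustration.

```python
def best_valid_span(start_scores, end_scores, max_answer_length=30):
    """Pick the (start, end) pair maximizing start_scores[s] + end_scores[e],
    subject to s <= e and a bounded answer length."""
    best_score, best_span = float("-inf"), (0, 0)
    for s in range(len(start_scores)):
        for e in range(s, min(s + max_answer_length, len(end_scores))):
            score = start_scores[s] + end_scores[e]
            if score > best_score:
                best_score, best_span = score, (s, e)
    return best_span

# The example from the issue: independent argmaxes give (4, 2), an invalid span.
start_scores = [-0.1, -2.1, 0.7, 1.3, 4.1]
end_scores = [-0.7, 3, 5, -0.7, 3.3]
print(best_valid_span(start_scores, end_scores))  # (4, 4): start <= end is enforced
```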
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13347/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13346/comments
https://api.github.com/repos/huggingface/transformers/issues/13346/events
https://github.com/huggingface/transformers/issues/13346
983,406,111
MDU6SXNzdWU5ODM0MDYxMTE=
13,346
Bert (sentence classification) output is non-deterministic (have checked previous issue, set model.eval())
{ "login": "ValMystletainn", "id": 42485228, "node_id": "MDQ6VXNlcjQyNDg1MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ValMystletainn", "html_url": "https://github.com/ValMystletainn", "followers_url": "https://api.github.com/users/ValMystletainn/followers", "following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}", "gists_url": "https://api.github.com/users/ValMystletainn/gists{/gist_id}", "starred_url": "https://api.github.com/users/ValMystletainn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ValMystletainn/subscriptions", "organizations_url": "https://api.github.com/users/ValMystletainn/orgs", "repos_url": "https://api.github.com/users/ValMystletainn/repos", "events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}", "received_events_url": "https://api.github.com/users/ValMystletainn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should get the following warning when you instantiate your `AutoModelForSequenceClassification` model:\r\n\r\n```\r\nSome weights of BertForSequenceClassification were not initialized from the model checkpoint at hfl/chinese-bert-wwm-ext and are newly initialized: ['classifier.bias', 'classifier.weight']\r\n```\r\n\r\nThis tells you that the sequence classifier is not in the checkpoint you're loading: it will be initialized randomly everytime you re-initialize it.", "Ok see it. but I have train my model and load in\r\n```python\r\nmodel.state_dict(torch.load('./weights/best_bert.pth', map_location='cpu'))\r\n```\r\n\r\nSo it's the pytorch function\r\ntorch.save(model.state_dict())\r\ndoes not save the model.classfier and model.bias and I trained, rights?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I just managed finally to have deterministic results. If you are still struggling, see https://discuss.huggingface.co/t/initializing-the-weights-of-the-final-layer-of-e-g-bertfortokenclassification-with-a-manual-seed/1377/3\r\n\r\n```python\r\ndef set_seed(seed: Optional[int] = None):\r\n \"\"\"Set all seeds to make results reproducible (deterministic mode).\r\n When seed is None, disables deterministic mode.\r\n :param seed: an integer to your choosing\r\n \"\"\"\r\n if seed is not None:\r\n torch.manual_seed(seed)\r\n torch.cuda.manual_seed_all(seed)\r\n torch.backends.cudnn.deterministic = True\r\n torch.backends.cudnn.benchmark = False\r\n np.random.seed(seed)\r\n random.seed(seed)\r\n os.environ['PYTHONHASHSEED'] = str(seed)\r\n\r\n```", "> You should get the following warning when you instantiate your `AutoModelForSequenceClassification` model:\r\n> \r\n> ```\r\n> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at hfl/chinese-bert-wwm-ext and are newly initialized: ['classifier.bias', 'classifier.weight']\r\n> ```\r\n> \r\n> This tells you that the sequence classifier is not in the checkpoint you're loading: it will be initialized randomly everytime you re-initialize it.\r\n\r\nThank you, I was struggling with trying to figure out why this was happening. I assumed that \"random initialization\" just meant it was randomly initialized once when the model was instantiated, not every time it's called. Do you know why it has that behavior? Why wouldn't it just be initialized randomly once? What tells it to stop being random? (A round of training? A flag?)" ]
1,630
1,670
1,633
NONE
null
## Environment info
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): /
- Using GPU in script?: Yes for training, both GPU and CPU for the testing scripts
- Using distributed or parallel set-up in script?: Yes for training

### Who can help
@LysandreJik

## Information
Model I am using (Bert, XLNet ...): Bert

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

I'm using Chinese BERT to match similar tags and reduce the size of the database. So I use some manually merged tags as the dataset, training a BERT that takes two tags as input and outputs the probability that they are similar. It did well after training when called from the test() function I wrote (of course with model.eval()). But when I save the model to a .pth file and load it in another script, the output is non-deterministic.

## To reproduce
The whole test script is too long, but I have a short test snippet; it should cover the core of this issue.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-bert-wwm-ext")
model = AutoModelForSequenceClassification.from_pretrained("hfl/chinese-bert-wwm-ext")
model.state_dict(torch.load('./weights/best_bert.pth', map_location='cpu'))
# model.cuda()

for i in range(100):  # used to control when model.eval() is called
    foo = 1
# model = model.eval()
model.eval()
with torch.no_grad():
    srcText = '春天'  # 'spring'
    tgtText = '春季'  # 'spring time'
    predict = model(
        **tokenizer(text=srcText, text_pair=tgtText, truncation=True, return_tensors='pt', max_length=256)
    )
    # NON DETERMINISTIC
    print(torch.softmax(predict.logits, dim=1))
```

Steps to reproduce the behavior:
1. run the script above
2. change the number of loop iterations for foo = 1, or just do nothing
3. run again
4. get different output logits and probabilities

## Expected behavior
Get identical outputs in steps 1 and 3

## Additional information
I have read issue #4769 and some other similar issues, but I checked again and confirmed that I called eval()
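One likely culprit in the snippet above, beyond the randomly initialized classifier head discussed in the comments: `model.state_dict(...)` only *returns* the current weights, it never loads a checkpoint, so the random head is kept on every run. A minimal corrected loading sketch (the checkpoint path is the one from the snippet):

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("hfl/chinese-bert-wwm-ext")

# load_state_dict (not state_dict) actually restores the fine-tuned weights,
# including the classifier head that from_pretrained initialized randomly.
state_dict = torch.load("./weights/best_bert.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()  # disable dropout so repeated forward passes are deterministic
```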
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13346/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13345/comments
https://api.github.com/repos/huggingface/transformers/issues/13345/events
https://github.com/huggingface/transformers/pull/13345
983,397,788
MDExOlB1bGxSZXF1ZXN0NzIyOTgxNjM2
13,345
Doc mismatch fixed
{ "login": "Apoorvgarg-creator", "id": 57873504, "node_id": "MDQ6VXNlcjU3ODczNTA0", "avatar_url": "https://avatars.githubusercontent.com/u/57873504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Apoorvgarg-creator", "html_url": "https://github.com/Apoorvgarg-creator", "followers_url": "https://api.github.com/users/Apoorvgarg-creator/followers", "following_url": "https://api.github.com/users/Apoorvgarg-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Apoorvgarg-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Apoorvgarg-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Apoorvgarg-creator/subscriptions", "organizations_url": "https://api.github.com/users/Apoorvgarg-creator/orgs", "repos_url": "https://api.github.com/users/Apoorvgarg-creator/repos", "events_url": "https://api.github.com/users/Apoorvgarg-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Apoorvgarg-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do?

<!--
Congratulations! You've made it this far! You're not quite done yet though.

Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.

Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.

Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->

<!-- Remove if not applicable -->

Fixes #13323

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @

If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.

Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik

Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik

Documentation: @sgugger

HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)

Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->

@sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13345/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13345/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13345", "html_url": "https://github.com/huggingface/transformers/pull/13345", "diff_url": "https://github.com/huggingface/transformers/pull/13345.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13345.patch", "merged_at": 1630405717000 }
https://api.github.com/repos/huggingface/transformers/issues/13344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13344/comments
https://api.github.com/repos/huggingface/transformers/issues/13344/events
https://github.com/huggingface/transformers/issues/13344
983,382,342
MDU6SXNzdWU5ODMzODIzNDI=
13,344
How to use BertForSequenceClassification for the Apect Based Sentiment Analysis
{ "login": "pawanGithub10", "id": 73303444, "node_id": "MDQ6VXNlcjczMzAzNDQ0", "avatar_url": "https://avatars.githubusercontent.com/u/73303444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pawanGithub10", "html_url": "https://github.com/pawanGithub10", "followers_url": "https://api.github.com/users/pawanGithub10/followers", "following_url": "https://api.github.com/users/pawanGithub10/following{/other_user}", "gists_url": "https://api.github.com/users/pawanGithub10/gists{/gist_id}", "starred_url": "https://api.github.com/users/pawanGithub10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pawanGithub10/subscriptions", "organizations_url": "https://api.github.com/users/pawanGithub10/orgs", "repos_url": "https://api.github.com/users/pawanGithub10/repos", "events_url": "https://api.github.com/users/pawanGithub10/events{/privacy}", "received_events_url": "https://api.github.com/users/pawanGithub10/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
In aspect-based sentiment analysis the sentence is classified in two ways: one is the aspect and one is the sentiment. e.g. "The food is good but service is poor" The output must be: aspect = food, sentiment = positive; aspect = service, sentiment = negative. So how can BertForSequenceClassification be configured so that two outputs are generated, one for the aspect classification and one for the sentiment classification?
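`BertForSequenceClassification` itself has a single classification head, so one common approach is two heads on a shared encoder. The sketch below is purely illustrative (the class name, head names, and label counts are assumptions, not an official recipe):

```python
import torch
from torch import nn
from transformers import BertModel

class BertForAspectAndSentiment(nn.Module):
    """Hypothetical two-head model: one linear head per sub-task on a shared BERT."""

    def __init__(self, name="bert-base-uncased", num_aspects=5, num_sentiments=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        hidden = self.bert.config.hidden_size
        self.aspect_head = nn.Linear(hidden, num_aspects)
        self.sentiment_head = nn.Linear(hidden, num_sentiments)

    def forward(self, input_ids, attention_mask=None):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        # Two logits tensors: train with one loss per head, summed.
        return self.aspect_head(pooled), self.sentiment_head(pooled)
```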
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13344/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13343/comments
https://api.github.com/repos/huggingface/transformers/issues/13343/events
https://github.com/huggingface/transformers/issues/13343
983,290,182
MDU6SXNzdWU5ODMyOTAxODI=
13,343
OverflowError: out of range integral type conversion attempted for run_summarization.py script using t5-small
{ "login": "aiswaryasankar", "id": 7874177, "node_id": "MDQ6VXNlcjc4NzQxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7874177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aiswaryasankar", "html_url": "https://github.com/aiswaryasankar", "followers_url": "https://api.github.com/users/aiswaryasankar/followers", "following_url": "https://api.github.com/users/aiswaryasankar/following{/other_user}", "gists_url": "https://api.github.com/users/aiswaryasankar/gists{/gist_id}", "starred_url": "https://api.github.com/users/aiswaryasankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aiswaryasankar/subscriptions", "organizations_url": "https://api.github.com/users/aiswaryasankar/orgs", "repos_url": "https://api.github.com/users/aiswaryasankar/repos", "events_url": "https://api.github.com/users/aiswaryasankar/events{/privacy}", "received_events_url": "https://api.github.com/users/aiswaryasankar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@aiswaryasankar - could you attach a **minimal** reproducible code snippet (also in google colab form) that allows us to quickly spot the error? Thank you :-)", "from the stack-trace it looks like you are decoding `labels` in the `prediction_step` method. When \r\n`--ignore_pad_token_for_loss` argument is set, the `labels` will still have -100 in `prediction_step`, so -100 should be replaced by pad token before decoding. The `run_summarization.py` script does that in the `compute_metrics` function which is called after the `prediction_step` method. \r\nhttps://github.com/huggingface/transformers/blob/c02cd95c56249e9bd38ecb3e4ebcce6d9eebd4a4/examples/pytorch/summarization/run_summarization.py#L509", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Mac OS
- Python version: 3.8.5
- PyTorch version (GPU?): No GPU, >= 1.3
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
Models:
- t5: @patrickvonplaten, @patil-suraj

Library:
- tokenizers: @LysandreJik

Looks like this is an issue with the T5Tokenizer possibly? Seems related to this old github issue as well: https://github.com/huggingface/transformers/pull/10046.
- maintained examples (not research project or legacy): @sgugger, @patil-suraj

## Information
Model I am using (Bert, XLNet ...):

The problem arises when using:
* [x] the official example scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:
1. Run the run_summarization.py file on the multi_news dataset using t5-small with do_predict and predict_with_generate options set to true
2. In the prediction step, the decode step highlighted in the error stack below gives an OverflowError and the prediction stops

```
File "run_summarization.py", line 674, in <module>
  main()
File "run_summarization.py", line 628, in main
  predict_results = trainer.predict(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer_seq2seq.py", line 125, in predict
  return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer.py", line 2133, in predict
  output = eval_loop(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer.py", line 2235, in evaluation_loop
  loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer_seq2seq.py", line 180, in prediction_step
  print(self.tokenizer_t5.batch_decode(inputs["labels"], skip_special_tokens=True, clean_up_tokenization_spaces=True))
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py", line 3047, in batch_decode
  return [
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py", line 3048, in <listcomp>
  self.decode(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py", line 3086, in decode
  return self._decode(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_fast.py", line 507, in _decode
  text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
OverflowError: out of range integral type conversion attempted
```

## Expected behavior
Should generate the tokenizer.decode outputs
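Following the explanation in the comments above (labels still contain -100 inside `prediction_step`), a minimal sketch of the fix, mirroring what `compute_metrics` in run_summarization.py does, is to map -100 back to the pad token id before decoding:

```python
import numpy as np

def decode_labels(labels, tokenizer):
    # Labels from prediction_step still contain -100 where padding was ignored
    # for the loss; -100 is not a valid token id, hence the OverflowError.
    labels = np.where(np.array(labels) != -100, labels, tokenizer.pad_token_id)
    return tokenizer.batch_decode(labels, skip_special_tokens=True,
                                  clean_up_tokenization_spaces=True)
```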
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13343/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13342/comments
https://api.github.com/repos/huggingface/transformers/issues/13342/events
https://github.com/huggingface/transformers/pull/13342
983,199,122
MDExOlB1bGxSZXF1ZXN0NzIyODE2NTUw
13,342
Add the `AudioClassificationPipeline`
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,631
1,630
MEMBER
null
# What does this PR do?

This adds the audio classification pipeline needed for `Wav2Vec2ForSequenceClassification` and others (see #13153). The implementation is mostly based on `ImageClassificationPipeline` with `ffmpeg` audio file loading borrowed from `AutomaticSpeechRecognitionPipeline` by @Narsil

Once merged, model cards like https://hf.co/superb/hubert-base-superb-ks should be able to have an `audio-classification` inference widget.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
@patrickvonplaten @Narsil
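For illustration, usage should mirror the other classification pipelines; the checkpoint name comes from the PR description, while the file name and the exact output format are assumptions based on `ImageClassificationPipeline`:

```python
from transformers import pipeline

# The "audio-classification" task name is the one this PR registers.
classifier = pipeline("audio-classification", model="superb/hubert-base-superb-ks")
preds = classifier("speech_sample.wav")  # local audio file; ffmpeg handles decoding
print(preds)  # e.g. [{"label": "yes", "score": 0.98}, ...]
```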
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13342/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13342", "html_url": "https://github.com/huggingface/transformers/pull/13342", "diff_url": "https://github.com/huggingface/transformers/pull/13342.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13342.patch", "merged_at": 1630483428000 }
https://api.github.com/repos/huggingface/transformers/issues/13341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13341/comments
https://api.github.com/repos/huggingface/transformers/issues/13341/events
https://github.com/huggingface/transformers/issues/13341
983,183,268
MDU6SXNzdWU5ODMxODMyNjg=
13,341
Padding labels is wrong when using `pad_to_multiple_of`
{ "login": "dirkgr", "id": 920638, "node_id": "MDQ6VXNlcjkyMDYzOA==", "avatar_url": "https://avatars.githubusercontent.com/u/920638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dirkgr", "html_url": "https://github.com/dirkgr", "followers_url": "https://api.github.com/users/dirkgr/followers", "following_url": "https://api.github.com/users/dirkgr/following{/other_user}", "gists_url": "https://api.github.com/users/dirkgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/dirkgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dirkgr/subscriptions", "organizations_url": "https://api.github.com/users/dirkgr/orgs", "repos_url": "https://api.github.com/users/dirkgr/repos", "events_url": "https://api.github.com/users/dirkgr/events{/privacy}", "received_events_url": "https://api.github.com/users/dirkgr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger @Rocketknight1 ", "The length of the label and the length of the input rarely match for seq2seq problem, so this is not an issue.", "🤦‍♂️You're right. I was looking for MLM and grabbed the wrong class." ]
1,630
1,630
1,630
CONTRIBUTOR
null
I haven't tried it myself, but it looks like this line is wrong: https://github.com/huggingface/transformers/blob/42f359d015aee3835490bdcfa20df657a4d97049/src/transformers/data/data_collator.py#L285 If `self.pad_to_multiple_of` is set to anything but 1, then the length of the labels and the length of the input won't match anymore.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13341/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13341/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13340/comments
https://api.github.com/repos/huggingface/transformers/issues/13340/events
https://github.com/huggingface/transformers/pull/13340
983,160,396
MDExOlB1bGxSZXF1ZXN0NzIyNzg0MjEx
13,340
Tests fetcher tests
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Will be working on tests in a few hours and would like to test this out - will merge this into `master` and check everything runs smoothly." ]
1,630
1,630
1,630
COLLABORATOR
null
# What does this PR do?

If you received a notification for this PR and are not a reviewer, I apologize: I clicked the "create PR" button too early :grimacing:

This PR fixes the tests_fetcher utils for the test file dependencies: currently, modifying `tests_modeling_common.py` won't trigger any tests other than `tests_modeling_common.py`, when we would like it to run all the modeling tests.

To fix this, the same logic as in the modules is applied to the test files: they are screened for dependencies on other test files, and this is all added before we compute the reverse dependency map.

As an example that this is working properly, this PR has a diff in `tests_modeling_common.py` and you can check the triggered tests [here](https://circle-production-customer-artifacts.s3.amazonaws.com/picard/5bdabdd888af1f000130874a/612d399a732e644e043dbe7e-0-build/artifacts/~/transformers/test_preparation.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20210830T200953Z&X-Amz-SignedHeaders=host&X-Amz-Expires=59&X-Amz-Credential=AKIAJR3Q6CR467H7Z55A%2F20210830%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=7b5b35edd60b676c46bc3bd0715fadd1089ed4d44f68c2bf987bbcd905faef3d).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13340", "html_url": "https://github.com/huggingface/transformers/pull/13340", "diff_url": "https://github.com/huggingface/transformers/pull/13340.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13340.patch", "merged_at": 1630396621000 }
https://api.github.com/repos/huggingface/transformers/issues/13339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13339/comments
https://api.github.com/repos/huggingface/transformers/issues/13339/events
https://github.com/huggingface/transformers/pull/13339
983,114,173
MDExOlB1bGxSZXF1ZXN0NzIyNzQ2Njkz
13,339
Add generate kwargs to Seq2SeqTrainingArguments
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
COLLABORATOR
null
# What does this PR do?

This PR adds two new `Seq2SeqTrainingArguments` to control which `max_length` and `num_beams` are used during the intermediate evaluations of the `Seq2SeqTrainer`. This feature has been requested multiple times, most recently in #13252.
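For illustration, configuring the evaluation-time generation would look roughly like this; `generation_max_length` and `generation_num_beams` are the argument names this PR introduces, and the concrete values are placeholders:

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    generation_max_length=128,  # max_length passed to generate() during eval
    generation_num_beams=4,     # num_beams passed to generate() during eval
)
```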
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13339/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13339/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13339", "html_url": "https://github.com/huggingface/transformers/pull/13339", "diff_url": "https://github.com/huggingface/transformers/pull/13339.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13339.patch", "merged_at": 1630413720000 }
https://api.github.com/repos/huggingface/transformers/issues/13338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13338/comments
https://api.github.com/repos/huggingface/transformers/issues/13338/events
https://github.com/huggingface/transformers/pull/13338
983,092,551
MDExOlB1bGxSZXF1ZXN0NzIyNzI5NTQ1
13,338
Handle nested dict/lists of tensors as inputs in the Trainer
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
COLLABORATOR
null
# What does this PR do?

This PR refactors the `_prepare_inputs` method of the Trainer to make it recursively handle any nested list/dict of tensors.

Fixes #13146
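A sketch of the recursive idea (not the PR's actual diff): walk any nesting of dicts, lists, and tuples and move every tensor to the training device, leaving other values untouched.

```python
import torch

def prepare_inputs(data, device):
    # Recursively move every tensor in a nested dict/list/tuple to `device`.
    if isinstance(data, dict):
        return {k: prepare_inputs(v, device) for k, v in data.items()}
    if isinstance(data, (list, tuple)):
        # Plain lists/tuples only; namedtuples would need special handling.
        return type(data)(prepare_inputs(v, device) for v in data)
    if isinstance(data, torch.Tensor):
        return data.to(device)
    return data
```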
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13338/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13338", "html_url": "https://github.com/huggingface/transformers/pull/13338", "diff_url": "https://github.com/huggingface/transformers/pull/13338.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13338.patch", "merged_at": 1630406071000 }
https://api.github.com/repos/huggingface/transformers/issues/13337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13337/comments
https://api.github.com/repos/huggingface/transformers/issues/13337/events
https://github.com/huggingface/transformers/pull/13337
982,940,533
MDExOlB1bGxSZXF1ZXN0NzIyNjA3NTE0
13,337
Fix release utils
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
COLLABORATOR
null
# What does this PR do?

The regex pattern in the release util was wrong for the `conf.py` file. This PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13337/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13337", "html_url": "https://github.com/huggingface/transformers/pull/13337", "diff_url": "https://github.com/huggingface/transformers/pull/13337.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13337.patch", "merged_at": 1630339754000 }
https://api.github.com/repos/huggingface/transformers/issues/13336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13336/comments
https://api.github.com/repos/huggingface/transformers/issues/13336/events
https://github.com/huggingface/transformers/pull/13336
982,933,138
MDExOlB1bGxSZXF1ZXN0NzIyNjAxODQ3
13,336
Fix AutoTokenizer when no fast tokenizer is available
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
COLLABORATOR
null
# What does this PR do?

Currently, the `AutoTokenizer` API will not work when a user tries to instantiate a model that does not have a fast tokenizer, due to some wrong logic in the function `tokenizer_class_from_name`. This PR fixes that.

Fixes #13161
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13336/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13336", "html_url": "https://github.com/huggingface/transformers/pull/13336", "diff_url": "https://github.com/huggingface/transformers/pull/13336.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13336.patch", "merged_at": 1630338918000 }
https://api.github.com/repos/huggingface/transformers/issues/13335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13335/comments
https://api.github.com/repos/huggingface/transformers/issues/13335/events
https://github.com/huggingface/transformers/issues/13335
982,927,430
MDU6SXNzdWU5ODI5Mjc0MzA=
13,335
T5 - Flax - Decreasing performance on pretraining
{ "login": "peregilk", "id": 9079808, "node_id": "MDQ6VXNlcjkwNzk4MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peregilk", "html_url": "https://github.com/peregilk", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "organizations_url": "https://api.github.com/users/peregilk/orgs", "repos_url": "https://api.github.com/users/peregilk/repos", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "received_events_url": "https://api.github.com/users/peregilk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "To me this looks like the model is somehow overfitting on the pretraining data. \r\n\r\nThink it's very hard to know exactly what is happening here though as the model is trained on specific Norwegian data...one thing it would try is to take the fully pretrained model and to check if it has a tendency to generate the same output tokens. Maybe some words are overly represented in the training data and the model overfits to those words?! \r\nMaybe using dropout and/or weight_decay (the classic methods for regularization) could help here? \r\n\r\nGently pinging @craffel (hope that's fine) here in case he has seen something similar, has good ideas for analyzing the fully pretrained model and/or other ideas of what could be going on :-)", "@peregilk - also could you link the model repo on the hub here so that I can take a look into the config? ", "Thanks. Here is the link to the repo: https://huggingface.co/pere/norwegian-t5-base-NCC-fast\r\n\r\nI absolutely agree that this really looks a lot like overfitting. However, I really can not see why this could be happening. The corpus is described [here](https://github.com/NBAiLab/notram/tree/master/corpus) and [here](https://github.com/NBAiLab/notram/blob/master/corpus/official_NCC2.md). It is huge (250GB), and heavily deduplicated and cleaned. It is also a collection from multiple sources. There should really be no repetitive parts here to overfit on.\r\n\r\nPlease note that English words or phrases often are used in Norwegian today, and that most Norwegian speaks/understands English. We have therefore added roughly 15GB of English text to the corpus. At least in theory, the model should have some basic understanding of English as well. \r\n\r\nIll do some more tests and report the results.\r\n \r\n\r\n", "Interesting. I will just confirm that the only time we have seen this behavior (train loss going down, MLM accuracy going up, but downstream task performance going down) is when the model is overfitting to the training dataset. When you say \"Eval accuracy is also increasing\", do you mean you are computing MLM accuracy on a held-out validation set? Or is it on the train set?\r\n\r\nOther random things to think about -\r\n1. How many steps are you training on for the downstream task? How are you doing checkpoint selection?\r\n2. What vocabulary are you using?\r\n3. Are you regularizing on the downstream task?", "Thanks both @patrickvonplaten and @craffel for your insightful comments. Highly appreciated.\r\n\r\nI have dived into this, and I am now even more confused about what is going on. As Patrick suggested, I looked at how the model doing MLM-like-tasks at various pretraining steps. I am not detecting repeated tokens, or other oddities. I was only able to figure out how to give me the first prediction, so it is a bit hard comparing it directly to BERT, but it looks to be on the same level. Most predictions give grammatical sense. 
Also for English (<10% of total training corpus).\r\n\r\n```\r\nfrom transformers import AutoTokenizer, T5ForConditionalGeneration, FlaxT5ForConditionalGeneration\r\nmodel = T5ForConditionalGeneration.from_pretrained('pere/norwegian-t5-base-NCC-fast', from_flax=True)\r\n# Load 250k checkpoint instead \r\n#model_250k = T5ForConditionalGeneration.from_pretrained('pere/norwegian-t5-base-NCC-fast', from_flax=True, revision='49d7631d423fc64770a8c8e0d55216792031c97d')\r\ntokenizer = AutoTokenizer.from_pretrained('pere/norwegian-t5-base-NCC-fast', use_fast=True)\r\ninput_ids = tokenizer.encode('This is a small <extra_id_0> explaining how to <extra_id_1> a language model.', return_tensors='pt')\r\nprint(tokenizer.batch_decode(model.generate(input_ids)))\r\n#output -> \"section\" and \"write\"\r\n``` \r\nSince it seems to perform reasonable good on English, I tried running the run_summarization_flax.py example script on the xsum dataset. I modified the script to support t5, and am using the default settings (lr 5e-5, epochs 6). I adjust the batch size to maximum based on vocab size (without adjusting lr), and are getting these ROUGHE2-results.\r\nBART (default example): 16.99\r\nt5-base: 13.66\r\nmt5-base: 10.36\r\nNorwegian-t5: 9.16 (after 100k) - 7.10 (after 1.000k). See graph below:\r\n![image](https://user-images.githubusercontent.com/9079808/132212886-ad3d7e7a-7c99-484d-b6b5-1631314696c8.png)\r\n\r\nI think this points in the direction of the error being on the pretrained model (even if the qualitative mlm-test does not detect this).\r\n\r\n@craffel: We are using a community script for the streaming of the dataset here. I read though it, and spotted a few inaccuracy. For instance it seem like they are drawing a new eval set at the start of each epoch. This is of course not ideal, but the model is not running more than 2-3 epochs, so I doubt that this is the reason for the eval-accuracy to be improving.\r\n\r\n 1) The first figure is from running 1 epoch on a 100.000 example parallell corpus. Max performance here is after nearly 10 epochs, I did not have time to do this on all checkpoints. However, I made a few tests, and checkpoints that are doing bad after 1 epoch, seem to be bad at 10 epochs as well. \r\n\r\n2) A custom built 50k cased vocab file - Norwegian. I have verified that the tokenisation is reasonable.\r\n\r\n3) We are using the default dropout=0.1. I do not think weight decay/L2 is set here. \r\n\r\nMy main concern here is the added streaming code in the [training script](https://huggingface.co/pere/norwegian-t5-base-NCC-fast/blob/main/run_t5_mlm_flax_streaming.py). If it for some strange reason kept feeding the same examples to the trainer, this is maybe the result that should be expected (?). However, I have been reading through the code carefully, and this does not seem to be happening.\r\n\r\nAny ideas? Are there verified T5 pretraining scrips with streaming support that I can test?\r\n\r\n\r\n\r\n\r\n ", "Is it maybe possible that you can try the official (non-streaming) t5 MLM script: https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py? This one is more tested. ", "@patrickvonplaten Yes. I have tried that - on a smaller dataset. I am not seeing issues like this.\r\n\r\nHowever, I was unable to get it to run on the 250GB set. I created a TPU VM with an external disk. Unfortunately, I am getting memory issues on the VM. ", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "There is one small issue with this script. But not sure if that is the reason for the problem.\r\n\r\n1. [Here](https://huggingface.co/pere/norwegian-t5-base-NCC-fast/blob/main/run_t5_mlm_flax_streaming.py#L514) : Text stream has been tokenized with an additional `</s>` . I believe we should add an additional argument `add_special_tokens=False`. Here adding `eos` token refers document separator not sample separator. \r\n2. In the [data collator](https://huggingface.co/pere/norwegian-t5-base-NCC-fast/blob/main/run_t5_mlm_flax_streaming.py#L234), at the end of each `src_input`, and `target_input` we should manually add an `eos_token`\r\n\r\n@patrickvonplaten @peregilk \r\n", "Just to update this issue as well. We have looked closer at this, and have been running a lot of experiments. I will post the results from this when they are done. The short summary is however that we need to increase regularisation (ie weight_decay) when finetuning our T5s that have been pretrained a long time. I have tested both with a large streaming dataset and smaller non-streaming datasets. It seems like we have the same issue in both.\r\n\r\nIn the examples reported in the graph above, we are able to get decent results when finetuning a T5 that has been pretrained for 200k steps, right \"out of the box\". There are optimal learning rates, but all in all it is not very sensitive to this. The T5 that has been pretrained for 1M steps is only possible to finetune if you add a significant amount of regularisation. Adding \"weight_decay\" seem to be the trick. The optimal dropout seem to be close to 0.1 for all models.\r\n\r\nI will post our results when they are completed in a few days. So far, we have no explanation/theory as to why this is happening. Might be related to what @sbmaruf is reporting. I simply do not know.", "I have never used any kind of regularization on T5 except for dropout; dropout=0.1 often helps during fine-tuning (have never tried any other value).", "Thanks @sbmaruf, @craffel and @patrickvonplaten for your comments. Before continuing this thread, I wanted to run more experiments and making sure that we get consistent results. We have now completed these tests.\r\n\r\nFirst let me sum up: We are training a T5 on a large Norwegian corpus from scratch. We are using the Flax example code with the hyperparameters above, and using the T5 v1.1 training regime. We are seeing a good decline in loss and increase in accuracy. This seem to be the case also for the evaluation-set! We have confirmed \"the error\" on two runs, both on a huge streaming 150GB corpus, and on a smaller 30GB corpus. We are pretty certain that this is a high quality corpus, and it is intensely deduplicated (roughly 20% of the corpus is Norwegian MC4. the major part is born digital public reports). We are pretty sure we not overfitting. We are mainly testing on a simple translation task (Bokmål vs Nynorsk), but have seen the same issue on other tasks, even if we have not run intensively tests here since the datasets are smaller.\r\n\r\nHere is what we are observing: It gets incrementally harder to finetune various checkpoints from the pretraining! We are able to finetune late checkpoints but only after adding weight decay. 
We have experimented with a lot of different learning rates, and the tendency is the same all over.\r\n\r\nHere you can see an example from the 100k pretrain checkpoint:\r\n![image](https://user-images.githubusercontent.com/9079808/139430715-a373deab-d0b3-4393-961a-196dd9169ecb.png)\r\n\r\nYou can see that it finetunes easily. Adding weight decay just makes the training a bit slower.\r\n\r\nHere is from the 500k pretrain checkpoint:\r\n![image](https://user-images.githubusercontent.com/9079808/139430954-1bdd4b1b-6bdf-4a41-bbd8-5c988f4e126a.png)\r\n\r\nHere we need weight decay to get stable finetuning. \r\n\r\nAt the 1M pretrain checkpoint, finetuning is also notably slower. Only with weight decay are we able to complete it:\r\n![image](https://user-images.githubusercontent.com/9079808/139431256-5e8e1b25-96b6-4bc1-8f92-bd98eb0ea675.png)\r\n\r\nAs you see all the yellow lines (no weight decay) are off the chart here. It simply does not converge at all.\r\n\r\nHere are the complete results from [W&B](https://wandb.ai/nbailab/norwegian-t5-base-checkpoints/reports/T5-Norwegian-Evaluation--VmlldzoxMTY4NzI4?accessToken=7jsxdmw82kf29vnqzxevkii6y9reuiak8wv591e256oe5ryf904h125g1h202d0x)\r\n\r\nFor some really strange reason we are able to end up with a decent result also on the late checkpoints, but only after very careful tuning of the hyperparameters. The model seem to be very unstable and a nightmare to finetune.\r\n\r\nDo any of you think the bug reported by @sbmaruf above could cause any of this?\r\n\r\n ", "I don't think that adding or not adding the EOS token makes a difference - so I highly doubt that is the reason for your observations...BTW here is the original preprocessing code for pretraining: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/preprocessors.py#L1864", "Thanks @patrickvonplaten for the response. \r\n\r\nWe have a new version of training corpus available soon. From the figure above it seems like we are able to detact instability after less than a week on a v3-8, maybe even before we reach one epoch. We will see if we are able to get a pytorch version of the T5 pretraining running. Maybe we can train pytorch and flax in parallell and see if they differ.\r\n\r\nIf you have any idea about what is causing this and/or other tests we can do, please let us know.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,638
1,638
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.17 - JaxLib version: 0.68 ### Who can help @patrickvonplaten ### Script Using a slightly modified version of run_t5_mlm_flax.py that also support streaming large datasets. ## Information I am posting this as a bug report, since the behaviour is counter intuitive. I am not sure if this is a bug with jax/T5, or if it is actually a behaviour that should be expected from T5. We are training T5-base (v1.1) on a large, cleaned 250GB Norwegian dataset. We are training from 1M steps, which should equal roughly two complete epochs. With a lr=8e-3, bs=32, seq_length=512, adafactor, we are experiencing a steady decay in loss: ![image](https://user-images.githubusercontent.com/9079808/131355355-22e4542f-6501-450b-b2fb-4c3b967de0aa.png) The image above shows the first 250k steps. We needed to restart here, so I have not patched the event-files together. But the final loss after 1M steps ends on 1.349. Eval accuracy is also increasing. The weird thing is that the final checkpoint has really terrible performance on our downstream task! Looking into this issue, we evaluated multiple pre-training steps, by finetuning each of them 60k steps on a task of translating between two Norwegian dialects. ![image](https://user-images.githubusercontent.com/9079808/131357344-d482bfb8-659e-468f-a9d6-1378d3b4eccd.png) The red and blue dots are two models done before and after the t5 optimisation submitted by @patrickvonplaten. The tendency here is very clear. After roughly 200k steps the model starts to suddenly perform worse on the downstream task, even if the loss is decreasing and the eval accuracy of the pretrained model in improving. The detonation happens before 1 epoch of the pretrain dataset, and though it looks like over-fitting, we find this extremely unlikely. We have more experience with BERT-like models, and here performance on downstream tasks always improves as long as MLM-accuracy is improving. Is this expected behaviour of T5?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13335/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/13335/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13334/comments
https://api.github.com/repos/huggingface/transformers/issues/13334/events
https://github.com/huggingface/transformers/pull/13334
982,857,494
MDExOlB1bGxSZXF1ZXN0NzIyNTQzNDM1
13,334
Update label2id in the model config for run_glue
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
COLLABORATOR
null
# What does this PR do? This PR fixes a bug in all the `run_glue` examples, where the id-to-label correspondence was not properly saved in the model config. Fixes #13298
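For context, a minimal sketch of what "saving the correspondence" amounts to; `label_list` and the MRPC label names are assumptions for illustration, not the PR's exact code.

```python
# Hypothetical illustration of the fix's effect: persist both label maps on
# the config so they survive save_pretrained()/from_pretrained() round trips.
from transformers import AutoModelForSequenceClassification

label_list = ["not_equivalent", "equivalent"]  # e.g. MRPC
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=len(label_list)
)
model.config.label2id = {label: i for i, label in enumerate(label_list)}
model.config.id2label = {i: label for i, label in enumerate(label_list)}
```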
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13334/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13334", "html_url": "https://github.com/huggingface/transformers/pull/13334", "diff_url": "https://github.com/huggingface/transformers/pull/13334.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13334.patch", "merged_at": 1630334110000 }
https://api.github.com/repos/huggingface/transformers/issues/13333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13333/comments
https://api.github.com/repos/huggingface/transformers/issues/13333/events
https://github.com/huggingface/transformers/pull/13333
982,792,730
MDExOlB1bGxSZXF1ZXN0NzIyNDkwODE2
13,333
Use existing functionality for #13251
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
COLLABORATOR
null
# What does this PR do? #13251 fixes an issue while re-implementing existing functionality. This PR refactors that fix to re-use the existing `model_type_to_module_name` helper instead.
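For reference, the helper being reused maps a model type string to the module directory that hosts it; a small sketch (import path as in transformers' `models.auto.configuration_auto` at the time):

```python
# Illustration of the reused helper; e.g. the special-cased "openai-gpt"
# model type lives in the "openai" module.
from transformers.models.auto.configuration_auto import model_type_to_module_name

print(model_type_to_module_name("openai-gpt"))  # -> "openai"
```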
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13333/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13333", "html_url": "https://github.com/huggingface/transformers/pull/13333", "diff_url": "https://github.com/huggingface/transformers/pull/13333.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13333.patch", "merged_at": 1630331003000 }
https://api.github.com/repos/huggingface/transformers/issues/13332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13332/comments
https://api.github.com/repos/huggingface/transformers/issues/13332/events
https://github.com/huggingface/transformers/issues/13332
982,769,307
MDU6SXNzdWU5ODI3NjkzMDc=
13,332
bug in gpt2 notebook (in tensorflow)
{ "login": "randomgambit", "id": 8282510, "node_id": "MDQ6VXNlcjgyODI1MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8282510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/randomgambit", "html_url": "https://github.com/randomgambit", "followers_url": "https://api.github.com/users/randomgambit/followers", "following_url": "https://api.github.com/users/randomgambit/following{/other_user}", "gists_url": "https://api.github.com/users/randomgambit/gists{/gist_id}", "starred_url": "https://api.github.com/users/randomgambit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/randomgambit/subscriptions", "organizations_url": "https://api.github.com/users/randomgambit/orgs", "repos_url": "https://api.github.com/users/randomgambit/repos", "events_url": "https://api.github.com/users/randomgambit/events{/privacy}", "received_events_url": "https://api.github.com/users/randomgambit/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[ "summoning the masters @LysandreJik @sgugger @Rocketknight1 💯 ", "Hey! There are a couple of issues here. The first is that we're trying to move away from TFTrainer towards Keras - there'll be a new version of that notebook coming soon, like I promised!\r\n\r\nIn the meantime, your approach should work, though. The error you're getting is because `lm_datasets` is actually a `DatasetDict` containing both the train and validation set, so everything downstream gets confused. You probably want to swap out `lm_datasets` for `lm_datasets['train']` in that call to `TFTrainer`. However, like I said, we're trying to deprecate TFTrainer, so I'm trying to avoid doing any more bugfixing for it. I'm working on getting the new examples in ASAP!", "Thanks @Rocketknight1 ! Actually I was getting the same error even when I was using a `dataset` that only contains one set of data. But you are absolutely right: there is no need to fix something that is going to be deprecated soon. Happy to help if you need anything! Thanks!", "The good news is I'm moving to working on those TF notebooks right now, so hopefully I'll have a proper example to show you soon. However, the official launch of the new notebooks might depend on the PR at https://github.com/huggingface/datasets/pull/2731 being accepted and making it to release, since I'm planning to use that new method in a lot of them. \r\n\r\nStill, I'll make sure to ping you as soon as I have a LM example ready - just be aware that you might have to install a pre-release version of `datasets` to get it to work!", "got it. happy to try out the beta version of them at my risk and peril ;-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Same question at year 2023 for \r\nhttps://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb", "Solution: \r\nhttps://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_clm.py" ]
1,630
1,676
1,633
NONE
null
Hello there! I tried to use the language-modeling-from-scratch notebook https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling_from_scratch.ipynb#scrollTo=JEA1ju653l-p More specifically, I need to run it using `tensorflow`. With the simple strategy of using the `TF` versions of the `huggingface` functions, everything seems to work correctly until I reach the `trainer` step, where I get a mysterious cardinality issue. This looks like a bug... Can you please have a look at the code below? ``` model_checkpoint = "gpt2" tokenizer_checkpoint = "sgugger/gpt2-like-tokenizer" from datasets import load_dataset from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint) # tokenizer used by tokenize_function datasets = load_dataset('wikitext', 'wikitext-2-raw-v1') def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_datasets = datasets.map(tokenize_function, batched=True, remove_columns = ['text']) block_size = 128 def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. total_length = (total_length // block_size) * block_size # Split by chunks of max_len. result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=1000 ) print(tokenizer.decode(lm_datasets['train'][2]["input_ids"])) from transformers import AutoConfig, TFAutoModelForCausalLM config = AutoConfig.from_pretrained(model_checkpoint) model = TFAutoModelForCausalLM.from_config(config) from transformers import TFTrainer, TFTrainingArguments training_args = TFTrainingArguments( "test-clm", evaluation_strategy = "epoch", learning_rate=2e-5) trainer = TFTrainer( model=model, args = training_args, train_dataset=lm_datasets) trainer.train() Traceback (most recent call last): File "<ipython-input-82-01e49a077e43>", line 11, in <module> trainer.train() File "C:\Users\john\anaconda3\envs\keras\lib\site-packages\transformers\trainer_tf.py", line 472, in train train_ds = self.get_train_tfdataset() File "C:\Users\john\anaconda3\envs\keras\lib\site-packages\transformers\trainer_tf.py", line 150, in get_train_tfdataset self.num_train_examples = self.train_dataset.cardinality().numpy() AttributeError: 'DatasetDict' object has no attribute 'cardinality' ``` What do you think? Thanks!
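Since the maintainers in this thread point away from TFTrainer toward plain Keras, here is a hedged sketch of that path. It assumes a `datasets` version that already ships `to_tf_dataset` (the PR linked in the comments) and a recent `transformers` with `DefaultDataCollator`, and it reuses `lm_datasets` and `model` from the snippet above.

```python
# Sketch only: train the causal-LM model with Keras instead of TFTrainer.
import tensorflow as tf
from transformers import DefaultDataCollator

collator = DefaultDataCollator(return_tensors="tf")
train_set = lm_datasets["train"].to_tf_dataset(  # note: the "train" split, not the DatasetDict
    columns=["input_ids", "attention_mask", "labels"],
    shuffle=True,
    batch_size=8,
    collate_fn=collator,
)
# On recent versions the model computes its own LM loss when labels are passed,
# so no loss needs to be given to compile().
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5))
model.fit(train_set, epochs=1)
```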
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13332/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13331/comments
https://api.github.com/repos/huggingface/transformers/issues/13331/events
https://github.com/huggingface/transformers/issues/13331
982,700,544
MDU6SXNzdWU5ODI3MDA1NDQ=
13,331
bert: What is the tf version corresponding to transformers?
{ "login": "xmcs111", "id": 46318698, "node_id": "MDQ6VXNlcjQ2MzE4Njk4", "avatar_url": "https://avatars.githubusercontent.com/u/46318698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xmcs111", "html_url": "https://github.com/xmcs111", "followers_url": "https://api.github.com/users/xmcs111/followers", "following_url": "https://api.github.com/users/xmcs111/following{/other_user}", "gists_url": "https://api.github.com/users/xmcs111/gists{/gist_id}", "starred_url": "https://api.github.com/users/xmcs111/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xmcs111/subscriptions", "organizations_url": "https://api.github.com/users/xmcs111/orgs", "repos_url": "https://api.github.com/users/xmcs111/repos", "events_url": "https://api.github.com/users/xmcs111/events{/privacy}", "received_events_url": "https://api.github.com/users/xmcs111/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@Rocketknight1 ", "Hello! Do you mind providing the error you're seeing? Thank you!", "'''\r\nI:?[35mVENTILATOR?[0m:freeze, optimize and export graph, could take a while...\r\n2021-08-30 20:07:49.360788: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\nd:\\users\\...\\pycharmprojects\\machine learning i don't study\\venv37\\lib\\site-packages\\bert_serving\\server\\helper.py:176: UserWarning: Tensorflow 2.4.0 is not tested!\r\n It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/\r\n 'Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/' % tf.__version__)\r\nE:?[36mGRAPHOPT?[0m:fail to optimize the graph!\r\nTraceback (most recent call last):\r\n File \"D:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python37_64\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"D:\\Program Files (x86)\\Microsoft Visual Studio\\Shared\\Python37_64\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"D:\\Users\\...\\PycharmProjects\\machine learning I don't study\\venv37\\Scripts\\bert-serving-start.exe\\__main__.py\", line 7, in <module>\r\n File \"d:\\users\\...\\pycharmprojects\\machine learning i don't study\\venv37\\lib\\site-packages\\bert_serving\\server\\cli\\__init__.py\", line 4, in main\r\n with BertServer(get_run_args()) as server:\r\n File \"d:\\users\\...\\pycharmprojects\\machine learning i don't study\\venv37\\lib\\site-packages\\bert_serving\\server\\__init__.py\", line 71, in __init__\r\n self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))\r\nTypeError: cannot unpack non-iterable NoneType object\r\n'''", "Hi, it seems like you're using [bert-as-service](https://github.com/hanxiao/bert-as-service) with an unsupported version of Tensorflow. That isn't a Huggingface project, so we can't really support it here, unfortunately! Try filing an issue at that repo instead!\r\n\r\nIf you're interested in learning to use the Transformers library, you can check out our [documentation](https://huggingface.co/transformers/quicktour.html), our [course](https://huggingface.co/course/chapter1) or our [example code](https://github.com/huggingface/transformers/tree/master/examples), but we can't really answer questions on any of that in GitHub issues - try the [forums](https://discuss.huggingface.co/) instead!\r\n\r\nI'm going to close this issue, but if you believe there's an actual problem in the Transformers library, separate from `bert-as-service`, please feel free to add more info and re-open it." ]
1,630
1,630
1,630
NONE
null
I use Python 3.7, TF 2.4.0, CUDA 11.1 and cuDNN 8.0.4 to run bert-base-un and get an error. - albert, bert, xlm: @LysandreJik - tensorflow: @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13331/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13330/comments
https://api.github.com/repos/huggingface/transformers/issues/13330/events
https://github.com/huggingface/transformers/issues/13330
982,631,701
MDU6SXNzdWU5ODI2MzE3MDE=
13,330
model(**batch) returns a loss dictionary that can't be divided by gradient_accumulation_steps
{ "login": "samarth-b", "id": 1555600, "node_id": "MDQ6VXNlcjE1NTU2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1555600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samarth-b", "html_url": "https://github.com/samarth-b", "followers_url": "https://api.github.com/users/samarth-b/followers", "following_url": "https://api.github.com/users/samarth-b/following{/other_user}", "gists_url": "https://api.github.com/users/samarth-b/gists{/gist_id}", "starred_url": "https://api.github.com/users/samarth-b/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samarth-b/subscriptions", "organizations_url": "https://api.github.com/users/samarth-b/orgs", "repos_url": "https://api.github.com/users/samarth-b/repos", "events_url": "https://api.github.com/users/samarth-b/events{/privacy}", "received_events_url": "https://api.github.com/users/samarth-b/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "when using a single GPU the script works. \r\nie, outputs.loss returns a shape (1,) that can be divided by int. \r\n\r\nwhat is the best way to use `outputs.loss` for multiple GPU settings? ", " This PR https://github.com/huggingface/accelerate/pull/149 will help you." ]
1,630
1,630
1,630
NONE
null
Running the summarization example with T5-small with the following command produces a dict-divided-by-int error. To reproduce, run: ``` export TASK_NAME=mrpc accelerate launch run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ~/tmp/tst-summarization ``` https://github.com/huggingface/transformers/blob/8be921f9de012d6f82f1cf4b2dcd4bdf2262071b/examples/pytorch/summarization/run_summarization_no_trainer.py#L521 ``` ----> 1 loss = loss / args.gradient_accumulation_steps TypeError: unsupported operand type(s) for /: 'dict' and 'int' ``` **transformers version==4.9.2** **GPU: Tesla V100 X 2** **accelerate config:** ``` compute_environment: LOCAL_MACHINE deepspeed_config: {} distributed_type: MULTI_GPU fp16: true machine_rank: 0 main_process_ip: null main_process_port: null main_training_function: main num_machines: 1 num_processes: 1 ``` Should it be: ``` outputs = model(**batch) loss = outputs.loss['loss'] ``` Happy to do the PR, if confirmed.
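A defensive workaround sketch matching the dict shape reported here; this is not the upstream fix (that landed in accelerate#149, linked in the comments), and it patches the quoted script, so `model`, `batch` and `args` come from its surrounding loop.

```python
# Hedged workaround: unwrap the loss if a wrapper returned it as a dict.
outputs = model(**batch)
loss = outputs.loss
if isinstance(loss, dict):  # observed shape: {"loss": tensor}
    loss = loss["loss"]
loss = loss / args.gradient_accumulation_steps
```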
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13330/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13329/comments
https://api.github.com/repos/huggingface/transformers/issues/13329/events
https://github.com/huggingface/transformers/issues/13329
982,620,411
MDU6SXNzdWU5ODI2MjA0MTE=
13,329
GPT-J-6B in run_clm.py
{ "login": "MantasLukauskas", "id": 52700341, "node_id": "MDQ6VXNlcjUyNzAwMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/52700341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MantasLukauskas", "html_url": "https://github.com/MantasLukauskas", "followers_url": "https://api.github.com/users/MantasLukauskas/followers", "following_url": "https://api.github.com/users/MantasLukauskas/following{/other_user}", "gists_url": "https://api.github.com/users/MantasLukauskas/gists{/gist_id}", "starred_url": "https://api.github.com/users/MantasLukauskas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MantasLukauskas/subscriptions", "organizations_url": "https://api.github.com/users/MantasLukauskas/orgs", "repos_url": "https://api.github.com/users/MantasLukauskas/repos", "events_url": "https://api.github.com/users/MantasLukauskas/events{/privacy}", "received_events_url": "https://api.github.com/users/MantasLukauskas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @MantasLukauskas, GPT-J is not yet merged into `transformers`, see https://github.com/huggingface/transformers/pull/13022", "@LysandreJik Is there any way to do a workaround for fine-tuning it because as I see merge could take some time", "You could checkout the PR directly and try fine-tuning it with the GPT-J code!\r\n\r\n```\r\ngit remote add StellaAthena https://github.com/StellaAthena/transformers\r\ngit fetch StellaAthena\r\ngit checkout -b gptj StellaAthena/master\r\n```", "You can also install directly from my fork with\r\n`pip install -e git+https://github.com/StellaAthena/transformers#egg=transformers`", "@StellaAthena I am trying to fine-tune GPT-J from your branch. But neither Tesla A100 with 40GB GPU RAM (Google Cloud) nor TPU v3-8 allow for this. OOM error in both cases.\r\n\r\nI am setting batch_size = 1, gradient_checkpointing, trying different block_sizes `1024`, `512`, `256`. There is OOM error in all cases.\r\n\r\nIs it possible to fine-tune it on such devices?", "@dimaischenko Do you use run_clm.py for fine-tune or do that in another way?", "@MantasLukauskas Yes, by run_clm.py", "@dimaischenko I got error \"RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 4027105280 bytes. Error code 12 (Cannot allocate memory)\" do you had the same? \r\n\r\n100 GB RAM + DeepSpeed Zero 3 + T4 15 GB", "> @dimaischenko I got error \"RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate [4027105280](tel:4027105280) bytes. Error code 12 (Cannot allocate memory)\" do you had the same?\r\n> \r\n> 100 GB RAM + DeepSpeed Zero 3 + T4 15 GB\r\n\r\n4,027,105,280 <<< 100 GB, so it’s hard to see how that’s the issue, unless you have something else running. Can you print out the amount of free memory during the loading process?", "Thanks @StellaAthena and @EricHallahan for all your work on the #13022 GPT-J fork!! \r\n\r\nOver the past few days I've been playing around with the current state of the fork and I am running into the same OOM issues that are referenced here by @dimaischenko.\r\n\r\nHere is some information from my end in case it is helpful debugging what is happening (I'd be happy to put this in a separate issue if that is desired). \r\n\r\n_System:_ I am running everything on a compute cluster (i.e., not g-colab) with ~384GB of ram and 8x RTX 6000 GPUs with 24gb vram each. I am using the fork by @StellaAthena and a fresh conda environment with Python 3.9. \r\n\r\nMy observations:\r\n\r\n1. I can't load the float32 model onto my RTX 6000 without running into an OOM error. With `model.half().cuda() ` and/or `torch_dtype=torch.float16` when loading the model it does work. As far as I understand, I should be able to load the float32 model with an RTX 6000 24GB? Given that I can't load the float32 model it might be that my OOM errors are caused by the issue brought up by @oborchers even why trying to use fp16.\r\n2. Irrespective of my training parameters (e.g., everything set to minimum) my training always triggers an OOM error when using `trainer` or the `run_clm.py`script. 
\r\n\r\nFor example, these are my parameters:\r\n```batch\r\n python run_clm.py \\\r\n --model_name_or_path EleutherAI/gpt-j-6B \\\r\n --model_revision float32 \\\r\n --dataset_name wikitext \\\r\n --dataset_config_name wikitext-2-raw-v1 \\\r\n --do_train \\\r\n --do_eval \\\r\n --output_dir /mmfs1/gscratch/tdekok/test-clm-j \\\r\n --overwrite_output_dir true \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 1 \\\r\n --fp16 true \\\r\n --fp16_opt_level O1\r\n```\r\n\r\nThis results in about ~55G RAM usage and I can see in `nvidia-smi` that it fills up my GPU vram beyond the available 23761MiB. \r\n\r\n3. I noticed when using `gpt2` instead of `gpt-j-6b` that the memory usage on gpu:0 is substantially higher relative to the rest. I wonder whether this might be part of the issue:\r\n\r\n![image](https://user-images.githubusercontent.com/13317782/131442025-616c7ea9-c6fd-41f6-8cbe-c532d6cbef0d.png)\r\n", "> This results in about ~55G RAM usage and I can see in `nvidia-smi` that it fills up my GPU vram beyond the available 23761MiB.\r\n\r\nI think it doesn't matter if you are using 8 GPU or 1 GPU cause `batch_size=1`. So at least one sample fits on one video card. I am trying the same params on A100 card with `40Gb` gpu vram and OOM still exists. So I think that your RTX6000 card with 24 gb vram is definitely not enough for fine-tuning.\r\n\r\nBut let's wait for the answer from the creators of the model", "@dimaischenko yes I have the same problem, I tried to use DeepSpeed Zero3 optimizer for this one but even with batch_size=1 and model_revision = float16 I am out of memory. Interesting that with gpt2-xl I have the same problem but I saw a lot of people fine-tuning this model with T4 + Deepspeed :( ", "@dimaischenko I tested a lot of parameters and found that with --block_size 512 I can fine-tune GPT-J model. RAM Usage 100 GB, GPU usage 12 GB (Nvidia T4 total 16 GB), DeepSpeed Zero3 optimizer ", "@MantasLukauskas Sounds interesting. Maybe it's the optimizer. And what option is it enabled by, or do you need to modify the run_clm.py code?", "@dimaischenko DeepSpeed in library implemented into huggingface (https://github.com/microsoft/DeepSpeed) and you do not need to modify run_clm.py code you fine-tune model like that:\r\ndeepspeed --num_gpus 1 run_clm.py --model_name_or_path EleutherAI/gpt-j-6B --num_train_epochs 10 --model_revision float16 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --train_file train.txt --validation_file test.txt --do_train --do_eval --output_dir GPTJ --save_steps 1000 --logging_steps 100 --logging_dir GPTJ/runs --fp16 --deepspeed zero3ws.json --overwrite_output_dir\r\n\r\nMy deepspeed config file can be found here: https://github.com/MantasLukauskas/DeepSpeed-Test/blob/main/zero3.json\r\n\r\n", "@MantasLukauskas thanks! I'll try today and write about the results.", "@dimaischenko \r\n> @StellaAthena I am trying to fine-tune GPT-J from your branch. But neither Tesla A100 with 40GB GPU RAM (Google Cloud) nor TPU v3-8 allow for this. OOM error in both cases.\r\n\r\nIf you are working on TPUs, I strongly recommend using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) library which was written for the purpose of producing GPT-J models. The version on HuggingFace is a PyTorch port of the original Jax code which has been used by many people on TPUs.\r\n\r\nI'm not sure why you're having trouble with A100s though, as I have run the code on A100s before. 
Can you provide further details about how you're running the model? Is it loading the model or the act of fine-tuning that OOMs?", "@StellaAthena Thanks! I'll try `mesh-transformer-jax`. It's just that I already have a reliable fine-tuning pipeline for HuggingFace.\r\n\r\nAbout OOM. I'll repeat my attempts today and will write logs. But the exact loading of the model was successful. And even performed validation with perplexity calculation on validation samples. But when it tried «to eat» the first sample in training, OOM would crash.", "> @StellaAthena Thanks! I'll try `mesh-transformer-jax`. It's just that I already have a reliable fine-tuning pipeline for HuggingFace.\r\n> \r\n> About OOM. I'll repeat my attempts today and will write logs. But the exact loading of the model was successful. And even performed validation with perplexity calculation on validation samples. But when it tried «to eat» the first sample in training, OOM would crash.\r\n\r\nThis is really weird, given that you've said the batch size is set to 1. How much memory is allocated before you feed the first datum into the model? Does a different architecture that takes up the same amount of memory also fail?", "@StellaAthena I tried again running run_clm.py from the latest branch on single GPU A100 (40Gb)\r\n\r\n```\r\npython run_clm_orig.py \\\r\n --model_type gptj \\\r\n --model_name_or_path EleutherAI/gpt-j-6B \\\r\n --model_revision float16 \\\r\n --do_train \\\r\n --do_eval \\\r\n --train_file ./data/train.txt \\\r\n --validation_file ./data/val.txt \\\r\n --evaluation_strategy steps \\\r\n --logging_step 300 \\\r\n --learning_rate 0.00002 \\\r\n --save_steps 1500 \\\r\n --fp16 \\\r\n --per_device_train_batch_size 1 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 1 \\\r\n --num_train_epochs 1 \\\r\n --block_size 1024 \\\r\n --save_total_limit 1 \\\r\n --overwrite_output_dir \\\r\n --output_dir ./out/test_gptj_orig \r\n```\r\n\r\nand got OOM error\r\n```\r\n[INFO|trainer.py:414] 2021-09-01 11:39:10,987 >> Using amp fp16 backend\r\n[INFO|trainer.py:1168] 2021-09-01 11:39:10,997 >> ***** Running training *****\r\n[INFO|trainer.py:1169] 2021-09-01 11:39:10,997 >> Num examples = 6011\r\n[INFO|trainer.py:1170] 2021-09-01 11:39:10,997 >> Num Epochs = 1\r\n[INFO|trainer.py:1171] 2021-09-01 11:39:10,997 >> Instantaneous batch size per device = 1\r\n[INFO|trainer.py:1172] 2021-09-01 11:39:10,997 >> Total train batch size (w. 
parallel, distributed & accumulation) = 1\r\n[INFO|trainer.py:1173] 2021-09-01 11:39:10,997 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1174] 2021-09-01 11:39:10,997 >> Total optimization steps = 6011\r\n 0%| | 0/6011 [00:00<?, ?it/s$\r\nTraceback (most recent call last):\r\n File \"run_clm_orig.py\", line 522, in <module>\r\n main()\r\n File \"run_clm_orig.py\", line 472, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/trainer.py\", line 1284, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/trainer.py\", line 1787, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/trainer.py\", line 1821, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py\", line 780, in forward\r\n return_dict=return_dict,\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py\", line 631, in forward\r\n output_attentions=output_attentions,\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py\", line 286, in forward\r\n feed_forward_hidden_states = self.mlp(hidden_states)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py\", line 249, in forward\r\n hidden_states = self.fc_in(hidden_states)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/linear.py\", line 96, in forward\r\n return F.linear(input, self.weight, self.bias)\r\n File \"/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/functional.py\", line 1847, in linear\r\n return torch._C._nn.linear(input, weight, bias)\r\nRuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 39.59 GiB total capacity; 37.49 GiB already allocated; 19.19 MiB free; 37.73 GiB reserved in total by PyTorch)\r\n 0%| | 0/6011 [00:00<?, ?it/s]\r\n```\r\n\r\nToday will switch to `mesh-transformer-jax` and try to fine-tune on TPU v3-8 and then convert checkpoint to HuggingFace format.", "You are trying to use the Adam optimizer with a model of 24Gb. With Adam, you have four copies of your model: model, gradients, and in the optimizer state the gradients averaged and square averaged. Even with fp16, all of that is still stored in FP32 because of **mixed** precision training (the optimzier update is in full precision). 
So unless you use DeepSpeed to offload the optimizer state and the gradient copy in FP32, you won't be able to fit this 4 x 24GB on your 80GB card.", "@sgugger Thanks for clarification! I configured DeepSpeed and everything started up on the A100 GPU. However, now I need 80Gb cpu RAM, but this is solvable 😄 ", "There is also the NVME offload if CPU RAM becomes a problem :-) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@sgugger - is there any docs on how to do this - you can point to ? I got rtx 3090 - and hitting the KeyError: 'gptj'\r\n(The error is really obscure. it should really have some thing easier to understand.)\r\nI've got 32gb of RAM - @dimaischenko - did bumping to 80gb fix things? ", "@johndpope what is your `transformers` version? It looks like it is outdated and does not have the GPT-J model available.", "@LysandreJik I agree with you. I think that's the problem. @johndpope Yes 80gb ram was enough. To be honest, I don't remember the details anymore, but it seems that it took even less with `DeepSpeed`.", "had trouble with ram - but found this / installing now / supposedly fits 17 / 15gb in VRAM + uses fastapi - https://news.ycombinator.com/item?id=27731266\r\n(it uses tensorflow / but keeps memory footprint lower )\r\nhttps://gist.githubusercontent.com/kinoc/f3225092092e07b843e3a2798f7b3986/raw/fc0dbe522d09d3797dd2a64e7182003f7d9a7fa8/jserv.py" ]
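The "four copies" arithmetic above, made concrete; the parameter count is GPT-J's nominal 6B, and the estimate ignores activations and any fp16 working copy.

```python
# Back-of-the-envelope check of the mixed-precision Adam footprint described
# above: fp32 master weights, gradients, and two Adam moment buffers.
n_params = 6e9
fp32_bytes = 4
copies = 4  # weights + grads + exp_avg + exp_avg_sq
print(f"{n_params * fp32_bytes * copies / 2**30:.0f} GiB")  # ~89 GiB before activations
```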
1,630
1,636
1,633
NONE
null
## Environment info - `transformers` version: 4.10.0.dev0 - Platform: Linux-4.19.0-10-cloud-amd64-x86_64-with-debian-10.5 - Python version: 3.7.8 - PyTorch version (GPU?): 1.7.1+cu110 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help - text generation: @patrickvonplaten - trainer: @sgugger - pipelines: @LysandreJik ## Information The model I am using is GPT-J from the Hugging Face Hub; there is a KeyError with this model, error listed below: Traceback (most recent call last): File "run_clm.py", line 522, in <module> main() File "run_clm.py", line 320, in main config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 514, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/opt/conda/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 263, in __getitem__ raise KeyError(key) KeyError: 'gptj'
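As the comments note, this KeyError simply means the installed transformers predates the GPT-J merge; a quick diagnostic:

```python
# Minimal check: GPT-J support requires a transformers release that includes
# PR #13022 (or the fork install suggested in the thread).
import transformers

print(transformers.__version__)
```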
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13329/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13329/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13328/comments
https://api.github.com/repos/huggingface/transformers/issues/13328/events
https://github.com/huggingface/transformers/issues/13328
982,608,421
MDU6SXNzdWU5ODI2MDg0MjE=
13,328
Licenses for Helsinki-NLP models
{ "login": "okalldal", "id": 6553693, "node_id": "MDQ6VXNlcjY1NTM2OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6553693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/okalldal", "html_url": "https://github.com/okalldal", "followers_url": "https://api.github.com/users/okalldal/followers", "following_url": "https://api.github.com/users/okalldal/following{/other_user}", "gists_url": "https://api.github.com/users/okalldal/gists{/gist_id}", "starred_url": "https://api.github.com/users/okalldal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/okalldal/subscriptions", "organizations_url": "https://api.github.com/users/okalldal/orgs", "repos_url": "https://api.github.com/users/okalldal/repos", "events_url": "https://api.github.com/users/okalldal/events{/privacy}", "received_events_url": "https://api.github.com/users/okalldal/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "@jorgtied might be able to answer this (and then we can programmatically update all models if needed)\r\n\r\nThanks!\r\n\r\n(also cc @sshleifer and @patil-suraj for visibility)", "\nThey come with a CC-BY 4.0 license.\nJörg\n\n\n\n\n> On 30. Aug 2021, at 13.16, Julien Chaumond ***@***.***> wrote:\n> \n> \n> @jorgtied <https://github.com/jorgtied> might be able to answer this (and then we can programmatically update all models if needed)\n> \n> Thanks!\n> \n> (also cc @sshleifer <https://github.com/sshleifer> and @patil-suraj <https://github.com/patil-suraj> for visibility)\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/13328#issuecomment-908221978>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AAEWCPQVHTREFMSMDBTZPCLT7NK6NANCNFSM5DBQJLIQ>.\n> Triage notifications on the go with GitHub Mobile for iOS <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675> or Android <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>. \n> \n\n", "Thanks a lot Jörg 🙏 . I'll update the repos programmatically tomorrow morning", "Done.\r\n\r\nFor reference, here's the script I've run (depends on https://github.com/huggingface/huggingface_hub/pull/339 to be able to run it using `huggingface_hub`): https://gist.github.com/julien-c/b2dcde5df5d5e41ad7c4b594cb54aba3\r\n\r\nAnd here's a partial list of the generated commits (full list attached to the gist):\r\n\r\n```\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bg-en/commit/3a34359f5781368c7748219c2868ffd065f24df0\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bg-fi/commit/04d4dd3690cc730690da31b45745fb3f74198b0f\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bg-sv/commit/7f2c7cc3887492a080441266c63b20fd13497e56\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bi-en/commit/feb365f89ee1f47cad4f1581896b80ae88978983\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bi-es/commit/40001c75cc73df30ac2ffe45d8c3f224ee17781b\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bi-fr/commit/31712329599ad7b50590cd35299ccc8d94029122\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bi-sv/commit/fa443f611486bd359dee28a2ef896a03ca81e515\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bzs-en/commit/4a0238e6463445a99590c0abe7aed5f2f95e064d\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bzs-es/commit/b03449222edb29b8497af1df03c30782995912f5\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bzs-fi/commit/26a623904cfb745bdc48f4e62f4de8ec0f0f0bbb\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bzs-fr/commit/5f69cdba6de378f61042d90ed0a19f3047837ea1\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-bzs-sv/commit/2a12941aeaeaa78979240cfcb1d63e44958af76f\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ca-en/commit/22113f5e0e8e89677d6e0142e55c85402eecb455\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ca-es/commit/3b93f0ccce95f7d8c7a78d56ec5c658271f6d244\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ceb-es/commit/94ff5e6902541d95fc1890e7e5e185477d922271\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ceb-fi/commit/8c5cdaa45a8ef959061c6d97a7f118e2714725bc\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ceb-fr/commit/90d773c1774988007f9fd8f44477de8d5ee310b6\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ceb-sv/commit/bf1810fb698cbeb2a7beeecb96917557ece3158f\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-chk-en/commit/d9a7fad4fdc70b734457a5eee
20835d8899e7415\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-chk-es/commit/c41790360ecb70331ba71c881db1c592b0923502\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-chk-fr/commit/6db3456d236063ccbb97abdea52dc574da37a898\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-chk-sv/commit/de1bf0196adc388148bb52c5388fd795c46191b6\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-crs-de/commit/f0552c0fcef8dc8b03acc5ecf9c170a3a9356ca1\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-crs-en/commit/7ee4bb979dd28886b7d98f890298c4548e84a847\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-crs-es/commit/808d78b9c72092991bba047542192f26c3bff3b8\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-crs-fi/commit/e61325e6904fe87fbad3e6d978dca63fb4e766ba\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-crs-fr/commit/341ed6222bcb84709acf9b8a3d5d57991b350c5e\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-crs-sv/commit/a338a7e5ef9b876f1edc63b0af6c6cd11e6a7611\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-cs-de/commit/f5a1b1443dc5381df3a0a83d790b3c2eb16cf811\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-cs-en/commit/186ab5dff3e18ca970a492525c0ca4b398d525ab\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-cs-fi/commit/d60a357cfb2c4d1df38b43f2fafe34dbff0199cf\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-cs-fr/commit/3040852ec5404c1da928602fa1ec636b6ddf9a2e\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-cs-sv/commit/ab967fe66d1c0d4f9403ae0b4c97c06ae8947b89\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-csg-es/commit/9742b7a5ed07cb69c4051567686b2e1ace50b061\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-csn-es/commit/c3086bbf7d9101947a5a07d286cb9ccc533f9e0a\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-cy-en/commit/775c85089bc7a55c8203bff544e9fa34cd4ba7ca\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-da-de/commit/2e4d10f7054f579178b167e5082b0e57726eee44\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-da-en/commit/8971eb3839ec41bddd060128b9b83038bb43fd96\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-da-es/commit/59b50e55d16babe69b0facb1fb1c4dfb175328fe\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-da-fi/commit/a2e614cb32e2b0fa09c5c1dcaba8122d9d647b18\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-da-fr/commit/186e4c938bc1744a9ddbd67073fe572c93a494c8\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ZH/commit/93d4bc065a572a35ab1f1110ffeccc9740444a42\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ase/commit/09e461fdf799287e13c7c48df0573fd89273b1bd\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-bcl/commit/628737ef8907e7d2db7989660f413420cfad41f5\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-bi/commit/7c40aed9a4611cec93aa9560f2bb99e49e895789\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-bzs/commit/30ed515b4d391e1f98cefdbf5f6fcc340c979fce\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-crs/commit/b9de144126655b973cd8cf74a5651ac999e551a2\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-cs/commit/683666e07ca027d76af9ac23c0902b29084a0d18\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-da/commit/bccfbee95d55ba1333fd447f67574453eba5d948\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-de/commit/7be6c82bcda2cf76f48ba1f730baeeebcbcb172d\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ee/commit/42218c447d3da4a8836adb6de710d06bbad480c9\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-efi/commit/1309ccb2f74acba991a654adf4ff1363a577d51b\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-el/commit/ad3da773c26cf72780d46b4a75333226a19760e4\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-en/commit/6137149949ac01d19d8eeef6e35d3
2221dabc8e4\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-eo/commit/9188e5326cba934d553fcb0150a9e88de140a286\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-es/commit/d6bff091731341b977e4ca7294d2c309a2ca11e4\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-et/commit/55157cd448f864a87992b80aef23f95546a0280c\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-fi/commit/bbd50eeefdc1e26d75f6a806495192b55878c04a\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-fj/commit/596580a8225fb340357d25cd38639fed5d662681\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-fr/commit/6aa8c4011488513f5575b235ce75d6d795d90b35\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-gaa/commit/0722f96d5ce2e9fd6b2e0df3987105a78d062d1c\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-gil/commit/56bb25bf50c7b8268c9fd1ec8f8124e54631af59\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-guw/commit/7a441fe0e9e7c4c430889b46b3b4541005c93bb1\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ha/commit/5a241c2d7ce3f36d42b7bbd7f563bd0da651d480\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-he/commit/44d42278e67bf34bd1c0a8dcca06c6525eca6263\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-hil/commit/4f0571df9d70e36af0435f1368a03cd059750c40\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ho/commit/6f07189ef39e3e609a24c45936c40e30fd6b3ef8\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-hr/commit/d1b7e5205290af5c36e8be8cd6d73f6b5d9bba5f\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ht/commit/2d296463f4735961ca4512271b415aacf7c0ba91\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-hu/commit/4b30440320ea86d33b6927fe70c46e20f671da86\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ig/commit/862152c08618d17ff651fc7df9145d81519ba9f7\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ilo/commit/e9260adbaa77c85f5a0203460399c1cec12357c1\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-iso/commit/d3d1caff0521142085ee7faa07112ce593803734\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-it/commit/cd2319a082a7be0dd471fe62701ae557a71833c2\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-kg/commit/495d68528e086b0ccea38761513241152e4f217f\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ln/commit/05dd393385fb99c42d5849c22cef67931922eff3\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-loz/commit/efc9fe11206c281704056c9c3eda0b42f1cf43a0\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-lt/commit/e0105109d696baf37e2a4cca511a46f59fa97707\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-lua/commit/319b94b75439b497c0860a3fc80a34ecacb597a0\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-mt/commit/0d71c2c09e3838d7276288da102f7e66d2d24032\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-niu/commit/6b15b26f7d7752bfde0368809479c544880174cd\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-nl/commit/da037ec1ad70f9d79735c287d418c00158b55b68\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-nso/commit/fbd9a40fa66f610b52855ad16263d4ea32c8bd7c\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-ny/commit/595549133dfde470a3ea04e93674ff1c90c5ac5a\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-pag/commit/f03679f6d038388c5a0a40918acc4bf6406cac28\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-pap/commit/6c57622b7e815f9e1cb24f6e1f9a09b58627f0b7\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-pis/commit/ddfb8177ff0559adc697171c2c4c7704921bd4ec\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-pl/commit/67458bb97566391315397d8e0aa5f14f774bd238\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-de-pon/commit/d18f29c5ef79abbca40d53e34b94c8514ff
d6235\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ee-de/commit/5e01b793901fec6acbcaf6b35e9e0873d7190147\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ee-en/commit/a69e3d990dc8b84d8d727b9502c20511a50233ed\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ee-es/commit/976bee3eb2616b35a55d6e6467ca2d211ba68d49\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ee-fi/commit/8547cfc9f2c5ef75f00c78ef563eef59fc0204ee\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ee-fr/commit/066e2a847a6098c2a999d6db7a1f50b878578c8e\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-ee-sv/commit/8170bc4af3be1e3633e37ef4180cada5eb177b2c\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-efi-de/commit/cedf2694630c1ee2ea1d75dffead02c4dc49ef80\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-efi-en/commit/0bf437954f943da3d49a172b6f91aa7157c3525a\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-efi-fi/commit/02877c2ef68a205047cde71b4b376ffcc565e4a7\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-efi-fr/commit/7b528531e45c04716015e7c211ef2b74817ff438\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-efi-sv/commit/c02cd07b017c7c71d4583dbd6050dfee383a1cf0\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-el-fi/commit/aef52d8c3cc2129847cf9ea84c62a5e7b9bb41bc\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-el-fr/commit/b00ba91c42b2f20768228b179f01274048158001\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-el-sv/commit/e8894cf2f5713e1cc68fe7710636ecc4b4dc99d7\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-CELTIC/commit/69fe75e42d848a1b30f968800ff94783e3ed8fe2\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE/commit/92870a2f094c444064c7a568c25eef6971e07b03\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-af/commit/c6a79302395db2b59af8b15f4016081a66095ace\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-bcl/commit/fdda7e146d903da0f4da8895800c52bdcfa07ecc\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-bem/commit/7d0c704d934f400158d645345a7ed27c6cfe73e8\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-ber/commit/cad15de24b5374102d6dd95619d0c4011102dcce\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-bi/commit/b3e9ed52697fffab06a733a23c37d843a3464976\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-bzs/commit/2b7c7d345202d17dd7f42850eae846e4d11b6fda\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-ca/commit/81d80b5921b66885e45c3b27615752da4b511b40\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-ceb/commit/a5e0a21b4e9db37945be9cd5977573b53cd95999\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-chk/commit/a57e025c3f8a7a9b20968190b6a6db234ef1541a\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-crs/commit/1f25af1f9d1c0680005a9f0d16ed8bb412784c32\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-cs/commit/7cba4a7e3daff13c48fc2fcd740ef0711b1dd075\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-cy/commit/038aee0304224b119582e0258c0dff2bc1c1c411\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-da/commit/9786126ba34f1f86636af779ef13557bd9d1b246\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-de/commit/6c00b328d3da7183582a4928b638b24a4a14a79f\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-ee/commit/45d6ef20f2aac6de3ad001d7452ff5243f25f219\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-efi/commit/08b5f78e0bb66e8e1940fe1eb976a5b9de276f84\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-el/commit/cd8ab0896f1d0598007ba5266a0a30884fed71de\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-eo/commit/20a8920034dfbb6b2e5909f5065a32d6b1b5990b\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-et/commit/f696ce2db3f802cf4dd723ea97b2af1ed
a90c7e9\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-fi/commit/627fe90df5c335be61521cd89c68f62e2bdce050\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-fj/commit/2c98ee541817946993595aa514f12804b6c95efc\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-fr/commit/a8fbc1c711cb6263e8a20c5229b210cc05c57ff0\r\nhttps://huggingface.co/Helsinki-NLP/opus-mt-en-gaa/commit/2f75e3d8bc190f8e0e412beecf00e564c40e33c4\r\n```", "Looking great! Thank you for the quick resolution of this!", "I have a doubt, on the [opus-MT github page](https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master) it says that all pretrained models are under the cc by 4.0 license, but on hugging face many opus-MT models have apache 2.0 license, for example, [this model](https://huggingface.co/Helsinki-NLP/opus-mt-it-fr) on hugging face has the apache 2.0 license, while downloading it from the opus-MT github page ([here](https://github.com/Helsinki-NLP/OPUS-MT-train/tree/master/models/it-fr)) it has the cc by 4.0 license among the files, is this an error or did I miss something?", "Hi @niedev, thanks for raising this! Could you open a discussion on the respective checkpoint pages on the hub? " ]
1,630
1,690
1,632
NONE
null
Some of the models on the HF hub under the Helsinki-NLP namespace are listed under the Apache 2.0 license, but most are listed without a license.

Example of a model without a license: https://huggingface.co/Helsinki-NLP/opus-mt-en-de

Only 371 models are tagged with a license here: https://huggingface.co/models?license=license:apache-2.0&sort=downloads&search=helsinki-nlp

Is this omission intentional, or are all models in the repo actually intended to be Apache licensed? If so, would it be possible to update them with license info?
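As a hedged illustration of what the requested update could look like, here is a sketch that adds license metadata to one model card's YAML front matter. It assumes write access to the repo; the repo name is just one example from the collection, and this clone-edit-push flow is only one possible way to do it:

```python
from huggingface_hub import Repository

repo = Repository(local_dir="opus-mt-en-de", clone_from="Helsinki-NLP/opus-mt-en-de")
card_path = "opus-mt-en-de/README.md"

with open(card_path) as f:
    card = f.read()

if not card.startswith("---"):
    # The hub reads the license tag from this YAML front matter block.
    card = "---\nlicense: apache-2.0\n---\n\n" + card
    with open(card_path, "w") as f:
        f.write(card)
    repo.push_to_hub(commit_message="Add license metadata")
```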
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13328/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13327/comments
https://api.github.com/repos/huggingface/transformers/issues/13327/events
https://github.com/huggingface/transformers/issues/13327
982,497,194
MDU6SXNzdWU5ODI0OTcxOTQ=
13,327
Wrong weight initialization for TF T5 model
{ "login": "danshirron", "id": 32061512, "node_id": "MDQ6VXNlcjMyMDYxNTEy", "avatar_url": "https://avatars.githubusercontent.com/u/32061512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danshirron", "html_url": "https://github.com/danshirron", "followers_url": "https://api.github.com/users/danshirron/followers", "following_url": "https://api.github.com/users/danshirron/following{/other_user}", "gists_url": "https://api.github.com/users/danshirron/gists{/gist_id}", "starred_url": "https://api.github.com/users/danshirron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danshirron/subscriptions", "organizations_url": "https://api.github.com/users/danshirron/orgs", "repos_url": "https://api.github.com/users/danshirron/repos", "events_url": "https://api.github.com/users/danshirron/events{/privacy}", "received_events_url": "https://api.github.com/users/danshirron/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I agree! Would you like to open a PR to fix it? :-)", "Will try to do it on coming days", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
CONTRIBUTOR
null
## Environment info
- `transformers` version: 4.9.2
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (gpu)
- Jax version: 0.2.18
- JaxLib version: 0.1.69
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: yes

### Who can help
@patil-suraj @patrickvonplaten

## Information
Model I am using: pre-training T5-base.

The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below): added the T5 data collator and the Keras Adafactor optimizer to run_mlm.py

The tasks I am working on is:
* [X] my own task or dataset: pre-training T5-base with the OSCAR dataset (as in the Flax example)

## Expected behavior
Before updating the init weights to a normal distribution (as in transformers/src/transformers/models/t5/modeling_flax_t5.py) the loss was stuck at 4.5 (unlike the Flax behaviour). After updating the init weights I get the same behaviour as in Flax and reach a loss below 2.

Example: in the Flax code, class FlaxT5DenseReluDense, lines 95-96:

    wi_init_std = self.config.initializer_factor * (self.config.d_model ** -0.5)
    wo_init_std = self.config.initializer_factor * (self.config.d_ff ** -0.5)

In the TF code, the default initializer is used. My suggested fix:

    wi_initializer = tf.keras.initializers.RandomNormal(mean=0, stddev=config.initializer_factor * (config.d_model ** -0.5))
    wo_initializer = tf.keras.initializers.RandomNormal(mean=0, stddev=config.initializer_factor * (config.d_ff ** -0.5))
    self.wi = tf.keras.layers.Dense(config.d_ff, use_bias=False, name="wi", kernel_initializer=wi_initializer)
    self.wo = tf.keras.layers.Dense(config.d_model, use_bias=False, name="wo", kernel_initializer=wo_initializer)

This is relevant for all weight and embedding initializations.
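As a quick sanity check on the suggested fix, a self-contained sketch (the sizes and `initializer_factor` below are illustrative stand-ins, not read from a real config) showing that the `RandomNormal` initializer produces the expected standard deviation:

```python
import tensorflow as tf

d_model, d_ff, initializer_factor = 768, 3072, 1.0  # illustrative T5-base-like sizes

wi_initializer = tf.keras.initializers.RandomNormal(
    mean=0.0, stddev=initializer_factor * (d_model ** -0.5)
)
wi = tf.keras.layers.Dense(d_ff, use_bias=False, name="wi", kernel_initializer=wi_initializer)

# Sample the initializer directly and check the scale of the draws.
sample = wi_initializer(shape=(d_model, d_ff))
print(float(tf.math.reduce_std(sample)))  # ~0.036, i.e. 768 ** -0.5
```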
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13327/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13326/comments
https://api.github.com/repos/huggingface/transformers/issues/13326/events
https://github.com/huggingface/transformers/issues/13326
982,476,052
MDU6SXNzdWU5ODI0NzYwNTI=
13,326
Wav2Vec2ForCTC is not BaseModelOutput
{ "login": "yc-li20", "id": 15671418, "node_id": "MDQ6VXNlcjE1NjcxNDE4", "avatar_url": "https://avatars.githubusercontent.com/u/15671418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yc-li20", "html_url": "https://github.com/yc-li20", "followers_url": "https://api.github.com/users/yc-li20/followers", "following_url": "https://api.github.com/users/yc-li20/following{/other_user}", "gists_url": "https://api.github.com/users/yc-li20/gists{/gist_id}", "starred_url": "https://api.github.com/users/yc-li20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yc-li20/subscriptions", "organizations_url": "https://api.github.com/users/yc-li20/orgs", "repos_url": "https://api.github.com/users/yc-li20/repos", "events_url": "https://api.github.com/users/yc-li20/events{/privacy}", "received_events_url": "https://api.github.com/users/yc-li20/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @patrickvonplaten , I read the source code and found the wav2vecctc only conducts word-level tokenization. Does it support ctc fine-tuning on grapheme level or character level? Thanks.", "Oh yeah that's a typo in the docs indeed, but it's already fixed on master I think :-) \r\nSee: https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2forctc", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
On the Hugging Face website, https://huggingface.co/transformers/model_doc/wav2vec2.html#wav2vec2forctc, it says the output of Wav2Vec2ForCTC is a "BaseModelOutput". But it is actually a "CausalLMOutput": it has no attribute 'last_hidden_state' or the other fields of "BaseModelOutput". Its returns should belong to "CausalLMOutput": https://huggingface.co/transformers/main_classes/output.html#causallmoutput

**The description of the returns of Wav2Vec2ForCTC on the website:** <img width="1031" alt="130951325-9fd86ab4-4b2a-4965-b4bf-88b2cc556b46" src="https://user-images.githubusercontent.com/15671418/131300701-11106f9c-ab2a-42b8-8c35-9a7418e37474.png">

**The error when calling "the last hidden state" of Wav2Vec2ForCTC:** <img width="663" alt="130951343-eb4655a3-af57-4a2f-a387-0fa628f854dc" src="https://user-images.githubusercontent.com/15671418/131300840-107609a5-6d30-4d89-a0e1-ae003feb3934.png">

**The description of CausalLMOutput, which Wav2Vec2ForCTC should be:** <img width="1077" alt="130951334-093e1df0-6207-45f0-b803-76b276f17f7b" src="https://user-images.githubusercontent.com/15671418/131300953-a8eaf063-db8d-46f3-836e-4baf534e1554.png">

@patrickvonplaten
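In the meantime, a minimal workaround sketch, assuming what is wanted is the encoder's final hidden states: ask the CTC model for `hidden_states` explicitly instead of reading the non-existent `last_hidden_state` (the random tensor below is just a placeholder for real audio):

```python
import torch
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
input_values = torch.randn(1, 16000)  # placeholder for one second of 16 kHz audio

with torch.no_grad():
    outputs = model(input_values, output_hidden_states=True)

# CausalLMOutput exposes hidden_states (one tensor per layer), not last_hidden_state.
last_hidden = outputs.hidden_states[-1]
print(last_hidden.shape)
```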
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13326/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13325/comments
https://api.github.com/repos/huggingface/transformers/issues/13325/events
https://github.com/huggingface/transformers/issues/13325
982,204,061
MDU6SXNzdWU5ODIyMDQwNjE=
13,325
Handling tag with no prefix for aggregation_strategy in TokenClassificationPipeline
{ "login": "jbpolle", "id": 51430205, "node_id": "MDQ6VXNlcjUxNDMwMjA1", "avatar_url": "https://avatars.githubusercontent.com/u/51430205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbpolle", "html_url": "https://github.com/jbpolle", "followers_url": "https://api.github.com/users/jbpolle/followers", "following_url": "https://api.github.com/users/jbpolle/following{/other_user}", "gists_url": "https://api.github.com/users/jbpolle/gists{/gist_id}", "starred_url": "https://api.github.com/users/jbpolle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbpolle/subscriptions", "organizations_url": "https://api.github.com/users/jbpolle/orgs", "repos_url": "https://api.github.com/users/jbpolle/repos", "events_url": "https://api.github.com/users/jbpolle/events{/privacy}", "received_events_url": "https://api.github.com/users/jbpolle/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Narsil ", "Hi @jbpolle what do you mean `correctly` ? We should not have changed behavior there, but indeed it's not part of the testing right now, so there might be some issues.\r\n\r\nCould you provide a small script on an older transformers version that displays the intended behavior ?", "Hello Nicolas,\n\nHere is what it looks like now in the \"hosted inference API » panel:\n\n\nThis is from my model here: \nhttps://huggingface.co/Jean-Baptiste/camembert-ner?text=Je+m%27appelle+jean-baptiste+et+je+vis+%C3%A0+montr%C3%A9al\n\nIn previous version, It would display « jean-baptiste PER » and « Montreal LOC ».\n\nHowever I renamed my entities in the config.json file to I-PER, I-ORG,…which I believe should fix this issue. \nBefore that the entities were just PER, LOC,…\n\nI hope this help,\nThank you,\nJean-Baptiste\n\n> Le 30 août 2021 à 09:15, Nicolas Patry ***@***.***> a écrit :\n> \n> \n> Hi @jbpolle <https://github.com/jbpolle> what do you mean correctly ? We should not have changed behavior there, but indeed it's not part of the testing right now, so there might be some issues.\n> \n> Could you provide a small script on an older transformers version that displays the intended behavior ?\n> \n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/13325#issuecomment-908333781>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AMIMGPPBHLPBSACPHW5PZWDT7OAAXANCNFSM5DAT4PAA>.\n> Triage notifications on the go with GitHub Mobile for iOS <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675> or Android <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>. \n> \n\n", "Adding missing screenshot in previous message:\r\n\r\n<img width=\"541\" alt=\"PastedGraphic-1\" src=\"https://user-images.githubusercontent.com/51430205/131381199-64fa35a0-05dd-4233-a3de-e53307bd6f71.png\">\r\n", "I went back to `4.3.3` and I can see that the splitting was exactly the same. (no grouping when tags didn't include B-, I- ).\r\n\r\nThe fact that the cache wasn't probably cleaned on the widget is still an issue, clearing it.", "I was working on 4.3.2 and here is how this was working:\r\n![image](https://user-images.githubusercontent.com/51430205/132059845-039b0ade-54f2-45b1-80b1-993641b520e6.png)\r\n\r\nBut now in 4.9:\r\n![image](https://user-images.githubusercontent.com/51430205/132060946-dce28417-0080-43cb-93c3-95d0cc76e2cc.png)\r\n\r\n\r\nAnd even when playing with new aggregation_strategy parameters, I can't get previous results.\r\nAnyway it's fixed in my case by adding the prefix so don't hesitate to close the ticket.\r\n\r\nThank you, ", "Ok, I must have tested it wrong before. I can confirm. This is indeed because the default for tags wasn't really explicited, but did behave as `I- `\r\n\r\nCode was:\r\n\r\n```python\r\nentity[\"entity\"].split(\"-\")[0] != \"B\"\r\n```\r\nWhich would resolve to `\"PER\" != \"B\"` whereas now the default tag was explicitely set as B-:\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/token_classification.py#L413\r\n\r\nThe fix would be easy but I am unsure about reverting this now that was merged 6th June. \r\nTagging a core maintainer for advice for how to handle this. 
@LysandreJik \r\n \r\nWe would need to run some numbers on the hub too, to get an idea of the amount of affected repos.", "I would fix it to behave the same as it was in v4.3.2, as this is the expected behavior when using `grouped_entities`", "PR opened. https://github.com/huggingface/transformers/pull/13493" ]
1,630
1,631
1,631
NONE
null
# 🚀 Feature request

Previously the `grouped_entities` parameter would handle entities with no prefix (like "PER" instead of "B-PER") and would correctly group similar adjacent entities. With the new `aggregation_strategy` parameter, this is no longer the case.

## Motivation

In some simple models, the prefix adds complexity that is not always required. Because of this we are forced to add a prefix to make aggregation work even if the model does not require it.

## Your contribution
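Until the pipeline handles unprefixed tags again, a minimal workaround sketch: rewrite `id2label`/`label2id` to add an `I-` prefix before building the pipeline. The model id comes from the thread above, and the prefixing rule is an assumption about how its labels are named:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner")
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner")

# Turn plain tags such as "PER" into "I-PER" so aggregation_strategy can group them.
model.config.id2label = {
    i: (label if label == "O" or "-" in label else f"I-{label}")
    for i, label in model.config.id2label.items()
}
model.config.label2id = {label: i for i, label in model.config.id2label.items()}

ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Je m'appelle jean-baptiste et je vis à montréal"))
```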
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13325/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13324/comments
https://api.github.com/repos/huggingface/transformers/issues/13324/events
https://github.com/huggingface/transformers/pull/13324
982,070,128
MDExOlB1bGxSZXF1ZXN0NzIxOTMyMjA0
13,324
distilbert-flax
{ "login": "kamalkraj", "id": 17096858, "node_id": "MDQ6VXNlcjE3MDk2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamalkraj", "html_url": "https://github.com/kamalkraj", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "repos_url": "https://api.github.com/users/kamalkraj/repos", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great Great job @kamalkraj! \r\n\r\nThink the only major thing to update is to docs in the modeling file (at the moment it looks like it's the PyTorch docs, but should be Flax :-)) \r\n\r\n", "@patrickvonplaten \r\nThanks for the review.\r\nDone changes according to your review. ", "Hi @kamalkraj , I'm also really interested in that PR - thanks for adding it :hugs: \r\n\r\nDo you also plan to add a script for the distillation process (like it is done in the [\"old\" script](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation)), as I would like to re-distillate some of my previous DistilBERT models (I don't have access to multi GPU setups, only to TPUs at the moment).", "Hi @stefan-it,\r\n\r\nI will go through the scripts and pings you. \r\nI have multi-GPU access. Which TPU do you use? v3-8 ?", "```\r\nJAX_PLATFORM_NAME=cpu RUN_SLOW=1 pytest tests/test_modeling_flax_distilbert.py::FlaxDistilBertModelIntegrationTest::test_inference_no_head_absolute_embedding\r\n```\r\n\r\npasses and the code looks good :-) Ready to merge IMO :tada: ! \r\n\r\n@patil-suraj the slow test doesn't pass on TPU since distilbert has pretty extreme activations in the forward pass like a couple of other models. We need to think a bit how to adapt the slow test depending on whether they're run on TPU or not in general...", "Great work @kamalkraj !" ]
1,630
1,631
1,630
CONTRIBUTOR
null
# What does this PR do?
DistilBert Flax

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@VictorSanh @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13324/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13324", "html_url": "https://github.com/huggingface/transformers/pull/13324", "diff_url": "https://github.com/huggingface/transformers/pull/13324.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13324.patch", "merged_at": 1630325778000 }
https://api.github.com/repos/huggingface/transformers/issues/13323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13323/comments
https://api.github.com/repos/huggingface/transformers/issues/13323/events
https://github.com/huggingface/transformers/issues/13323
981,993,631
MDU6SXNzdWU5ODE5OTM2MzE=
13,323
Documentation mismatch in Preprocessing data
{ "login": "Apoorvgarg-creator", "id": 57873504, "node_id": "MDQ6VXNlcjU3ODczNTA0", "avatar_url": "https://avatars.githubusercontent.com/u/57873504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Apoorvgarg-creator", "html_url": "https://github.com/Apoorvgarg-creator", "followers_url": "https://api.github.com/users/Apoorvgarg-creator/followers", "following_url": "https://api.github.com/users/Apoorvgarg-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Apoorvgarg-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Apoorvgarg-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Apoorvgarg-creator/subscriptions", "organizations_url": "https://api.github.com/users/Apoorvgarg-creator/orgs", "repos_url": "https://api.github.com/users/Apoorvgarg-creator/repos", "events_url": "https://api.github.com/users/Apoorvgarg-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Apoorvgarg-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed. Would you mind opening a PR with the change?" ]
1,630
1,630
1,630
CONTRIBUTOR
null
## Environment info
- `transformers` version: 4.10.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.2
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.19
- JaxLib version: 0.1.70
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@sgugger @SaulLu

## Information
There seems to be a conflict between [Utilities for tokenizers](https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=truncation#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__) and [Preprocessing data](https://huggingface.co/transformers/preprocessing.html?highlight=truncation#everything-you-always-wanted-to-know-about-padding-and-truncation). In **Preprocessing data**, for **`truncation_strategy = True`**, it states: "truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided." Whereas for the same setting, **Utilities for tokenizers** states: "Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided."

## Expected behavior
In the Preprocessing data documentation, `truncation_strategy=True` must match `longest_first` instead of `only_first`.
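A small sketch of the behaviour the corrected wording describes, assuming a BERT tokenizer and passing `truncation=True` at call time (the toy sequences are chosen so the two strategies visibly diverge):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

seq_a = "a " * 30  # ~30 tokens
seq_b = "b " * 10  # ~10 tokens

longest_first = tokenizer(seq_a, seq_b, truncation=True, max_length=20)
only_first = tokenizer(seq_a, seq_b, truncation="only_first", max_length=20)

# truncation=True trims the longer segment token by token (longest_first),
# while "only_first" removes tokens from seq_a alone, so segment B keeps more tokens.
print(sum(longest_first["token_type_ids"]), sum(only_first["token_type_ids"]))
```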
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13323/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13323/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13322/comments
https://api.github.com/repos/huggingface/transformers/issues/13322/events
https://github.com/huggingface/transformers/issues/13322
981,976,914
MDU6SXNzdWU5ODE5NzY5MTQ=
13,322
DistilGPT2 code from pytorch-transformers does not work in transformers, I made a basic example
{ "login": "Oxi84", "id": 25420033, "node_id": "MDQ6VXNlcjI1NDIwMDMz", "avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Oxi84", "html_url": "https://github.com/Oxi84", "followers_url": "https://api.github.com/users/Oxi84/followers", "following_url": "https://api.github.com/users/Oxi84/following{/other_user}", "gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}", "starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions", "organizations_url": "https://api.github.com/users/Oxi84/orgs", "repos_url": "https://api.github.com/users/Oxi84/repos", "events_url": "https://api.github.com/users/Oxi84/events{/privacy}", "received_events_url": "https://api.github.com/users/Oxi84/received_events", "type": "User", "site_admin": false }
[ { "id": 1897896961, "node_id": "MDU6TGFiZWwxODk3ODk2OTYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Migration", "name": "Migration", "color": "e99695", "default": false, "description": "" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
How would I convert this to the new version of transformers? Or is it possible to somehow use DistilGPT2 with pytorch-transformers?

    use_transformers = True
    if use_transformers:
        import torch
        from transformers import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel

        tokenizer1 = GPT2Tokenizer.from_pretrained('distilgpt2', cache_dir="/var/software/Models/")
        model1 = GPT2LMHeadModel.from_pretrained('distilgpt2', cache_dir="/var/software/Models/")
        model1.eval()
        model1.to('cuda')

        text = "Who was Jim Henson ?"
        indexed_tokens = tokenizer1.encode(text)
        tokens_tensor = torch.tensor([indexed_tokens])
        tokens_tensor = tokens_tensor.to('cuda')

        with torch.no_grad():
            predictions_1 = model1(tokens_tensor)
        print(predictions_1)
    else:
        import torch
        from pytorch_transformers import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel

        tokenizer1 = GPT2Tokenizer.from_pretrained('gpt2', cache_dir="/var/software/Models/")  # cache_dir=None
        model1 = GPT2LMHeadModel.from_pretrained('gpt2', cache_dir="/var/software/Models/")
        model1.eval()
        model1.to('cuda')

        text = "Who was Jim Henson ?"
        indexed_tokens = tokenizer1.encode(text)
        tokens_tensor = torch.tensor([indexed_tokens])
        tokens_tensor = tokens_tensor.to('cuda')

        with torch.no_grad():
            predictions_1 = model1(tokens_tensor)
        print(predictions_1)

When I try it I get an error, and I tried to follow the migration guide but do not get what the new tokenizer does differently.
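A hedged migration sketch: in `transformers` v4 the forward pass returns a model-output object instead of a plain tuple, so the logits live under `.logits` (passing `return_dict=False` to the call would restore the old tuple behaviour):

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")
model.eval()

text = "Who was Jim Henson ?"
tokens_tensor = torch.tensor([tokenizer.encode(text)])

with torch.no_grad():
    outputs = model(tokens_tensor)

# pytorch-transformers code read predictions = outputs[0]; the v4 equivalent:
logits = outputs.logits
print(logits.shape)  # (1, sequence_length, vocab_size)
```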
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13322/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13321/comments
https://api.github.com/repos/huggingface/transformers/issues/13321/events
https://github.com/huggingface/transformers/pull/13321
981,897,917
MDExOlB1bGxSZXF1ZXN0NzIxODI2MTkw
13,321
Add missing module __spec__
{ "login": "laurahanu", "id": 32672979, "node_id": "MDQ6VXNlcjMyNjcyOTc5", "avatar_url": "https://avatars.githubusercontent.com/u/32672979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laurahanu", "html_url": "https://github.com/laurahanu", "followers_url": "https://api.github.com/users/laurahanu/followers", "following_url": "https://api.github.com/users/laurahanu/following{/other_user}", "gists_url": "https://api.github.com/users/laurahanu/gists{/gist_id}", "starred_url": "https://api.github.com/users/laurahanu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laurahanu/subscriptions", "organizations_url": "https://api.github.com/users/laurahanu/orgs", "repos_url": "https://api.github.com/users/laurahanu/repos", "events_url": "https://api.github.com/users/laurahanu/events{/privacy}", "received_events_url": "https://api.github.com/users/laurahanu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your PR!\r\n\r\nAs you can see, this makes all the tests fail because you changed the init of `_LazyModule` without adapting all the places it's used (all the intermediates init of each model). I'm not sure whether those intermediate inits need to pass along the spec attribute or not, if they do you should add it in each one of them (don't forget the model template as well), and if they don't, you should make that argument optional.", "@sgugger Thanks for looking at it! Changed the `module_spec` arg to be optional as I don't see why the other intermediate inits would need it.", "Great! One last thing: could you run `make style` on your branch to solve the code quality issue?", "Last thing caught by the CI new that the style is correct: your new test file will never be run by the CI. Since it's linked to `_LazyModule` defined in file_utils, could you move it to `test_file_utils`? Thanks a lot.", "Is this one ready to be merged and published? " ]
1,630
1,631
1,630
CONTRIBUTOR
null
# What does this PR do?
This PR adds a missing `__spec__` object when importing the library; it would otherwise be `None`.

Fixes #12904

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
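A stripped-down sketch of the idea (this is a stand-in, not the actual `file_utils` code): capture the real module's spec at import time and attach it to the lazy module so that `importlib`-based tooling no longer sees `None`:

```python
import importlib.util
from types import ModuleType


class _LazyModule(ModuleType):
    """Simplified stand-in for transformers.file_utils._LazyModule."""

    def __init__(self, name, module_file, import_structure, module_spec=None):
        super().__init__(name)
        self.__file__ = module_file
        self.__spec__ = module_spec  # previously never set, so it stayed None
        self._import_structure = import_structure


spec = importlib.util.find_spec("json")  # any importable module works for the demo
lazy = _LazyModule("json", spec.origin, {}, module_spec=spec)
print(lazy.__spec__ is not None)  # True
```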
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13321/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13321/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13321", "html_url": "https://github.com/huggingface/transformers/pull/13321", "diff_url": "https://github.com/huggingface/transformers/pull/13321.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13321.patch", "merged_at": 1630341546000 }
https://api.github.com/repos/huggingface/transformers/issues/13320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13320/comments
https://api.github.com/repos/huggingface/transformers/issues/13320/events
https://github.com/huggingface/transformers/pull/13320
981,798,625
MDExOlB1bGxSZXF1ZXN0NzIxNzYxNTE5
13,320
examples: only use keep_linebreaks when reading TXT files
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
COLLABORATOR
null
Hi, this is a follow-up (bug-fix) PR for #13150. It turns out, as reported in #13312, that the `keep_linebreaks` argument only works when the Datasets extension is `text`. I used this logic to pass the `keep_linebreaks` argument only when the extension is `text`, simplified as:

```python
dataset_args = {}
if extension == "text":
    dataset_args["keep_linebreaks"] = True
dataset = load_dataset(extension, data_files=data_files, **dataset_args)
print(dataset["train"][0])
```

When `keep_linebreaks` is set to `True` while reading a text file, the output looks like:

```bash
{'text': 'Heute ist ein schöner Tach\n'}
```

With `keep_linebreaks` set to `False` the output looks like:

```bash
{'text': 'Heute ist ein schöner Tach'}
```

So the proposed approach works via the `dataset_args` argument. I also checked that all examples still work when passing a CSV dataset.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13320/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13320", "html_url": "https://github.com/huggingface/transformers/pull/13320", "diff_url": "https://github.com/huggingface/transformers/pull/13320.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13320.patch", "merged_at": 1630160549000 }
https://api.github.com/repos/huggingface/transformers/issues/13319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13319/comments
https://api.github.com/repos/huggingface/transformers/issues/13319/events
https://github.com/huggingface/transformers/pull/13319
981,798,296
MDExOlB1bGxSZXF1ZXN0NzIxNzYxMjg5
13,319
neptune.ai logger: add ability to connect to a neptune.ai run
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot for your PR!" ]
1,630
1,630
1,630
CONTRIBUTOR
null
A single line is changed: when the `NEPTUNE_RUN_ID` environment variable is set, Neptune will log into the previous run with id `NEPTUNE_RUN_ID` instead of creating a new one.

trainer: @sgugger
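A minimal usage sketch; the project path and run id below are placeholders, and `NEPTUNE_API_TOKEN` is assumed to be set already:

```python
import os

os.environ["NEPTUNE_PROJECT"] = "my-workspace/my-project"  # placeholder
os.environ["NEPTUNE_RUN_ID"] = "SAN-123"                   # placeholder: an existing run

# With this change, NeptuneCallback resumes logging into run SAN-123
# instead of creating a fresh run when the Trainer starts.
```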
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13319/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13319", "html_url": "https://github.com/huggingface/transformers/pull/13319", "diff_url": "https://github.com/huggingface/transformers/pull/13319.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13319.patch", "merged_at": 1630331957000 }
https://api.github.com/repos/huggingface/transformers/issues/13318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13318/comments
https://api.github.com/repos/huggingface/transformers/issues/13318/events
https://github.com/huggingface/transformers/issues/13318
981,790,795
MDU6SXNzdWU5ODE3OTA3OTU=
13,318
Errors when fine-tuning RAG on cloud env
{ "login": "agi-templar", "id": 21965264, "node_id": "MDQ6VXNlcjIxOTY1MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/21965264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agi-templar", "html_url": "https://github.com/agi-templar", "followers_url": "https://api.github.com/users/agi-templar/followers", "following_url": "https://api.github.com/users/agi-templar/following{/other_user}", "gists_url": "https://api.github.com/users/agi-templar/gists{/gist_id}", "starred_url": "https://api.github.com/users/agi-templar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agi-templar/subscriptions", "organizations_url": "https://api.github.com/users/agi-templar/orgs", "repos_url": "https://api.github.com/users/agi-templar/repos", "events_url": "https://api.github.com/users/agi-templar/events{/privacy}", "received_events_url": "https://api.github.com/users/agi-templar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @DapangLiu,\r\n\r\nWe sadly don't actively maintain the `research_projects` folder except for Wav2Vec2. Could you try to use the forum: https://discuss.huggingface.co/ instead? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
Hi the team, I'm trying to fine-tune RAG with [the scripts you provided](https://github.com/huggingface/transformers/tree/9ec0f01b6c3aff4636869aee735859fb6f89aa98/examples/research_projects/rag). My environment is a cloud server (4 V100 GPUs with 48 GB of GPU memory), and I always get these errors when fine-tuning:

> RuntimeError: Error in faiss::Index* faiss::read_index(faiss::IOReader*, int) at /__w/faiss-wheels/faiss-wheels/faiss/faiss/impl/index_read.cpp:480: Error: 'ret == (size)' failed: read error in <cache path>: 6907889358 != 16160765700 (Success)

The errors seem to come from faiss (and I don't know how to interpret them; do the sizes not match?). I used this command for fine-tuning:

```bash
python run_rag_ft.py \
  --data_dir /msmarco \
  --output_dir ./msmarco_rag \
  --model_name_or_path facebook/rag-sequence-nq \
  --model_type rag_sequence \
  --fp16 \
  --gpus 4 \
  --distributed_retriever pytorch \
  --num_retrieval_workers 4 \
  --profile \
  --do_train \
  --do_predict \
  --n_val -1 \
  --train_batch_size 8 \
  --eval_batch_size 1 \
  --max_source_length 128 \
  --max_target_length 40 \
  --val_max_target_length 40 \
  --test_max_target_length 40 \
  --label_smoothing 0.1 \
  --dropout 0.1 \
  --attention_dropout 0.1 \
  --weight_decay 0.001 \
  --adam_epsilon 1e-08 \
  --max_grad_norm 0.1 \
  --lr_scheduler polynomial \
  --learning_rate 3e-05 \
  --num_train_epochs 2 \
  --warmup_steps 500 \
  --gradient_accumulation_steps 1
```

Nothing special, I just use my own data (MSMARCO). I stuck with pytorch for the distributed retriever and have not yet tested the ray version. Is that the problem? I cannot run this on my local machines because of OOM errors (two 24 GB GPUs). I think @patrickvonplaten could help me with this. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13318/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13317/comments
https://api.github.com/repos/huggingface/transformers/issues/13317/events
https://github.com/huggingface/transformers/issues/13317
981,782,329
MDU6SXNzdWU5ODE3ODIzMjk=
13,317
How to use the pretraining task of ProphetNet
{ "login": "StevenTang1998", "id": 37647985, "node_id": "MDQ6VXNlcjM3NjQ3OTg1", "avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StevenTang1998", "html_url": "https://github.com/StevenTang1998", "followers_url": "https://api.github.com/users/StevenTang1998/followers", "following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}", "gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions", "organizations_url": "https://api.github.com/users/StevenTang1998/orgs", "repos_url": "https://api.github.com/users/StevenTang1998/repos", "events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}", "received_events_url": "https://api.github.com/users/StevenTang1998/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @qiweizhen", "@StevenTang1998 - could you maybe try to use the forum: https://discuss.huggingface.co/ for such questions. I haven't played around with the model enough to give a qualified answer here sadly :-/ ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
CONTRIBUTOR
null
I want to use the pretraining task of ProphetNet, which recovers the masked span of the input sentence. I follow the illustration in Figure 1 of the paper. For example, the input is `But I [MASK][MASK] my life for some lovin' and some gold` and I only recover the first `[MASK]`. (The sentence is from the pretraining corpus BookCorpus.) I use the following code:

```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration

# 'microsoft/prophetnet-large-uncased' is the published hub checkpoint
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased')
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased')

# the sentence is from the pretraining corpus BookCorpus
input_ids = tokenizer("But I traded all my life for some lovin' and some gold", return_tensors="pt")['input_ids']
mask_id = input_ids[0][2]
input_ids[0][2:4] = tokenizer.pad_token_id
decoder_input_ids = tokenizer('[MASK][MASK] I', return_tensors="pt")['input_ids']
# the way of MASS: decoder_input_ids = tokenizer('[MASK][MASK][MASK]', return_tensors="pt")['input_ids']
output = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
probs = output.logits[0][2]
# the rank of the target word in the vocabulary
print((probs[mask_id] < probs).sum())
```

However, the rank of `traded` is 15182 among 30522 words. I also tried different masked words and masked spans, but the results are all unexpected. So I want to ask whether my way of recovering the mask has some errors. @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13317/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13316/comments
https://api.github.com/repos/huggingface/transformers/issues/13316/events
https://github.com/huggingface/transformers/pull/13316
981,738,054
MDExOlB1bGxSZXF1ZXN0NzIxNzIwMzQw
13,316
Squeeze and Excitation Network
{ "login": "AdityaDas-IITM", "id": 64326826, "node_id": "MDQ6VXNlcjY0MzI2ODI2", "avatar_url": "https://avatars.githubusercontent.com/u/64326826?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdityaDas-IITM", "html_url": "https://github.com/AdityaDas-IITM", "followers_url": "https://api.github.com/users/AdityaDas-IITM/followers", "following_url": "https://api.github.com/users/AdityaDas-IITM/following{/other_user}", "gists_url": "https://api.github.com/users/AdityaDas-IITM/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdityaDas-IITM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdityaDas-IITM/subscriptions", "organizations_url": "https://api.github.com/users/AdityaDas-IITM/orgs", "repos_url": "https://api.github.com/users/AdityaDas-IITM/repos", "events_url": "https://api.github.com/users/AdityaDas-IITM/events{/privacy}", "received_events_url": "https://api.github.com/users/AdityaDas-IITM/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThanks for your PR! However, I don't think that we want to add this block to files of other models. It's more appropriate to add a new SesameBERT model (if pretrained weights are available), or add it under the `research_projects` directory.\r\n\r\ncc @LysandreJik ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Hi @AdityaDas-IITM, are you interested in working on this PR (making it a research project instead)?", "Hey @NielsRogge, Yes I'll get started on it soon", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,636
1,636
NONE
null
# What does this PR do?
This PR implements an optional Squeeze-and-Excitation block in BERT and the copied modules (RoBERTa, ELECTRA, Splinter and LayoutLM) in PyTorch.

Fixes #11998

Additional tests have been added to the corresponding test scripts and the docs updated.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed.

@LysandreJik
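For context, a minimal PyTorch sketch of a squeeze-and-excitation block applied to a transformer hidden-state tensor (the reduction ratio and shapes are illustrative, not necessarily what the PR uses):

```python
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    def __init__(self, hidden_size: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(hidden_size, hidden_size // reduction),
            nn.ReLU(),
            nn.Linear(hidden_size // reduction, hidden_size),
            nn.Sigmoid(),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # "Squeeze": pool over the sequence dimension; "excite": rescale each channel.
        scale = self.fc(hidden_states.mean(dim=1))  # (batch, hidden)
        return hidden_states * scale.unsqueeze(1)   # broadcast over tokens


x = torch.randn(2, 128, 768)  # (batch, seq_len, hidden)
print(SqueezeExcitation(768)(x).shape)  # torch.Size([2, 128, 768])
```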
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13316/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13316", "html_url": "https://github.com/huggingface/transformers/pull/13316", "diff_url": "https://github.com/huggingface/transformers/pull/13316.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13316.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13315/comments
https://api.github.com/repos/huggingface/transformers/issues/13315/events
https://github.com/huggingface/transformers/issues/13315
981,684,437
MDU6SXNzdWU5ODE2ODQ0Mzc=
13,315
Current trainer.py doesn't support beam search
{ "login": "aiswaryasankar", "id": 7874177, "node_id": "MDQ6VXNlcjc4NzQxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7874177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aiswaryasankar", "html_url": "https://github.com/aiswaryasankar", "followers_url": "https://api.github.com/users/aiswaryasankar/followers", "following_url": "https://api.github.com/users/aiswaryasankar/following{/other_user}", "gists_url": "https://api.github.com/users/aiswaryasankar/gists{/gist_id}", "starred_url": "https://api.github.com/users/aiswaryasankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aiswaryasankar/subscriptions", "organizations_url": "https://api.github.com/users/aiswaryasankar/orgs", "repos_url": "https://api.github.com/users/aiswaryasankar/repos", "events_url": "https://api.github.com/users/aiswaryasankar/events{/privacy}", "received_events_url": "https://api.github.com/users/aiswaryasankar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This post on the forum will answer your question: https://discuss.huggingface.co/t/trainer-vs-seq2seqtrainer/3145/2?u=nielsr", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
# 🚀 Feature request Currently I can't find any support for beam search in trainer.py - to begin with, it doesn't even import the BeamScorer or BeamHypotheses classes, and the evaluation_loop and prediction_loop don't make any use of beam search logic internally. It's misleading because the predict and evaluate functions in trainer_seq2seq.py include setting self._num_beams to a passed-in hyperparameter; however, that isn't used by the parent predict or evaluate functions. The run_summarization.py script also includes a beam search hyperparameter which isn't made use of. What would be the simplest way to have an evaluation and prediction step call and evaluate beam search? ## Motivation Beam search is critical for the evaluation of seq2seq methods. HuggingFace must have a trainer that integrates with beam search; I'm just not sure where it is exposed / how that integration works. For reference, https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L2342 shows that prediction_step doesn't call or perform beam search, and this is what is called to get the loss, logits, and labels for each step in evaluation. Thus it isn't actually performing a search over all the possible beams and is instead evaluating each next step in the dataloader. ## Your contribution I would further look into how the beam search util file is used in other models. The code exists at https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L1612 - I just wonder why trainer.py isn't calling it in the evaluate or predict functions - is there a reason for that? @patil-suraj
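For context, a minimal sketch of how beam search is usually reached from the trainer side (not code from this issue; `model`, `tokenizer`, and `eval_dataset` are assumed placeholders): `Seq2SeqTrainer` forwards `num_beams` into `model.generate()` when `predict_with_generate` is enabled, which is where the beam search code in `generation_utils.py` actually runs.

```python
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

# Minimal sketch, assuming `model`, `tokenizer`, and `eval_dataset` already exist.
args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,  # make evaluate()/predict() call model.generate()
)
trainer = Seq2SeqTrainer(model=model, args=args, eval_dataset=eval_dataset, tokenizer=tokenizer)

# num_beams is the hyperparameter stored in self._num_beams mentioned above;
# it is forwarded to generate() during the generation-based evaluation loop.
metrics = trainer.evaluate(max_length=128, num_beams=4)
```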
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13315/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13314/comments
https://api.github.com/repos/huggingface/transformers/issues/13314/events
https://github.com/huggingface/transformers/pull/13314
981,666,718
MDExOlB1bGxSZXF1ZXN0NzIxNjY5Mzc3
13,314
neptune.ai logger: utilize `rewrite_logs` in `NeptuneCallback` as in `WandbCallback`
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "it turns out neptune.ai ui doesnt support charts for nested logged variables" ]
1,630
1,630
1,630
CONTRIBUTOR
null
A single line is changed: it applies a missing conversion in the neptune logger, as already implemented in the wandb logger. trainer: @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13314/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13314", "html_url": "https://github.com/huggingface/transformers/pull/13314", "diff_url": "https://github.com/huggingface/transformers/pull/13314.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13314.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13313/comments
https://api.github.com/repos/huggingface/transformers/issues/13313/events
https://github.com/huggingface/transformers/pull/13313
981,636,322
MDExOlB1bGxSZXF1ZXN0NzIxNjQ1NDU2
13,313
[Testing] Add Flax Tests on GPU, Add Speech and Vision to Flax & TF tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Looks great! Mostly left nitpicks. I'm fine with seeing which tests failed on a first run and then adapting/removing failed tests.\r\n> \r\n> Do you have a number in mind regarding total runtime?\r\n\r\nAll non-slow tests together took 1h20, we don't have that many slow tests in Flax at the moment - so I'd assume that the total runtime would be something like 1h40", "Running all the jitted tests takes a lot of time (but they're quite important IMO) ", "Ok sounds good!" ]
1,630
1,630
1,630
MEMBER
null
# What does this PR do? This PR does two things: - 1. Adds Flax to the daily slow tests on GPU and adds Flax tests on GPU. There is a TF Docker image that works well for JAX on GPU - see: https://github.com/google/jax/discussions/6338 . I think it's easiest to just use this image for now until there is an official JAX docker image for GPU. - 2. We now have slow tests in both TF and Flax that require `soundfile` (TFHubert, TFWav2Vec2, FlaxWav2Vec2, ...). Also there is FlaxViT in Flax which requires the `vision` package IMO. A new `tf-flax-speech` extension is added to make sure one doesn't install torch alongside torchaudio for TF and Flax's speech models, and it is added to all the tests. Also it is very likely that some slow tests in Flax will fail at the moment since they have been written to pass on TPU. If that's OK, @patil-suraj and I can fix them one-by-one after getting a report from the daily slow tests - we'll probably have to add some `if-else` statements depending on the backend there... Second, at the moment we don't have any multi-GPU or multi-TPU tests for Flax, but I nevertheless enable the tests on multi-GPU for Flax here already. I'll add a multi-GPU/multi-TPU test for all Flax models next week (cc @patil-suraj) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13313", "html_url": "https://github.com/huggingface/transformers/pull/13313", "diff_url": "https://github.com/huggingface/transformers/pull/13313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13313.patch", "merged_at": 1630400902000 }
https://api.github.com/repos/huggingface/transformers/issues/13312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13312/comments
https://api.github.com/repos/huggingface/transformers/issues/13312/events
https://github.com/huggingface/transformers/issues/13312
981,579,789
MDU6SXNzdWU5ODE1Nzk3ODk=
13,312
Having problem Pre-training GPT models
{ "login": "mosh98", "id": 48658042, "node_id": "MDQ6VXNlcjQ4NjU4MDQy", "avatar_url": "https://avatars.githubusercontent.com/u/48658042?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mosh98", "html_url": "https://github.com/mosh98", "followers_url": "https://api.github.com/users/mosh98/followers", "following_url": "https://api.github.com/users/mosh98/following{/other_user}", "gists_url": "https://api.github.com/users/mosh98/gists{/gist_id}", "starred_url": "https://api.github.com/users/mosh98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mosh98/subscriptions", "organizations_url": "https://api.github.com/users/mosh98/orgs", "repos_url": "https://api.github.com/users/mosh98/repos", "events_url": "https://api.github.com/users/mosh98/events{/privacy}", "received_events_url": "https://api.github.com/users/mosh98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Probably caused by this PR: #13150 \r\n\r\ncc @stefan-it ", "I'm looking into it right now :)", "Oh no, this is only happening when using CSV files as input :thinking: \r\n\r\n@mosh98 As a very quick workaround, could you try to \"convert\" your csv file into a normal text file (file extension .txt) and then re-run the training :thinking: ", "The `keep_linebreaks` argument is only implemented for text files in :hugs: Datasets:\r\n\r\nhttps://github.com/huggingface/datasets/blob/67574a8d74796bc065a8b9b49ec02f7b1200c172/src/datasets/packaged_modules/text/text.py#L19\r\n\r\nFor CSV it is not available:\r\n\r\nhttps://github.com/huggingface/datasets/blob/67574a8d74796bc065a8b9b49ec02f7b1200c172/src/datasets/packaged_modules/csv/csv.py", "I'm working on a fix now (so that `keep_linebreaks` is only used when file extension is `.txt`)", "Sure i can try that, i do have aquestion tho,\r\n\r\n when i convert my csv into a text file how will i organize it so that it uses each line as a sample and also what command do i have to put when i run the script? \r\n\r\nAt the moment i have each row is the csv file as an individual sample", "You can use the same structure (one individual sample per line) for the text file. \r\n\r\nCommand would be pretty much the same, but you need to use the file ending `.txt`, so that the training script will infer the correct extension for the `load_dataset` argument :)", "Thank you @stefan-it the script works now, running out of cuda memeory tho but i think it's irrelevant to the actual script and more to do with my device. \r\n\r\nThanks Again!\r\n" ]
1,630
1,630
1,630
NONE
null
## Environment info - `transformers` version: 4.9.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help ## Information Model I am using (Bert, XLNet ...): EleutherAI/gpt-neo-2.7B The problem arises when using: * [X ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X ] my own task or dataset: (give details below) ## To reproduce I used a csv file where each line contains one sample. Steps to reproduce the behavior: My input: `!python /content/transformers/examples/pytorch/language-modeling/run_clm.py --model_name_or_path EleutherAI/gpt-neo-2.7B --train_file /content/df.csv --output_dir /tmp/test-clm` I also tried using the no-trainer version, but it still doesn't work. What am I doing wrong? What I got back: ``` Traceback (most recent call last): File "/content/transformers/examples/pytorch/language-modeling/run_clm.py", line 520, in <module> main() File "/content/transformers/examples/pytorch/language-modeling/run_clm.py", line 291, in main cache_dir=model_args.cache_dir, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 830, in load_dataset **config_kwargs, File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 710, in load_dataset_builder **config_kwargs, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 271, in __init__ **config_kwargs, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 370, in _create_builder_config builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) TypeError: __init__() got an unexpected keyword argument 'keep_linebreaks' ``` ## Expected behavior I just want to further train the GPT model. Notebook: https://colab.research.google.com/drive/1bk8teH0Egu-gAmBC_zlvUifMHS7y_SyM?usp=sharing Any help is much appreciated
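A minimal sketch of the workaround suggested in the comments above: converting the CSV (one sample per row) into a plain `.txt` file with one sample per line, so that `run_clm.py` infers the text loader. The column name `text` and the paths are assumptions, not taken from this issue.

```python
import csv

# Assumed input/output paths and column name; adjust to your own file.
with open("/content/df.csv", newline="", encoding="utf-8") as src, \
        open("/content/df.txt", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        # One training sample per line; flatten any embedded newlines first.
        dst.write(row["text"].replace("\n", " ").strip() + "\n")
```

The training command then stays the same, except it points at `--train_file /content/df.txt`.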
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13312/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13311/comments
https://api.github.com/repos/huggingface/transformers/issues/13311/events
https://github.com/huggingface/transformers/issues/13311
981,544,431
MDU6SXNzdWU5ODE1NDQ0MzE=
13,311
[Feature request] Introduce GenericTransformer to ease deployment of custom models to the Hub
{ "login": "jordiae", "id": 2944532, "node_id": "MDQ6VXNlcjI5NDQ1MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/2944532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jordiae", "html_url": "https://github.com/jordiae", "followers_url": "https://api.github.com/users/jordiae/followers", "following_url": "https://api.github.com/users/jordiae/following{/other_user}", "gists_url": "https://api.github.com/users/jordiae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jordiae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jordiae/subscriptions", "organizations_url": "https://api.github.com/users/jordiae/orgs", "repos_url": "https://api.github.com/users/jordiae/repos", "events_url": "https://api.github.com/users/jordiae/events{/privacy}", "received_events_url": "https://api.github.com/users/jordiae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger regarding 2. :)", "The plan is to add in the coming weeks support for custom models directly in the AutoModel classes, with the user providing the code of their models in a modeling file in the same repository on the model hub (same for custom tokenizers).\r\n\r\nETA for this feature should be end of next week.", "> The plan is to add in the coming weeks support for custom models directly in the AutoModel classes, with the user providing the code of their models in a modeling file in the same repository on the model hub (same for custom tokenizers).\r\n> \r\n> ETA for this feature should be end of next week.\r\n\r\nPerfect, thanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "PR https://github.com/huggingface/transformers/pull/13467 introduced a first version of what you were asking for @jordiae! Let us know if it works for you :) ", "> PR #13467 introduced a first version of what you were asking for @jordiae! Let us know if it works for you :)\r\n\r\nCool! Thanks!" ]
1,630
1,634
1,633
NONE
null
# 🚀 Feature request Introduce a GenericTransformer model that can handle many different variants and tweaks of the Transformer architecture. There are 2 ways of doing this and I'm not 100% sure of which one would better suit HF: 1. Introduce a GenericTransformerModel with many different options (extensive config file), such as different positional embeddings or attention variants. The modeling code would constantly be updated by HF or contributions from the community and would be included in each release of the library itself. Backward compatibility would not necessarily be an issue if all new additions were disabled by default in the config class. Also, the model could be designed in a modular way to ease the addition of new variants (see torchtext's MHA container https://github.com/pytorch/text/blob/main/torchtext/nn/modules/multiheadattention.py). 2. Allow users to submit code following HF's interfaces alongside checkpoints. GenericTransformerModel would dynamically download and load code from the hub. I think the first one would be more convenient to avoid third-party dependencies and potentially unsafe code. The second one would be way more flexible, though. ## Motivation An important point of the HF Transformers library philosophy is outlined in the README of the repo: > Why shouldn't I use transformers? > This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files. To clarify, this feature request does NOT intend to modify this philosophy. which clearly has many advantages. Instead, it has the purpose of potentially alleviating one of the drawbacks of this philosophy: the difficulties in sharing custom models, even if these models just introduce small tweaks (see https://github.com/stanford-crfm/mistral/issues/85, https://github.com/huggingface/transformers/pull/12243). This would hopefully encourage researching different variants and combinations. In case one variant stabilized as a well-defined architecture that was worth using, then it might be considered to add it to the library the "classical" way, having a specific class, documentation, etc. ## Your contribution I can't allocate time to this at the moment. Sorry about that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13311/reactions", "total_count": 5, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/13311/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13310/comments
https://api.github.com/repos/huggingface/transformers/issues/13310/events
https://github.com/huggingface/transformers/pull/13310
981,462,512
MDExOlB1bGxSZXF1ZXN0NzIxNTA3NTQ2
13,310
:bug: fix small model card bugs
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "good catch and thanks for working on this", "Heres a repo I created with it: [nateraw/vit-base-beans-demo-v3](https://huggingface.co/nateraw/vit-base-beans-demo-v3)" ]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? - `model_index` ➡️ `model-index` - `metric` ➡️ `metrics` - Metrics Dict ➡️ List of Metrics Dicts These changes fix the problem of user-provided evaluation metrics not showing up on model pages pushed to the hub from the trainer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13310/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13310/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13310", "html_url": "https://github.com/huggingface/transformers/pull/13310", "diff_url": "https://github.com/huggingface/transformers/pull/13310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13310.patch", "merged_at": 1630334757000 }
https://api.github.com/repos/huggingface/transformers/issues/13309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13309/comments
https://api.github.com/repos/huggingface/transformers/issues/13309/events
https://github.com/huggingface/transformers/pull/13309
981,422,884
MDExOlB1bGxSZXF1ZXN0NzIxNDc2MTAy
13,309
Fixing a typo in the data_collator documentation
{ "login": "Serhiy-Shekhovtsov", "id": 607527, "node_id": "MDQ6VXNlcjYwNzUyNw==", "avatar_url": "https://avatars.githubusercontent.com/u/607527?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Serhiy-Shekhovtsov", "html_url": "https://github.com/Serhiy-Shekhovtsov", "followers_url": "https://api.github.com/users/Serhiy-Shekhovtsov/followers", "following_url": "https://api.github.com/users/Serhiy-Shekhovtsov/following{/other_user}", "gists_url": "https://api.github.com/users/Serhiy-Shekhovtsov/gists{/gist_id}", "starred_url": "https://api.github.com/users/Serhiy-Shekhovtsov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Serhiy-Shekhovtsov/subscriptions", "organizations_url": "https://api.github.com/users/Serhiy-Shekhovtsov/orgs", "repos_url": "https://api.github.com/users/Serhiy-Shekhovtsov/repos", "events_url": "https://api.github.com/users/Serhiy-Shekhovtsov/events{/privacy}", "received_events_url": "https://api.github.com/users/Serhiy-Shekhovtsov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
CONTRIBUTOR
null
# Fixed a typo in the documentation
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13309/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13309", "html_url": "https://github.com/huggingface/transformers/pull/13309", "diff_url": "https://github.com/huggingface/transformers/pull/13309.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13309.patch", "merged_at": 1630404072000 }
https://api.github.com/repos/huggingface/transformers/issues/13308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13308/comments
https://api.github.com/repos/huggingface/transformers/issues/13308/events
https://github.com/huggingface/transformers/pull/13308
981,402,836
MDExOlB1bGxSZXF1ZXN0NzIxNDU5ODU5
13,308
[Large PR] Entire rework of pipelines.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What about integrating this [idea](https://github.com/huggingface/transformers/issues/13274) into this rework?", "@xegulon It would be a great addition, we already have similar functionnality within HF.\r\n\r\nThe code is not open source just because it's messy and wouldn't fit `transformers` requirements (backward compatiblity and maintaining this is out of scope in our opinion) but we do reuse most tools that we provide (like export_onnx), so it's mostly plumbing.\r\n\r\nIf we can find something clean enough, it's probable it would be a welcome addition.\r\n\r\nFew caveats to mention: \r\n- Using `ONNX` in fully optimized mode makes it hardware dependent (you HAVE to run on similar hardware as where the optimized file was created). \r\n- Using quantization might lead to performance drop (but also huge speedup).\r\n- Using ONNX with fancy methods like `generate` is much harder to do to keep performance (you have to take care of `past_key_values`).\r\n- Using ONNX with `generate` and running on GPU is actually counterproductive because we can't run the beam search directly on GPU tensors (that's an ORT limitation). So there's a lot of back&forth between GPU and CPU which is bad for performance. (We also tried the `beam_search` proposed by ORT but didn't find it was worth it as implementation differs significantly from transformers.)\r\n\r\nWith those caveats in mind, feel free to add a PR, it would be a welcome addition if we manage to make it readable and orthogonal (the new refactor should help for sure).\r\nTry to make the PR small and more like PoC so everyone could weigh in in terms of design (most notably transformers core maintainers)", "Hey, it's really great to see work on general code organisation to any degree. Thanks for your work.\r\n\r\nIt looks like this PR introduced a bug around completing empty prompts:\r\n```\r\ntransformers.pipeline('text-generation')('')\r\n```\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py\", line 150, in __call__\r\n return super().__call__(text_inputs, **kwargs)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 915, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 922, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/base.py\", line 871, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/pipelines/text_generation.py\", line 162, in _forward\r\n generated_sequence = self.model.generate(input_ids=input_ids, **generate_kwargs) # BS x SL\r\n File \"/home/user/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/generation_utils.py\", line 1016, in generate\r\n return self.sample(\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/generation_utils.py\", line 1529, in sample\r\n outputs = self(\r\n File \"/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File 
\"/home/user/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 949, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py\", line 673, in forward\r\n input_ids = input_ids.view(-1, input_shape[-1])\r\nRuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0] because the unspecified dimension size -1 can be any value and is ambiguous\r\n```", "🤑💪🏻", "how to use my own dataset, that is txt file ,per line is the input for NER model\r\ncould you pls help me ?\r\n", "> how to use my own dataset, that is txt file ,per line is the input for NER model\r\n> could you pls help me ?\r\n\r\nthe example scripts want itin jsonlines or csv. https://huggingface.co/docs/transformers/run_scripts#use-a-custom-dataset . you can use a tool to convert to jsonlines. it takes some patience to figure out a way to do each step, and then it works." ]
1,630
1,674
1,631
CONTRIBUTOR
null
# What does this PR do? tl;dr: Make pipeline code much more consistent and enable large speedups with GPU inference. # GPU pipeline Currently, the way pipelines are set up, it's hard to keep the GPU 100% busy, because we're not enabling the use of DataLoader (on PyTorch), which is necessary to keep the CPU working on tokenizing the next items while an item is being processed on the GPU. We cannot realistically use the current API to maximize utilization: ```python for item in dataset: # item == "This is some test" for instance output = pipe(item) # output == {"label": "POSITIVE", "score": 0.99} ``` So we need to change the API to something closer to what `DataLoader` does, which is to consume an iterable; this enables worker CPU threads to process the next items while the GPU is busy on the current one, meaning we're now using 100% of the GPU. ```python for output in pipe(dataset): # output == {"label": "POSITIVE", "score": 0.99} pass ``` In order to make that change possible, we **need** to better separate what happens on the CPU vs the GPU. The proposed way is to split the __call__ of a pipeline into 3 distinct function calls - `preprocess`: in charge of taking the original pipeline input and outputting a dict of everything necessary to do `model(**model_inputs)` for instance (or a `generate` call): the stuff that will really involve the GPU. - `forward`: In most cases it's a simple function call to the model's forward method, but it can be more complex depending on the pipeline. It needs to be separate from the other 2 because this is where the GPU might be used, so we can encapsulate more logic around this in the base class (`no_grad`, sending and retrieving tensors to/from GPU, etc.). - `postprocess`: Usually amounts to processing the logits into something more user-friendly for the task at hand; again usually pretty fast and should happen on the CPU (but should be so fast that it does not really matter to have a separate thread for this). In order to increase consistency across pipelines, ALL pipelines will have to implement the 3 methods, and should have a `__call__` method (with exceptions discussed in consistency). They should be readable on their own too, meaning the outputs of `preprocess` should be **exactly** what is sent to `forward`, and what is returned by `forward` should be exactly the inputs of `postprocess`. So: ```python model_inputs = pipe.preprocess(item) model_outputs = pipe.forward(model_inputs) outputs = pipe.postprocess(model_outputs) ``` will always be perfectly valid, even if not the most efficient. # Consistency of pipelines Right now, pipelines are quite inconsistent in their returned outputs. 
- Some have parameters to change the output format (this is fine) - Most pipelines accept lists of items and will return a list of outputs, but: - Some will return a single item only if the input was a list of a single item (regardless of what the input originally was) - Some will do it better and return a single item only if a single item was sent - Some will use lists as batching, some will not, leading to slowdowns at best, OOM errors on large lists, and overall pretty poor efficiency on GPU (more info: https://github.com/huggingface/transformers/issues/13141, https://github.com/huggingface/transformers/pull/11251, https://github.com/huggingface/transformers/pull/11251) Batching on GPU seems like what speeds things up, but really it doesn't at inference time; batching in ML is used because of gradients, where it's necessary for the gradient descent to be smooth. The speed on GPU is really linked to overall GPU usage, and using `DataLoader` is the key part here. Nonetheless, sometimes, depending on actual hardware, pipeline, and input data, batching *can* be used efficiently, so the new design should enable that. However, it shouldn't be done the way it's currently set up, where some pipelines do it, some don't, and there is no consistency overall; it should be done on a different layer than the data-processing part of the pipeline. Because of the inconsistencies mentioned above, this refactor will include some `__call__` methods that change the return type based on what was previously there (`preprocess`, `forward` and `postprocess` are mostly pure, while `__call__` will handle backwards compatibility). # Parameter handling Another cause of concern for pipelines was parameter handling. Most parameters were sent to the `__call__` method, but some were sent to `__init__`. Some in both. That meant that you would have to look through the docs to guess if you needed to do ```python pipe = pipeline(....., num_beams=2) outputs = pipe(item) # or pipe = pipeline(....) outputs = pipe(item, num_beams=2) ``` The goal in this PR was to make that explicit, so BOTH will be supported and have the exact same behavior. In order to do that, we introduced a new mandatory method `set_parameters`, which is called both in `__call__` and `__init__` in the same way so that it always works. 1. Because this new `set_parameters` is a standard method, we can use it to properly discard unexpected keywords with a real error instead of just ignoring them. 2. Because `__init__` and `__call__` are now base class only (roughly), we can capture parameters much better, meaning we don't have an extra layer of parameter guessing (is it a tokenization param, model param, or pipeline param?). Each method will capture everything it needs and pass on the rest; the ultimate method in the chain is `set_parameters`, which might take specific parameters or accept everything (like **generate_kwargs, so ultimately `generate` will have the final word). 3. Because `set_parameters` will be called at least 2 times and we don't know which call will have the actual real values, it needs to be done in a somewhat odd way. What most pipelines will do is simply have a default argument of `None`, so if the argument is `None` we know that the caller didn't supply it and we don't override it (the default one is defined in the `__init__` if dynamic, or directly in the class if static).
This however does not work when `None` is a valid choice for some parameter. This is true **only** for the `zero-shot-classification` test, where we specifically test that we raise an error when passing `None` as a value (so it can probably be changed, but that would be backward incompatible regarding tests). For those, more complex logic is required. 4. Because we're now using `self` as the holder for parameters, using threading mechanisms to run the pipelines might lead to some oddities (but people most likely aren't using 1 pipeline on different threads, and most likely shouldn't be, at least). Other options are possible but would mean passing the parameters through all 3 functions `preprocess`, `forward` and `postprocess`, reducing readability IMHO for debatable gains. # Results Currently we're sitting here performance-wise; bench code: ```python from transformers import pipeline from transformers.pipelines.base import KeyDataset import datasets import tqdm pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0) dataset = datasets.load_dataset("superb", name="asr", split="test") print("New style of pipeline") for out in tqdm.tqdm(pipe(KeyDataset(dataset, "file"))): pass print("Old style of pipeline") for item in tqdm.tqdm(dataset): out = pipe(item["file"]) ``` Speed (done on an old, suffering GTX 970): ![F02AXFBCPJN](https://user-images.githubusercontent.com/204321/131305601-e1b75b93-97e6-47c8-a9ce-55bcbbaece58.png) ## Backward compatibility We're currently sitting at 100% backward compatibility regarding tests. We're not, however, 100% backward compatible. By fixing the inconsistencies of pipelines, we will break any code that was using parameters wrong (as they will suddenly start working, or crashing because they're invalid). ## Tensorflow I mentioned `DataLoader`, which will be used to great effect on PyTorch + `list` inputs or `Dataset` input (on single inference on GPU + pt, you will get a warning prompting you to use more efficient methods). On TensorFlow, however, more work is needed to make it faster too. At the very least we shouldn't degrade performance too much; this has to be checked (both GPU and CPU). Ideally we would have a mechanism similar to `DataLoader` to maximise efficiency on GPU TensorFlow. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? 
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13308/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13308", "html_url": "https://github.com/huggingface/transformers/pull/13308", "diff_url": "https://github.com/huggingface/transformers/pull/13308.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13308.patch", "merged_at": 1631278069000 }
https://api.github.com/repos/huggingface/transformers/issues/13307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13307/comments
https://api.github.com/repos/huggingface/transformers/issues/13307/events
https://github.com/huggingface/transformers/pull/13307
981,360,502
MDExOlB1bGxSZXF1ZXN0NzIxNDI2MDEx
13,307
[Flax] Correct all return tensors to numpy
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
MEMBER
null
# What does this PR do? This PR adapts all examples to return `numpy` instead of `jax` arrays, so as not to block JAX's asynchronous dispatch: https://jax.readthedocs.io/en/latest/async_dispatch.html
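A rough illustration of the idea (not code from this PR; the checkpoint choice is arbitrary): keeping host-side inputs as plain numpy lets JAX dispatch the device computation asynchronously instead of forcing an early synchronization point.

```python
from transformers import BertTokenizerFast, FlaxBertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = FlaxBertModel.from_pretrained("bert-base-cased")

# return_tensors="np" keeps the inputs as host-side numpy arrays; the model
# call below can then be dispatched asynchronously by JAX.
inputs = tokenizer("Asynchronous dispatch keeps the accelerator busy.", return_tensors="np")
outputs = model(**inputs)
```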
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13307", "html_url": "https://github.com/huggingface/transformers/pull/13307", "diff_url": "https://github.com/huggingface/transformers/pull/13307.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13307.patch", "merged_at": 1630078714000 }
https://api.github.com/repos/huggingface/transformers/issues/13306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13306/comments
https://api.github.com/repos/huggingface/transformers/issues/13306/events
https://github.com/huggingface/transformers/issues/13306
981,354,883
MDU6SXNzdWU5ODEzNTQ4ODM=
13,306
Missing on_predict event in TrainerCallback
{ "login": "rpowalski", "id": 10357417, "node_id": "MDQ6VXNlcjEwMzU3NDE3", "avatar_url": "https://avatars.githubusercontent.com/u/10357417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rpowalski", "html_url": "https://github.com/rpowalski", "followers_url": "https://api.github.com/users/rpowalski/followers", "following_url": "https://api.github.com/users/rpowalski/following{/other_user}", "gists_url": "https://api.github.com/users/rpowalski/gists{/gist_id}", "starred_url": "https://api.github.com/users/rpowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rpowalski/subscriptions", "organizations_url": "https://api.github.com/users/rpowalski/orgs", "repos_url": "https://api.github.com/users/rpowalski/repos", "events_url": "https://api.github.com/users/rpowalski/events{/privacy}", "received_events_url": "https://api.github.com/users/rpowalski/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "The reason why there is no `on_predict` event is that the `predict` method is never called during the training loop. You can add the code you want to be run after `Trainer.predict` just after calling that method.", "Ok, I see. For my cases having this custom code in the callback feels cleaner and more consistent with the post-processing done during `evaluate()` method, but will do it as you suggest." ]
1,630
1,630
1,630
NONE
null
# 🚀 Feature request Can we add `on_predict` event support in the `Trainer` and `TrainerCallback`? ## Motivation I have already needed it in multiple projects. I think it makes sense, since `Trainer` already supports the `on_evaluate` event inside the `evaluate()` method. The corresponding event handler is missing in the `predict()` method, which is part of the `Trainer` class. Some of the training support libraries support such events, so I guess there are no strong reasons against this ([link](https://pytorch-lightning.readthedocs.io/en/latest/extensions/generated/pytorch_lightning.callbacks.Callback.html#pytorch_lightning.callbacks.Callback.on_test_end)). ## Your contribution I am willing to make a PR that will implement it.
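Until such an event exists, a minimal sketch of the workaround suggested in the comments above (running the custom code right after `Trainer.predict`); `trainer`, `test_dataset`, and `postprocess` are placeholders, not part of any Trainer API.

```python
# Minimal sketch, assuming `trainer` and `test_dataset` already exist.
prediction_output = trainer.predict(test_dataset)

# Stand-in for what an on_predict callback would receive; `postprocess`
# is a hypothetical user-defined function, not part of the Trainer API.
postprocess(prediction_output.predictions, prediction_output.metrics)
```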
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13306/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13305/comments
https://api.github.com/repos/huggingface/transformers/issues/13305/events
https://github.com/huggingface/transformers/pull/13305
981,353,486
MDExOlB1bGxSZXF1ZXN0NzIxNDIwNDM3
13,305
Layoutlm onnx support
{ "login": "nishprabhu", "id": 33579638, "node_id": "MDQ6VXNlcjMzNTc5NjM4", "avatar_url": "https://avatars.githubusercontent.com/u/33579638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nishprabhu", "html_url": "https://github.com/nishprabhu", "followers_url": "https://api.github.com/users/nishprabhu/followers", "following_url": "https://api.github.com/users/nishprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/nishprabhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/nishprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nishprabhu/subscriptions", "organizations_url": "https://api.github.com/users/nishprabhu/orgs", "repos_url": "https://api.github.com/users/nishprabhu/repos", "events_url": "https://api.github.com/users/nishprabhu/events{/privacy}", "received_events_url": "https://api.github.com/users/nishprabhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@nishprabhu would you mind closing this PR and opening a new one, without any additional changes?\r\n\r\nGithub has a weird issue making the diff almost impossible to review.\r\n\r\nSorry for the inconvenience, please let us know if you need any help.", "Sure, @mfuntowicz \r\nI'll open a new PR with the changes." ]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? This PR extends ONNX support to LayoutLM as explained in https://huggingface.co/transformers/serialization.html?highlight=onnx#converting-an-onnx-model-using-the-transformers-onnx-package Fixes Issue #13300 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfuntowicz @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13305/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13305", "html_url": "https://github.com/huggingface/transformers/pull/13305", "diff_url": "https://github.com/huggingface/transformers/pull/13305.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13305.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13304/comments
https://api.github.com/repos/huggingface/transformers/issues/13304/events
https://github.com/huggingface/transformers/pull/13304
981,324,112
MDExOlB1bGxSZXF1ZXN0NzIxMzk2MDU2
13,304
Slow tests - run rag token in half precision
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
MEMBER
null
Currently, `tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_token_generate_batch` errors out with OOM in the slow tests -> let's run it in half precision on GPU. The output has been verified to stay the same.
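In code, the change described here boils down to something like the following sketch; the retriever settings are illustrative, not copied from the diff:

```python
from transformers import RagRetriever, RagTokenForGeneration

retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
)
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)
# Half precision roughly halves the weight/activation memory, avoiding the OOM.
model = model.half().to("cuda")
```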
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13304/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13304", "html_url": "https://github.com/huggingface/transformers/pull/13304", "diff_url": "https://github.com/huggingface/transformers/pull/13304.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13304.patch", "merged_at": 1630317728000 }
https://api.github.com/repos/huggingface/transformers/issues/13303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13303/comments
https://api.github.com/repos/huggingface/transformers/issues/13303/events
https://github.com/huggingface/transformers/pull/13303
981,308,431
MDExOlB1bGxSZXF1ZXN0NzIxMzgzMDY1
13,303
[Slow tests] Disable Wav2Vec2 pretraining test for now
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
MEMBER
null
Wav2Vec2 pretraining seems not to be working currently. This PR disables the test: tests/test_modeling_wav2vec2.py::Wav2Vec2ModelIntegrationTest::test_inference_integration. The test will be re-enabled once successful Wav2Vec2 pretraining has been done.
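For reference, disabling such a slow test is typically a plain skip decorator, roughly like this (the reason string here is illustrative, not the PR's wording):

```python
import unittest


class Wav2Vec2ModelIntegrationTest(unittest.TestCase):
    @unittest.skip("Wav2Vec2 pretraining is currently broken; re-enable after a successful run")
    def test_inference_integration(self):
        ...
```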
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13303/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13303", "html_url": "https://github.com/huggingface/transformers/pull/13303", "diff_url": "https://github.com/huggingface/transformers/pull/13303.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13303.patch", "merged_at": 1630317782000 }
https://api.github.com/repos/huggingface/transformers/issues/13302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13302/comments
https://api.github.com/repos/huggingface/transformers/issues/13302/events
https://github.com/huggingface/transformers/pull/13302
981,287,135
MDExOlB1bGxSZXF1ZXN0NzIxMzY1NDky
13,302
Fix loading for newer m2m models
{ "login": "harveenchadha", "id": 30959215, "node_id": "MDQ6VXNlcjMwOTU5MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/30959215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harveenchadha", "html_url": "https://github.com/harveenchadha", "followers_url": "https://api.github.com/users/harveenchadha/followers", "following_url": "https://api.github.com/users/harveenchadha/following{/other_user}", "gists_url": "https://api.github.com/users/harveenchadha/gists{/gist_id}", "starred_url": "https://api.github.com/users/harveenchadha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harveenchadha/subscriptions", "organizations_url": "https://api.github.com/users/harveenchadha/orgs", "repos_url": "https://api.github.com/users/harveenchadha/repos", "events_url": "https://api.github.com/users/harveenchadha/events{/privacy}", "received_events_url": "https://api.github.com/users/harveenchadha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the PR @harveenchadha !\r\nCould you post the link for the new m2m models, I couldn't find anything new here https://github.com/pytorch/fairseq/tree/master/examples/m2m_100", "Hey Suraj, from newer models I mean models trained with newer version of fairseq. I was trying to convert [Indic Trans](https://github.com/AI4Bharat/indicTrans) and ran into issues using this script.", "I see, thanks! PR looks good, just one style check is failing, you could fix it by running `make style` and `make quality`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "gently pinging @harveenchadha :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,635
1,635
NONE
null
Newer versions of m2m models have the `args` parameter set to `None`; `args` are instead present in `cfg['model']`. This PR also fixes the input arguments to the function.
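A sketch of the compatibility shim this describes, assuming the standard fairseq checkpoint layout; the helper name is illustrative:

```python
import torch


def load_fairseq_model_args(checkpoint_path):
    state = torch.load(checkpoint_path, map_location="cpu")
    args = state.get("args")
    if args is None:
        # Newer fairseq versions keep the model arguments under cfg["model"]
        # instead of the top-level "args" entry.
        args = state["cfg"]["model"]
    return args, state
```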
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13302", "html_url": "https://github.com/huggingface/transformers/pull/13302", "diff_url": "https://github.com/huggingface/transformers/pull/13302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13302.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13301/comments
https://api.github.com/repos/huggingface/transformers/issues/13301/events
https://github.com/huggingface/transformers/pull/13301
981,285,845
MDExOlB1bGxSZXF1ZXN0NzIxMzY0NDA3
13,301
Fixing mbart50 with `return_tensors` argument too.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13301", "html_url": "https://github.com/huggingface/transformers/pull/13301", "diff_url": "https://github.com/huggingface/transformers/pull/13301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13301.patch", "merged_at": 1630077726000 }
https://api.github.com/repos/huggingface/transformers/issues/13300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13300/comments
https://api.github.com/repos/huggingface/transformers/issues/13300/events
https://github.com/huggingface/transformers/issues/13300
981,205,622
MDU6SXNzdWU5ODEyMDU2MjI=
13,300
Support for converting LayoutLM to ONNX
{ "login": "nishprabhu", "id": 33579638, "node_id": "MDQ6VXNlcjMzNTc5NjM4", "avatar_url": "https://avatars.githubusercontent.com/u/33579638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nishprabhu", "html_url": "https://github.com/nishprabhu", "followers_url": "https://api.github.com/users/nishprabhu/followers", "following_url": "https://api.github.com/users/nishprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/nishprabhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/nishprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nishprabhu/subscriptions", "organizations_url": "https://api.github.com/users/nishprabhu/orgs", "repos_url": "https://api.github.com/users/nishprabhu/repos", "events_url": "https://api.github.com/users/nishprabhu/events{/privacy}", "received_events_url": "https://api.github.com/users/nishprabhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sure that would be great. LayoutLM is literally only adding 4 additional embedding layers to BERT:\r\n\r\nhttps://github.com/huggingface/transformers/blob/a3f96f366a49bbe2cbdeaebd2e32ccdc1260a1d6/src/transformers/models/layoutlm/modeling_layoutlm.py#L66-L69\r\n\r\nSo I guess it won't be that difficult to support?\r\n\r\ncc @mfuntowicz ", "The guide written here is very helpful: https://huggingface.co/transformers/serialization.html?highlight=onnx#converting-an-onnx-model-using-the-transformers-onnx-package", "Thanks! It was very useful!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "dumb question - how do i format the ORT inputs for LayoutLM onnx? does anyone have an example of LayoutLM ONNX inference?\r\n\r\nI'm trying to pass in the output of a collator into the onxx session. Its not liking the bounding box tensor since it its dimensions are different than input_ids, token_type_ids and attention_mask.\r\n" ]
1,630
1,665
1,633
CONTRIBUTOR
null
# 🚀 Feature request Transformers currently provides ready configurations for converting BERT, BART, RoBERTa and several other models to ONNX. Can we extend this to also support LayoutLM? ## Motivation ONNX is quickly becoming the default runtime environment in many production settings. Ideally, all models supported by the library should have an easy path to conversion. ## Your contribution I am willing to submit a PR that implements this.
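For context, the `transformers.onnx` package expects a small `OnnxConfig` subclass describing the model's inputs; a sketch for LayoutLM could look roughly like this (the extra `bbox` axis mapping is the LayoutLM-specific part, and dummy-input generation would also need to cover it):

```python
from collections import OrderedDict

from transformers.onnx import OnnxConfig


class LayoutLMOnnxConfig(OnnxConfig):
    @property
    def inputs(self):
        # LayoutLM takes the usual BERT inputs plus per-token bounding boxes.
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("bbox", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
                ("token_type_ids", {0: "batch", 1: "sequence"}),
            ]
        )
```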
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13300/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13299/comments
https://api.github.com/repos/huggingface/transformers/issues/13299/events
https://github.com/huggingface/transformers/pull/13299
981,126,059
MDExOlB1bGxSZXF1ZXN0NzIxMjMzMjY2
13,299
Moving `zero-shot-classification` pipeline to new testing.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? And removing the old mixins ! <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13299", "html_url": "https://github.com/huggingface/transformers/pull/13299", "diff_url": "https://github.com/huggingface/transformers/pull/13299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13299.patch", "merged_at": 1630071972000 }
https://api.github.com/repos/huggingface/transformers/issues/13298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13298/comments
https://api.github.com/repos/huggingface/transformers/issues/13298/events
https://github.com/huggingface/transformers/issues/13298
981,098,725
MDU6SXNzdWU5ODEwOTg3MjU=
13,298
Examples: label mapping for text classication tasks are not written into configuration
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger @Rocketknight1 ", "Should be fixed by the PR above, I forgot the `label_to_name` dict was left to None for GLUE tasks." ]
1,630
1,630
1,630
COLLABORATOR
null
Hi, I've shortly discussed this with @patrickvonplaten , and we came to the conclusion that the following scenario is a bug: When using the text classification for a GLUE task, no label mapping will be written into the configuration of the final fine-tuned model. This leads to an unesthetic "label_0" on the model hub, as it can be seen here: ![Bildschirmfoto_2021-08-27_12-17-09](https://user-images.githubusercontent.com/20651387/131112292-ab2fc8f8-2fad-42cf-b436-0a0a5e0a4475.png) One has to manually extend the `config.json`: ```json "id2label": { "0": "NEGATIVE", "1": "POSITIVE" }, "label2id": { "NEGATIVE": 0, "POSITIVE": 1 } ``` to get the following output: ![Bildschirmfoto_2021-08-27_12-17-29](https://user-images.githubusercontent.com/20651387/131112468-bff93d0f-4f3f-428d-ac3b-696ae6e08543.png) The text classification examples should be extended, so that those label mappings are automatically added to the configuration file.
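Until the examples write the mapping themselves, an already fine-tuned checkpoint can be patched with a few lines; this is a sketch, and the path and label names are placeholders taken from the screenshots above:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("path/to/finetuned-model")
config.id2label = {0: "NEGATIVE", 1: "POSITIVE"}
config.label2id = {"NEGATIVE": 0, "POSITIVE": 1}
config.save_pretrained("path/to/finetuned-model")  # rewrites config.json in place
```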
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13298/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13297/comments
https://api.github.com/repos/huggingface/transformers/issues/13297/events
https://github.com/huggingface/transformers/pull/13297
981,066,941
MDExOlB1bGxSZXF1ZXN0NzIxMTgzNDI3
13,297
Moving `translation` pipeline to new testing scheme.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems one of the tests will need to be adapted: \r\n```\r\n=================================== FAILURES ===================================\r\n_____________ MBartEnroIntegrationTest.test_tokenizer_translation ______________\r\n[gw1] linux -- Python 3.7.11 /usr/local/bin/python\r\n\r\nself = <tests.test_tokenization_mbart.MBartEnroIntegrationTest testMethod=test_tokenizer_translation>\r\n\r\n @require_torch\r\n def test_tokenizer_translation(self):\r\n> inputs = self.tokenizer._build_translation_inputs(\"A test\", src_lang=\"en_XX\", tgt_lang=\"ar_AR\")\r\nE TypeError: _build_translation_inputs() missing 1 required positional argument: 'return_tensors'\r\n```", "Updated the test ! " ]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13297/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13297", "html_url": "https://github.com/huggingface/transformers/pull/13297", "diff_url": "https://github.com/huggingface/transformers/pull/13297.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13297.patch", "merged_at": 1630059977000 }
https://api.github.com/repos/huggingface/transformers/issues/13296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13296/comments
https://api.github.com/repos/huggingface/transformers/issues/13296/events
https://github.com/huggingface/transformers/issues/13296
981,063,531
MDU6SXNzdWU5ODEwNjM1MzE=
13,296
__version__ attribute missing in mode config for sentence-transformers/paraphrase-mpnet-base-v2
{ "login": "pratikchhapolika", "id": 11159549, "node_id": "MDQ6VXNlcjExMTU5NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11159549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pratikchhapolika", "html_url": "https://github.com/pratikchhapolika", "followers_url": "https://api.github.com/users/pratikchhapolika/followers", "following_url": "https://api.github.com/users/pratikchhapolika/following{/other_user}", "gists_url": "https://api.github.com/users/pratikchhapolika/gists{/gist_id}", "starred_url": "https://api.github.com/users/pratikchhapolika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pratikchhapolika/subscriptions", "organizations_url": "https://api.github.com/users/pratikchhapolika/orgs", "repos_url": "https://api.github.com/users/pratikchhapolika/repos", "events_url": "https://api.github.com/users/pratikchhapolika/events{/privacy}", "received_events_url": "https://api.github.com/users/pratikchhapolika/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe of interest to @nreimers ", "Hi @pratikchhapolika \r\nThe above code works well with the most recent sentence-transformers version v1 (v1.2.1) or (better) v2 (>= 2.0.0). \r\n\r\nWith old sentence-transformers versions 1 the model does not work, as the folder structure has changed to make it compatible with the hub.\r\n\r\nA folder 0_Transformer is not required and was removed in v2, so that models can be also loaded with HF transformers from the hub.\r\n\r\nJust update to v1.2.1 or v2.0.0 and everything works.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,632
1,632
NONE
null
I've manually downloaded the model `paraphrase-mpnet-base-v2`, and it appears that the `SentenceTransformer.py` code is requesting a field `__version__` in the model config that doesn't appear to be there. I have read the same topic: https://github.com/UKPLab/sentence-transformers/issues/184 but it doesn't solve the issue. In the link below: https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2 **Code given in the link:** ``` from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/paraphrase-mpnet-base-v2') embeddings = model.encode(sentences) print(embeddings) ``` **I guess the above code will not work as mentioned in the link. Please correct this if it's an ongoing issue.** For Sentence Transformers, I guess we need the following files: ``` The folder should consist of these files: 0_Transformer/ 1_Pooling/ config.json modules.json ``` But when we download the model `paraphrase-mpnet-base-v2` and unzip it, it doesn't have `0_Transformer` in it. Any suggestions?
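If updating `sentence-transformers` is not an option, the same checkpoint can also be embedded with plain `transformers` plus manual mean pooling, roughly as shown on the model card; this is a sketch:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
model = AutoModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, hidden)

# Mean pooling over the non-padding tokens only.
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)
```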
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13296/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13295/comments
https://api.github.com/repos/huggingface/transformers/issues/13295/events
https://github.com/huggingface/transformers/issues/13295
981,012,934
MDU6SXNzdWU5ODEwMTI5MzQ=
13,295
GPT2 model state dictionary Tensor types are not matching with pytorch
{ "login": "snaik2016", "id": 18183245, "node_id": "MDQ6VXNlcjE4MTgzMjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/18183245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snaik2016", "html_url": "https://github.com/snaik2016", "followers_url": "https://api.github.com/users/snaik2016/followers", "following_url": "https://api.github.com/users/snaik2016/following{/other_user}", "gists_url": "https://api.github.com/users/snaik2016/gists{/gist_id}", "starred_url": "https://api.github.com/users/snaik2016/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snaik2016/subscriptions", "organizations_url": "https://api.github.com/users/snaik2016/orgs", "repos_url": "https://api.github.com/users/snaik2016/repos", "events_url": "https://api.github.com/users/snaik2016/events{/privacy}", "received_events_url": "https://api.github.com/users/snaik2016/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @snaik2016,\r\n\r\nNote that `transformer.h.0.attn.bias` is actually not the bias weights of the attention layer but a pre-computed causal mask: https://github.com/huggingface/transformers/blob/319d840b46fd3a13e0434de9de69bd74a2f22f43/src/transformers/models/gpt2/modeling_gpt2.py#L130 \r\n\r\n=> This means that you don't need to pass this parameter - it'll be generated automatically.\r\n\r\nTLDR; this behavior is expected IMO", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.0.1 and 4.4.2 - Platform: windows - Python version: 3.6 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: GPT2 - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: @patrickvonplaten, @LysandreJik 1. model = AutoModelForCausalLM.from_pretrained('gpt2', force_download=True) 2. Download size is 548M; the size on disk is 535M. 3. model.save_pretrained(<some_dir>); the model size comes down to 498M. 4. Now a freshly downloaded model (not using from_pretrained), which has a size of 535M, is loaded using torch.load. 5. for k, v in model.items(): ... if 'attn.bias' in k: ... print(v.type()) 6. The types are FloatTensor, whereas if the same code is run with the model from step 1, I get: key name transformer.h.0.attn.bias = type = **torch.ByteTensor**, key name transformer.h.0.attn.c_attn.bias = type = torch.FloatTensor. Is this expected? <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Model serialization and deserialization shouldn't change the tensor types. <!-- A clear and concise description of what you would expect to happen. -->
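A quick sanity check, as a sketch, of the point made in the comments above: `transformer.h.*.attn.bias` is a precomputed lower-triangular causal mask, not learned attention weights, so its dtype is irrelevant to model quality:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
mask = model.state_dict()["transformer.h.0.attn.bias"]  # shape (1, 1, n_ctx, n_ctx)
n_ctx = mask.shape[-1]
expected = torch.tril(torch.ones(n_ctx, n_ctx)).view(1, 1, n_ctx, n_ctx)
# The buffer's dtype varies across versions (uint8 vs float), hence the cast.
print(torch.equal(mask.to(torch.float32), expected))
```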
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13295/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13294/comments
https://api.github.com/repos/huggingface/transformers/issues/13294/events
https://github.com/huggingface/transformers/pull/13294
981,012,229
MDExOlB1bGxSZXF1ZXN0NzIxMTM4OTcy
13,294
albert flax
{ "login": "kamalkraj", "id": 17096858, "node_id": "MDQ6VXNlcjE3MDk2ODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17096858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kamalkraj", "html_url": "https://github.com/kamalkraj", "followers_url": "https://api.github.com/users/kamalkraj/followers", "following_url": "https://api.github.com/users/kamalkraj/following{/other_user}", "gists_url": "https://api.github.com/users/kamalkraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/kamalkraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kamalkraj/subscriptions", "organizations_url": "https://api.github.com/users/kamalkraj/orgs", "repos_url": "https://api.github.com/users/kamalkraj/repos", "events_url": "https://api.github.com/users/kamalkraj/events{/privacy}", "received_events_url": "https://api.github.com/users/kamalkraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten \r\nDone changes according to your suggestions.\r\nThanks for the review ", "Hey @kamalkraj , thanks for that PR! Can't wait to road-test it :hugs: \r\n\r\nI tried training several ALBERT models with the official implementation, but had no luck in training a good performing model.", "\r\n\r\n\r\n> I tried training several ALBERT models with the official implementation, but had no luck in training a good performing model.\r\n\r\npre-training with TF 1 code?", "Yeah, it was the TF 1 code base, and I've also trained various model sizes. Let's see if I have more luck using the FLAX implementation 😅", "Awesome addition @kamalkraj - thanks a lot :-) Left a couple of final additions", "@patrickvonplaten \r\nDone changes according to review.", "Slow tests are passing on CPU - thanks for the model addition @kamalkraj ! " ]
1,630
1,631
1,630
CONTRIBUTOR
null
# What does this PR do? ALBERT Flax Model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik @patrickvonplaten
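For reference, a minimal usage sketch of the Flax ALBERT model this PR adds; the checkpoint name is an assumption, and `from_pt=True` may be needed if no Flax weights exist on the Hub for it:

```python
from transformers import AlbertTokenizerFast, FlaxAlbertModel

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = FlaxAlbertModel.from_pretrained("albert-base-v2")

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```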
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13294/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13294", "html_url": "https://github.com/huggingface/transformers/pull/13294", "diff_url": "https://github.com/huggingface/transformers/pull/13294.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13294.patch", "merged_at": 1630337368000 }
https://api.github.com/repos/huggingface/transformers/issues/13293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13293/comments
https://api.github.com/repos/huggingface/transformers/issues/13293/events
https://github.com/huggingface/transformers/issues/13293
980,966,469
MDU6SXNzdWU5ODA5NjY0Njk=
13,293
DistilBertTokenizer for distilbert-base-multilingual-cased is unable to encode / decode Japanese characters properly adding unnecessary characters in between
{ "login": "shreyajain4", "id": 23450481, "node_id": "MDQ6VXNlcjIzNDUwNDgx", "avatar_url": "https://avatars.githubusercontent.com/u/23450481?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shreyajain4", "html_url": "https://github.com/shreyajain4", "followers_url": "https://api.github.com/users/shreyajain4/followers", "following_url": "https://api.github.com/users/shreyajain4/following{/other_user}", "gists_url": "https://api.github.com/users/shreyajain4/gists{/gist_id}", "starred_url": "https://api.github.com/users/shreyajain4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shreyajain4/subscriptions", "organizations_url": "https://api.github.com/users/shreyajain4/orgs", "repos_url": "https://api.github.com/users/shreyajain4/repos", "events_url": "https://api.github.com/users/shreyajain4/events{/privacy}", "received_events_url": "https://api.github.com/users/shreyajain4/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
NONE
null
I have used a Google Colab notebook. `transformers` version used is 4.9.2. Model used: distilbert-base-multilingual-cased. @LysandreJik I am facing an issue with the tokenizer: the problem arises when using the tokenizer on Japanese text. I have attached an example script of what I am going to suggest. ![image](https://user-images.githubusercontent.com/23450481/131085602-be7ffceb-c408-4243-92b4-8abd99b4ec5f.png) I wanted to obtain the token ids for the string "祝い めでた 動画". When I used the list of token ids to obtain the corresponding string, I obtained another string, "祝い めでた 動 画". It seems like there is a bug in one of the functions. Steps to reproduce the behavior: ```python from transformers import DistilBertTokenizer import torch tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-multilingual-cased') print(tokenizer.decode([ 101, 5914, 15221, 1965, 12236, 20058, 2621, 115384, 102])) print(tokenizer.decode([ 101, 5914, 1906, 1965, 12236, 20058, 2621, 5618, 102])) inputs = tokenizer("祝い めでた 動画", return_tensors="pt") print(inputs) ``` Expected behavior: I should have obtained the same string when decoding the token ids obtained from the input string.
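For anyone debugging this, a sketch that makes the behavior visible: BERT-style multilingual tokenizers insert whitespace around CJK characters before applying WordPiece, so `decode(encode(text))` is not guaranteed to reproduce the original spacing:

```python
from transformers import DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
text = "祝い めでた 動画"
ids = tokenizer(text)["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))  # shows how the kanji were split apart
print(tokenizer.decode(ids, skip_special_tokens=True))
```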
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13293/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13293/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13292/comments
https://api.github.com/repos/huggingface/transformers/issues/13292/events
https://github.com/huggingface/transformers/pull/13292
980,962,520
MDExOlB1bGxSZXF1ZXN0NzIxMDk4NTkw
13,292
Add REALM
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "Hi @qqaatw,\r\n\r\nThank you so much for putting effort on this PR, providing pre-trained REALM models in Pytorch Transformers API.\r\n\r\nI am wondering whether your REALM models in pytorch can reproduce Table 2 of their original [paper](https://arxiv.org/pdf/2002.08909.pdf)?\r\n\r\nAlternatively, do you verify their tensorflow pre-train model has the same embeddings as your converted pytorch models given arbitrary input sequence?\r\n\r\nThanks again for the awesome work! ", "Hello @OctoberChang, thanks for your reply!\r\n\r\nThis is my first time trying to port a model from Tensorflow, so I may need some time to clarify the structure and behavior of the original model. Currently, the retriever part was successfully converted.\r\n\r\nRegarding your concerns, I've verified the retriever's behavior by feeding the same inputs to TensorFlow and PyTorch models respectively, and then checking their outputs that are nearly identical. For now, I may not have enough resources to complete those ablation experiments sadly, but I think it can be reproduced as long as the PyTorch model's behavior is nearly the same as that of Tensorflow.", "> Hello @OctoberChang, thanks for your reply!\r\n> \r\n> This is my first time trying to port a model from Tensorflow, so I may need some time to clarify the structure and behavior of the original model. Currently, the retriever part was successfully converted.\r\n> \r\n> Regarding your concerns, I've verified the retriever's behavior by feeding the same inputs to TensorFlow and PyTorch models respectively, and then checking their outputs that are nearly identical. For now, I may not have enough resources to complete those ablation experiments sadly, but I think it can be reproduced as long as the PyTorch model's behavior is nearly the same as that of Tensorflow.\r\n\r\nAwesome! Looking forward to this PR and the pre-trained Realm models in Pytorch Transformers!\r\n", "The reason I didn't add `RealmForQuestionAnswering` is the following:\r\n\r\n1. The fine-tuning code is placed at another project, [ORQA](https://github.com/google-research/language/tree/master/language/orqa), which has its own [paper](https://arxiv.org/abs/1906.00300).\r\n2. The architecture of fine-tuned models is not compatible with the existing question answering architecture in `transformers`.\r\n\r\nTherefore, I think residing the fine-tuning code to research_project folder or making it a new model would be more appropriate.\r\n\r\n ", "Tests related to REALM have passed! Some failures seem related to Flax Big Bird model.\r\n@sgugger @LysandreJik @patrickvonplaten ", "@OctoberChang Do you have any suggestion on this PR? :-)", "@patrickvonplaten Thank you a lot for the comments and review. I've left replies on the threads.", "Hi ! Sure we can add the index in `datasets`.\r\nDo you know what data they used exactly ? Are the texts available ? If yes, did they also share the embeddings of the documents ?\r\n\r\nOtherwise we can just build an index from scratch using Wikipedia and the model to encode the documents", "Hey @qqaatw,\r\n\r\nSorry to answer that late. We just had a long discussion internally with @lhoestq on how to best integrate REALM into `transformers`.\r\nOur understanding of [ORQA](https://arxiv.org/pdf/1906.00300.pdf) and [REALM](https://arxiv.org/pdf/2002.08909.pdf) and how it relates to the integration to `transformes` is the following:\r\n\r\n- The ORQA paper was published before REALM. In ORQA only the retriever was pretrained. 
REALM was published afterwards and is (subjectively) an improved pre-training method for knowledge-augmented language models. REALM compares its methods to ORQA by evaluating the models on open-ended question answering, *i.e.*:\r\n\r\n```\r\nWe evaluate our approach by fine-tuning the models pre-trained with REALM on the task of Opendomain Question Answering (Open-QA), one of the most\r\nknowledge-intensive tasks in natural language processing. We evaluate on three popular Open-QA benchmarks (NATURALQUESTIONS-OPEN, WEBQUESTIONS, and\r\nCURATEDTREC) and compare to state-of-the-art Open-QA\r\nmodels, including both extremely large models that store\r\nknowledge implicitly (such as T5) as well as previous approaches that also use a knowledge retriever to access external knowledge, but implement retrieval in a more heuristic fashion (Lee et al., 2019 - ORQA; Min et al., 2019a; Asai et al.,\r\n2019). REALM achieves new state-of-the-art results on all\r\nthree benchmarks, significantly outperforming all previous\r\nsystems by 4-16% absolute accuracy. We also demonstrate\r\nqualitative benefits of REALM, including interpretability\r\nand modularity.\r\n```\r\n(on page 2)\r\n\r\n=> this means that the REALM paper does not just provide a pretraining method, but also fine-tuned checkpoints for Open-QA.\r\n\r\n- As a conclusion, REALM does not necessarily rely on the code of `ORQA`. REALM provides both pre-trained checkpoints: https://github.com/google-research/language/tree/master/language/realm#pre-trained-model-checkpoints as well as fine-tuned ones: https://github.com/google-research/language/tree/master/language/realm#pre-trained-model-checkpoints which from a logical point of view are 1-to-1 related to the REALM paper (since REALM evaluated its models on Open-QA). Therefore, in our opinion, the community should be able to load all of the following checkpoints within `modeling_realm.py`:\r\n- \r\n```\r\nREALM pre-trained with CC-News as the target corpus and Wikipedia as the knowledge corpus is available at gs://realm-data/cc_news_pretrained\r\nREALM fine-tuned to perform open-domain QA:\r\non WebQuestions: gs://realm-data/orqa_wq_model_from_realm\r\non NaturalQuestions: gs://realm-data/orqa_nq_model_from_realm\r\n```\r\n\r\nTherefore, I think we should add all the logic of your [codebase](https://github.com/qqaatw/pytorch-realm-orqa): `RealmSearcher` and `RealmReader` as well as `RealmForOpenQA` in `modeling_realm.py`. This also has a lot of advantages from a community's point of view:\r\n\r\n1. REALM without an implementation for QA cannot really be used by most people as pretraining is just too expensive\r\n2. 
REALM with QA would allow us to nicely demo the model.\r\n\r\n=> would it be ok for you to implement `RealmSearcher` and `RealmReader` similar to how you've done it in [your code-base](https://github.com/qqaatw/pytorch-realm-orqa) as well as a `RealmForOpenQA` class that wraps both of those classes so that QA can be done with a single model instance?\r\n\r\nIn a first step, I think we can transfer most of your code from https://github.com/qqaatw/pytorch-realm-orqa into `modeling_realm.py` and add the integration test - once that passes we can refactor a bit.\r\n\r\nIn a second step, I think we should think a bit about a better abstraction of the retrieval part - ideally we can implement a `RealmRetriever` similar in design to the `RagRetriever` in https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py (@lhoestq and I are happy to help with that!)\r\n\r\nDoes this make sense to you? \r\n\r\ncc @lhoestq \r\n", "Thank you very much for your detailed replies @patrickvonplaten, I agree with you that we should add `RealmSearcher`, `RealmReader`, and `RealmForOpenQA` together.\r\n\r\nIn fact, I initially thought that because *REALM* and *ORQA* belong to different codebases, they should be put into different models even if fine-tuned *ORQA* checkpoints of *REALM* are provided. However, considering UX and the costly *REALM* pre-training procedure for most users, integrating them into a single model in `transformers` totally makes sense!\r\n\r\nThe branch of the `transformers` codebase in https://github.com/qqaatw/pytorch-realm-orqa is based on the branch of this PR, so I can merge it here seamlessly. \r\n\r\nBefore doing so, there are some concerns that we should clarify first:\r\n\r\n1. The term `Searcher` is actually called `Retriever` in ORQA's codebase; I changed it to `Searcher` in order to prevent a name conflict between REALM's retriever and ORQA's retriever (their logic is slightly different, but they have the same purpose). Do you think this naming is OK?\r\n2. Currently `RealmSearcher` is leveraging the `ScaNN` Python package as the vector similarity searcher; would it be OK to add this package to `transformers`' requirements in `setup.py`? If we can't, there are two possible alternatives:\r\n - Asking users to install it manually in the model's docs.\r\n - Implementing brute-force matrix multiplication to replace `ScaNN` (mentioned [here](https://github.com/google-research/language/tree/master/language/realm#code)).\r\n3. `block records` and `block embs` are bound together: `block records` is the corpus containing billions of documents, and `block embs` is a tensor of shape (num_block_records, retriever_proj_size) pre-computed from the corpus. If we upload `block records` into `datasets`' index, we should also tell users which model checkpoints on the hub contain the `block embs` that correspond to the `block records` in `datasets`.\r\n4. Would it be suitable to upload Google's official pre-trained and fine-tuned checkpoints to the hub under my HF account, or would uploading them under Google's org account be a better choice? If so, do I need to get permission from them first? (Although I've uploaded some checkpoints to my account for testing.)", "Hey @qqaatw,\r\n\r\nThanks for your answer:\r\n\r\n1) I think both \"Searcher\" and \"Retriever\" are fine! - happy to stick with \"Searcher\"\r\n2) Yes, good question - we'll probably have to add an optional dependency to `setup.py` indeed. We can take care of this later though. 
The idea is to verify that scann is installed as soon as `Realm....from_pretrained(...)` is called. For now, feel free to just add it and I think we can take care of it later. It would be nice to have both the possibility to use ScaNN as well as brute-force matrix multiplication (maybe we can add a switch to the REALM config) - something like `use_scann`?\r\n3) cc @lhoestq what do you think here? Should we add both `block records` and `block embeds` to `datasets` with `block embeds` being model-specific, or add just `block records` to `datasets` and `block embeds` to the corresponding model repo?\r\n4) Yes, for now feel free to upload under your account name.\r\n\r\nThanks a lot for your work on this :-)", "Hello @patrickvonplaten, thanks for your answer.\r\n\r\nI've added `RealmSearcher`, `RealmReader`, and `RealmForOpenQA` into the model; also, I've added a `use_scann` option in the config for switching the search method (a brute-force matrix multiplication searcher has been implemented).\r\n\r\nWe can complete integration tests as soon as the strategy for storing block records is decided.", "> Hello @patrickvonplaten, thanks for your answer.\r\n> [...]\r\n\r\nAwesome work @qqaatw :-) Did you already push the additions? I can't see `RealmForOpenQA` in `modeling_realm.py` yet. @lhoestq - I think we should store the block records in `datasets`, no?", "@patrickvonplaten The changes have been merged, thanks!", "@qqaatw - ok, I think we have a general outline of how to implement everything in mind now. I think this will be a slightly bigger PR than originally expected, but the result should be a very nice model :-) I'll try to help you with the integration as much as possible in the coming days/weeks! \r\n\r\nIn a first step, could you add a very hacky way of making an integration test pass? This way I can see exactly how the components interact with each other and can reproduce the results locally - feel free to retrieve the block records in whatever way is most suitable for you right now. We'll iterate from this first version then :-)", "I believe @patrickvonplaten and @lhoestq are on it, but it's a very big contribution (thanks @qqaatw!!) so it might take a bit of time. Sorry about that!", "Hey @qqaatw, \r\n\r\nI freed up some time next week to dive into this! Very sorry for the delay", "@patrickvonplaten,\n\nThanks for taking the time to dive into this, especially during the holidays. I'll keep tracking this thread to discuss more in detail.\n\nMerry Christmas!", "> @patrickvonplaten,\r\n> \r\n> Thanks for taking the time to dive into this, especially during the holidays. I'll keep tracking this thread to discuss more in detail.\r\n> \r\n> Merry Christmas!\r\n\r\nHey @qqaatw,\r\n\r\nThanks a lot for the nice words! Merry Christmas to you as well! \r\n\r\nThe docs are now cleaned and we can start to look at how to best integrate REALM :-) \r\nIn a first step, it would be amazing if we could make sure that the performance of REALM more or less matches the paper. 
I see that you have some very nice analysis in your repo here: https://github.com/qqaatw/pytorch-realm-orqa#naturalquestionsnq\r\n\r\nDo you think you could post a command here that allows us to reproduce those results with the current code, *e.g.* using this format:\r\n\r\n```python\r\n model = RealmForOpenQA.from_pretrained( \r\n r\"qqaatw/realm-orqa-nq-searcher\",\r\n r\"qqaatw/realm-orqa-nq-reader\", \r\n BLOCK_RECORDS_PATH, \r\n ) \r\n\r\n question = \"Who is the pioneer in modern computer science?\"\r\n searcher_output, reader_output, predicted_answer = model(question)\r\n\r\n self.assertEqual(predicted_answer, \"alan mathison turing\")\r\n```\r\njust like in the test `test_inference_open_qa`. Once we have verified this, I think I'm able to quickly onboard @lhoestq and others to get this PR merged. Let me know if this is not very clear :-)\r\n\r\nIn a first step I downloaded `https://storage.cloud.google.com/orqa-data/enwiki-20181220/blocks.tfr` and ran `test_inference_open_qa`. The weird thing is that when running it multiple times, sometimes I'm getting the correct answer, but unfortunately I also sometimes (20% of the time) get the following error: \r\n\r\n```bash\r\nE AssertionError: 'charles babbage' != 'alan mathison turing' \r\nE - charles babbage \r\nE + alan mathison turing \r\ntests/test_modeling_realm.py:489: AssertionError \r\n```\r\n\r\nSo there seems to be some kind of randomness in the forward pass - could you verify whether this is the case for you as well? However, 80% of the time I'm getting the correct solution. \r\nNote that the reason could also be tiny differences in logits precision for multiple forward passes, which I've previously seen with RAG as well. So if you can't reproduce the result, it's fine as is I think :-) The important part would now be to verify your eval results here: https://github.com/qqaatw/pytorch-realm-orqa#naturalquestionsnq with this HF implementation.\r\n\r\nBTW, I'm using the following envs (transformers on this branch):\r\n\r\n```\r\n- `transformers` version: 4.16.0.dev0 (this branch)\r\n- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.12\r\n- PyTorch version (GPU?): 1.10.0+cu102 (True)\r\n- Tensorflow version (GPU?): 2.7.0 (False)\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\r\n```\r\n\r\nand \r\n\r\n```\r\nscann version: 1.2.4\r\n```\r\n\r\nin case this might be a reason for the randomness in the test.", "Hi @patrickvonplaten,\r\n\r\nThanks for testing this out. I cannot reproduce the failure with the brute-force searcher but can reproduce it with the ScaNN searcher.\r\n\r\nIt seems that the result from ScaNN is sometimes not deterministic using the same ScaNN parameters as those in the `orqa` codebase. In contrast, because the brute-force searcher always computes all inner products and finds the top K highest scores, it produces consistent results. 
\r\n\r\nHere is the way to use the brute-force searcher:\r\n```python\r\n realm_config=RealmConfig(use_scann=False)\r\n model = RealmForOpenQA.from_pretrained( \r\n r\"qqaatw/realm-orqa-nq-searcher\",\r\n r\"qqaatw/realm-orqa-nq-reader\", \r\n BLOCK_RECORDS_PATH,\r\n config=realm_config, \r\n ) \r\n\r\n question = \"Who is the pioneer in modern computer science?\"\r\n searcher_output, reader_output, predicted_answer = model(question)\r\n\r\n assert predicted_answer == \"alan mathison turing\"\r\n```\r\n\r\nMy testing env:\r\n```\r\n- `transformers` version: 4.16.0.dev0\r\n- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- PyTorch version (GPU?): 1.9.0+cu111 (True)\r\n- Tensorflow version (GPU?): 2.6.0 (False)\r\n- Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu)\r\n- Jax version: 0.2.24\r\n- JaxLib version: 0.1.69\r\n- Using GPU in script?: False\r\n- Using distributed or parallel set-up in script?: False\r\n\r\nScaNN version:\r\nscann 1.2.3\r\n```\r\n\r\n\r\nFor benchmark reproductions, because they require some data loading, preprocessing, and an evaluation function, it might not be practical to paste the entire code here. I just wrote a compact [script](https://github.com/qqaatw/pytorch-realm-orqa/blob/master/benchmark.py) that reproduces both the NQ and WQ benchmark results using the current up-to-date HF implementation.\r\n\r\nTo run NQ:\r\n\r\n```bash\r\npython benchmark.py \\\r\n --dataset_name_path natural_questions \\\r\n --retriever_pretrained_name qqaatw/realm-orqa-nq-searcher \\\r\n --checkpoint_pretrained_name qqaatw/realm-orqa-nq-reader \\\r\n --block_records_path ./data/enwiki-20181220/blocks.tfr \\\r\n --device cuda\r\n```\r\n\r\nTo run WQ:\r\n\r\n```bash\r\npython benchmark.py \\\r\n --dataset_name_path web_questions \\\r\n --retriever_pretrained_name qqaatw/realm-orqa-wq-searcher \\\r\n --checkpoint_pretrained_name qqaatw/realm-orqa-wq-reader \\\r\n --block_records_path ./data/enwiki-20181220/blocks.tfr \\\r\n --device cuda\r\n```", "> Hi @patrickvonplaten,\r\n> \r\n> Thanks for testing this out. I cannot reproduce the failure with the brute-force searcher but can reproduce it with the ScaNN searcher.\r\n> [...]
\r\n\r\nThanks for diving into the random output story. I indeed only tried the \"ScaNN\" approach. Given that \"ScaNN\" has a TF dependency, I think it's better anyway to switch to `use_scann=False`. Just tried it and it seems to work well. \r\n\r\nThanks a lot for providing the benchmarking script - I'll run it today and start to think about how we can best integrate the model :-)\r\nThat's great! That's exactly what I was looking for :-) ", "@qqaatw - I can reproduce the eval results, which is great. I've started to modify the structure of the elements a bit. I still need to do some modifications tomorrow, but after that I could maybe hand back to you :-) ", "Hey @qqaatw,\r\n\r\nI think we are pretty close to having a version that can be integrated into `transformers`. I've now finished the main changes, which were:\r\n- No `tokenizer` should ever be saved in a model, which is why I moved all tokenizer logic out of the model's forward pass\r\n- Non-PyTorch code related to retrieval should be handled by a `RealmRetriever` class, which I have added and integrated into the model.\r\n- The model `RealmForOpenQA` now **only** does PyTorch matrix multiplication and no tokenizer or retrieval-like operations. 
This way `RealmForOpenQA` keeps the same format as all other Transformer models.\r\n\r\nFor now, I think it is enough to ensure, whenever commits are finished, that the test `test_inference_open_qa` still works correctly. \r\n\r\n@qqaatw - the next big step now is to fully remove the `RealmSearcher` class and to load all necessary weights directly in `RealmForOpenQA`. The class `RealmForOpenQA` also should not have a special `from_pretrained(...)` or `save_pretrained(...)` method but should be able to use the default ones. \r\n\r\nWe are looking for the following API in the end:\r\n\r\n```python\r\nquestion = \"some question\"\r\n\r\nmodel_id = \"/path/to/full/realm/model/on/hub/\"\r\n\r\ntokenizer = RealmTokenizer.from_pretrained(model_id)\r\nretriever = RealmRetriever.from_pretrained(model_id) # will load the heavy tf records file\r\n\r\nmodel = RealmForOpenQA.from_pretrained(model_id, retriever=retriever)\r\n\r\nquestion_ids = tokenizer(question, return_tensors=\"pt\").input_ids\r\npredicted_ids = model(question_ids)[0]\r\n\r\nanswer = tokenizer.decode(predicted_ids)\r\n```\r\n\r\nwhich is very similar to https://huggingface.co/facebook/rag-token-nq#usage", "To get ```model = RealmForOpenQA.from_pretrained(model_id, retriever=retriever)``` you will probably have to do the following:\r\n\r\n1. Delete the special `save_pretrained(...)` method in `RealmForQA`\r\n2. Load `RealmForQA` as done in `test_inference_open_qa` now.\r\n3. Then save the model as **one** using the default `save_pretrained(...)` method (since you deleted the old one) in `/temp_dir`\r\n4. Delete the special `from_pretrained(...)` method in `RealmForQA` \r\n5. Now try to load the model with `model = RealmForOpenQA.from_pretrained(model_id, retriever=retriever)` where `model_id` is your just-saved `/temp_dir` directory.\r\n6. Debug this until `test_inference_open_qa` works and then upload the single `pytorch_model.bin` file to a repo", "Let me know if you have difficulties with this and I can try to take a look :-)", "Hey @patrickvonplaten,\r\n\r\nThanks a lot for doing these modifications. I will work on it tomorrow.", "Hey @qqaatw,\r\n\r\nI've now done the main modifications to bring the structure in line with what is common for retrieval-augmented models in `transformers`. See my comments here: \r\nhttps://github.com/huggingface/transformers/pull/13292#discussion_r777452449 and here: https://github.com/huggingface/transformers/pull/13292/files#r777456805. Is this ok for you? Does that make sense? Do you want to discuss anything? \r\n\r\nCould you maybe copy the checkpoint https://huggingface.co/patrickvonplaten/realm-open-qa under your namespace and play around with it to see if you're ok with those changes? \r\n\r\nThe next steps would be: \r\n- refactor the code (make it cleaner, more comments, treat the suggestions above, add `RealmForOpenQAOutputs`, etc...)\r\n- Then we also need to wait for @lhoestq to see how we can best store the tf records in a retriever\r\n\r\nLet me know if you have any questions or would like to discuss anything :-) ", "Hi @patrickvonplaten,\r\n\r\nI've completed most of the modifications and still need to do some improvements.\r\n\r\nLet me know if there is anything that needs to be improved or fixed.\r\n", "Great work @qqaatw! We are very close to merging this PR, I think. 
Everything fits very well now IMO and there is not much left to do:\r\n\r\n- Rename `bert` to `realm` \r\n- Add a `test_realm_retrieval.py` file (I can take care of it after talking to @lhoestq)\r\n- See how to best integrate the block records in the retriever's `.from_pretrained(...)` method (I'll also take care of this).\r\n\r\nWe could then also think a bit about how to best demo this model :-) Maybe in a Space (depending on how big the required RAM is). Also, I think a blog post could be a great idea", "@qqaatw - I just discussed with @lhoestq that we should store the block records as a numpy file on the hub. I've done some final changes to the retriever and uploaded the block records in numpy format here: https://huggingface.co/qqaatw/realm-orqa-nq-openqa/commit/ea416e495785cd9612f659b69af3a7857c91fe2a \r\nNote that this has the **big** advantage that your PyTorch code is now fully independent of TF -> there is no need to install TF anymore :-)\r\n\r\nI think the API is now more or less complete. The test `test_inference_open_qa` should now work without having to call upon any hard-coded path. Could you try it out on your side?\r\n\r\nI noticed that you already corrected your checkpoint to have `realm` in the weight names -> that's great. I've done some updates in the corresponding modeling file (hope you see this before doing the same thing yourself). \r\n\r\nI think we can merge the PR by this week :-) It would be great if you could do some final clean-ups (fixing potentially failing tests) and, if you want, you could also give the `test_retrieval_realm.py` file a try (otherwise I can do it tomorrow or Friday). \r\n\r\nLet me know if you have any questions or need help :-)" ]
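For readers following along, the end state the thread converges on can be condensed into one snippet. This is only a sketch of the API outlined above, not the final merged signature: the checkpoint id `qqaatw/realm-orqa-nq-openqa` is the repo referenced in the last comment, and the assumption that the model's first return value holds the predicted answer token ids comes from the API draft quoted earlier in the thread.

```python
# Sketch of the consolidated REALM Open-QA API discussed in this PR thread.
from transformers import RealmForOpenQA, RealmRetriever, RealmTokenizer

model_id = "qqaatw/realm-orqa-nq-openqa"

tokenizer = RealmTokenizer.from_pretrained(model_id)
retriever = RealmRetriever.from_pretrained(model_id)  # loads the heavy block records (numpy file)
model = RealmForOpenQA.from_pretrained(model_id, retriever=retriever)

question = "Who is the pioneer in modern computer science?"
question_ids = tokenizer(question, return_tensors="pt").input_ids
predicted_ids = model(question_ids)[0]  # assumed: first output = answer token ids

print(tokenizer.decode(predicted_ids))  # expected: "alan mathison turing"
```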
1,630
1,648
1,642
CONTRIBUTOR
null
# What does this PR do? This PR adds REALM. - Original paper: https://arxiv.org/abs/2002.08909 - Code and checkpoints: https://github.com/google-research/language/tree/master/language/realm ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Closes https://github.com/huggingface/transformers/issues/3497
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13292/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13292", "html_url": "https://github.com/huggingface/transformers/pull/13292", "diff_url": "https://github.com/huggingface/transformers/pull/13292.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13292.patch", "merged_at": 1642508653000 }
https://api.github.com/repos/huggingface/transformers/issues/13291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13291/comments
https://api.github.com/repos/huggingface/transformers/issues/13291/events
https://github.com/huggingface/transformers/issues/13291
980,839,393
MDU6SXNzdWU5ODA4MzkzOTM=
13,291
torch longformer to tf longformer
{ "login": "aixuedegege", "id": 19356707, "node_id": "MDQ6VXNlcjE5MzU2NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/19356707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aixuedegege", "html_url": "https://github.com/aixuedegege", "followers_url": "https://api.github.com/users/aixuedegege/followers", "following_url": "https://api.github.com/users/aixuedegege/following{/other_user}", "gists_url": "https://api.github.com/users/aixuedegege/gists{/gist_id}", "starred_url": "https://api.github.com/users/aixuedegege/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aixuedegege/subscriptions", "organizations_url": "https://api.github.com/users/aixuedegege/orgs", "repos_url": "https://api.github.com/users/aixuedegege/repos", "events_url": "https://api.github.com/users/aixuedegege/events{/privacy}", "received_events_url": "https://api.github.com/users/aixuedegege/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can't help you with the first question, but for the second:\r\n\r\n> how to turn it into a tf model\r\n\r\n```\r\nfrom transformers import TFLongformerModel\r\n\r\nmodel = TFLongformerModel.from_pretrained(\"xcjthu/Lawformer\", from_pt=True)\r\n```", "Hey @aixuedegege,\r\n\r\nplease use the forum: https://discuss.huggingface.co/ for questions like your first one as it's not really a bug of the library, but more a general question. Your chances of getting a good answer should be higher there :-)", "@NielsRogge Thanks for you answer. But using your code I got some warning \"Some weights or buffers of the TF 2.0 model TFLongformerModel were not initialized from the PyTorch model and are newly initialized\". I will go to https://discuss.huggingface.co/ for discussing this." ]
1,630
1,630
1,630
NONE
null
# 🚀 Feature request 1. How to use this model in Triton 2. How to turn it into a TF model ## Motivation I downloaded the model from "https://huggingface.co/xcjthu/Lawformer/tree/main", which is a torch model. It cannot be transformed to a .pt file using "torch.jit.trace". I want to use Triton to serve it, so I want to try converting it to a TF model and then exporting it as a SavedModel. ## Your contribution Sorry, I have no idea how to implement it; I need your help haha. Thanks a lot.
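One possible path, building on the `from_pt=True` trick from the comments above: load the PyTorch checkpoint into the TF class and export a SavedModel that Triton can serve. This is a sketch, not a tested recipe; the serving signature is illustrative, and Longformer's windowed attention may require padding inputs to a multiple of the attention window before export works cleanly.

```python
# Sketch: PyTorch checkpoint -> TF model -> SavedModel for Triton.
import tensorflow as tf
from transformers import TFLongformerModel

model = TFLongformerModel.from_pretrained("xcjthu/Lawformer", from_pt=True)

@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
def serve(input_ids):
    # Expose only the last hidden state as the serving output.
    return {"last_hidden_state": model(input_ids).last_hidden_state}

tf.saved_model.save(model, "./lawformer_savedmodel", signatures={"serving_default": serve})
```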
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13291/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13290/comments
https://api.github.com/repos/huggingface/transformers/issues/13290/events
https://github.com/huggingface/transformers/pull/13290
980,828,246
MDExOlB1bGxSZXF1ZXN0NzIwOTkzNDA5
13,290
Add GPT2ForTokenClassification
{ "login": "tucan9389", "id": 37643248, "node_id": "MDQ6VXNlcjM3NjQzMjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/37643248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tucan9389", "html_url": "https://github.com/tucan9389", "followers_url": "https://api.github.com/users/tucan9389/followers", "following_url": "https://api.github.com/users/tucan9389/following{/other_user}", "gists_url": "https://api.github.com/users/tucan9389/gists{/gist_id}", "starred_url": "https://api.github.com/users/tucan9389/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tucan9389/subscriptions", "organizations_url": "https://api.github.com/users/tucan9389/orgs", "repos_url": "https://api.github.com/users/tucan9389/repos", "events_url": "https://api.github.com/users/tucan9389/events{/privacy}", "received_events_url": "https://api.github.com/users/tucan9389/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed all test failures and all passed.", "@sgugger Thanks for approving.\r\n\r\n@patrickvonplaten I just fixed some errors that occurred in `run_tests_torch` CI test and all tests were passed. \r\n\r\n" ]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> - Add `GPT2ForTokenClassification` class for GPT2 upstream and NER downstream task ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten, @LysandreJik @sgugger, @patil-suraj <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
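For context, here is a hypothetical usage sketch of the class this PR adds. The token-classification head scores every GPT-2 hidden state, so logits come back per token; `num_labels=9` is just an illustrative choice (e.g. the CoNLL-2003 NER tag set), and the classifier head is randomly initialized until fine-tuned.

```python
# Hypothetical usage sketch for GPT2ForTokenClassification (head untrained here).
import torch
from transformers import GPT2ForTokenClassification, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2ForTokenClassification.from_pretrained("gpt2", num_labels=9)

inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, num_labels)

predictions = logits.argmax(dim=-1)  # one label id per token
```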
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13290/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13290", "html_url": "https://github.com/huggingface/transformers/pull/13290", "diff_url": "https://github.com/huggingface/transformers/pull/13290.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13290.patch", "merged_at": 1630405144000 }
https://api.github.com/repos/huggingface/transformers/issues/13289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13289/comments
https://api.github.com/repos/huggingface/transformers/issues/13289/events
https://github.com/huggingface/transformers/pull/13289
980,823,220
MDExOlB1bGxSZXF1ZXN0NzIwOTg5MTgy
13,289
Fix minor typo in parallelism doc
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,630
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? This PR addresses a very minor typo in `parallelism.md`. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13289/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13289/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13289", "html_url": "https://github.com/huggingface/transformers/pull/13289", "diff_url": "https://github.com/huggingface/transformers/pull/13289.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13289.patch", "merged_at": 1630406945000 }
https://api.github.com/repos/huggingface/transformers/issues/13288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13288/comments
https://api.github.com/repos/huggingface/transformers/issues/13288/events
https://github.com/huggingface/transformers/issues/13288
980,703,907
MDU6SXNzdWU5ODA3MDM5MDc=
13,288
GPT2 for classification - Errors encountered while running run_glue.py and (possible) fixes
{ "login": "bpraveenk", "id": 14226904, "node_id": "MDQ6VXNlcjE0MjI2OTA0", "avatar_url": "https://avatars.githubusercontent.com/u/14226904?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bpraveenk", "html_url": "https://github.com/bpraveenk", "followers_url": "https://api.github.com/users/bpraveenk/followers", "following_url": "https://api.github.com/users/bpraveenk/following{/other_user}", "gists_url": "https://api.github.com/users/bpraveenk/gists{/gist_id}", "starred_url": "https://api.github.com/users/bpraveenk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bpraveenk/subscriptions", "organizations_url": "https://api.github.com/users/bpraveenk/orgs", "repos_url": "https://api.github.com/users/bpraveenk/repos", "events_url": "https://api.github.com/users/bpraveenk/events{/privacy}", "received_events_url": "https://api.github.com/users/bpraveenk/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false } ]
[ "Hey @bpraveenk,\r\n\r\ncould you attach a google colab to reproduce the error here? Pinging @Rocketknight1 for TF here.", "TF maintainer here! I reproduced the second error but not the first - but the second one seems like a much more serious problem anyway. The problem does not occur for me in any other models I tested except GPT2, but possibly there are other CLM models where this occurs. My suspicion is that the bug is in our `TFGPT2ForSequenceClassification` code, not in `run_glue.py`. Although you can write some code in `run_glue.py` to work around it, this might break other models that are currently working, like BERT. \r\n\r\nEither way, thank you for finding this! If you want to try to fix this yourself, please let me know, and ask any questions you like. Please make sure that any fixes you submit also work with MLM models like `bert-base-uncased` as well as `gpt2` though!", "Hey, actually, on further examination, I think the issue is that all CLM-trained models return outputs with past states. Therefore, all we need to do is check whether the output is an instance of `TFSequenceClassifierOutputWithPast`, in `run_glue.py`, and if so, to take `pooled_logits[:, -1, :]` as you suggested, and we shouldn't need to modify the GPT-2 code at all.", "Thank you @Rocketknight1 for your prompt response. I am glad I could help! \r\n\r\nAfter going over this tensorflow [issue](https://github.com/tensorflow/tensorflow/issues/33929#issuecomment-634181668), I guess the first error was probably resolved in later version of tensorflow-2.x. Could you share the version of tf that you are using to reproduce the error?\r\n\r\nRegarding error 2, adding `pooled_logits = pooled_logits[:, -1, :]` alone did not work for me. I had to remove the past states (see below) from the return object for training to proceed successfully. I recommend running the code in tensorflow-eager mode to see more descriptive error. The change I made is specific to GPT2 classification model and it didn't affect fine-tuning/training other models, e.g., bert-base-uncased, which I used to test the change. \r\n\r\n`return TFSequenceClassifierOutputWithPast(\r\n logits=pooled_logits,\r\n )\r\n`\r\n\r\nJust curious, would your proposed solution to check the instance of the output (e.g., `TFSequenceClassifierOutputWithPast`) in run_glue.py work with `model.fit`? Since the change is in run_glue.py, perhaps we should test the solution to make sure it works with other models too. \r\n\r\nOn a related note, what are your thoughts on using a flag to control the inclusion of past-states and loss in the GPT2Classification model forward-pass output?\r\n\r\nI am happy to fix the bug. Could you please point me to the document which includes steps to run relevant unit-tests, submit a patch and get it reviewed by the maintainers before its merged?\r\n\r\n\r\n\r\n", "Hi @bpraveenk! I was using TF 2.5, which might explain why I didn't see the first error. \r\n\r\nHowever, you're correct that the fix I suggested won't work with `model.fit`, so we would need some way to get CLM models to stop returning those past states. I'm going to check with the rest of the team about whether returning `TFSequenceClassifierOutputWithPast` is intended in this case, and what we can do about it. If we decide a flag like you suggested is appropriate, I'd be happy to work with you on implementing that.\r\n\r\nAlso, this isn't really relevant, but can I ask why you want to use a CLM model like GPT-2 for sequence classification instead of a more normal MLM model? 
It's definitely something we should be supporting, but it's still quite rare, so I'm curious to know what your use-case is there!", "Thank you @Rocketknight1 for your detailed response. I was curious to benchmark the performance of GPT2 against other LMs on classification tasks.", "That's interesting - my intuition is that it will do worse than MLMs, though it has the advantage of being quite a large model. That said, we're adding some equally-big MLM models to the hub, including a TF port of DeBERTaV2 in the next few days, which would be an interesting point of comparison. I'd love to see your benchmark results when they're ready!", "It's indeed exciting to hear that large MLM models will be made available! For comparing the performance of discriminative and generative models, I am planning to use the BART (encoder-decoder) model as well. Do I have to write custom code to fine-tune the BART model on GLUE tasks, or can I use run_glue.py?", "BART is a Seq2Seq model, and I'm not sure if we have a TF implementation of a sequence classifier head for it, unfortunately. You might have to build your own model, starting from TFBartModel and then adding a classifier head on top.", "It seems that passing the pad_token_id works too?\r\nI ran into the same problem today when I wanted to build a classifier head on top of TFGPT2Model. I tried to follow the source code in modeling_tf_gpt2.py to build a dense layer after the transformer (which is the gpt2 in this case), but I forgot this step: `in_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1)`, so when I used the fit function the bug occurred (shape mismatch). Thanks @bpraveenk @Rocketknight1, you did me a big favor in fixing the bug.\r\nNow I use dense()[:,-1,:] instead of dense() as the output, and it can fit now.\r\nBut I am still wondering why I get different outputs between TFAutoModelForSequenceClassification and my model (TFGPT2Model + dense(I copied the 'score' parameters from modeling_tf_gpt2.py)[:,-1,:]). Is it because of the weights, which I haven't trained? (But I guess the weights of the score layer haven't been trained in TFAutoModelForSequenceClassification either...)\r\n\r\n**my custom model:**\r\n`from tensorflow.keras.layers import Dense\r\nimport tensorflow as tf\r\ninput_ids = tf.keras.layers.Input(shape=(128,), name='input_ids', dtype='int32')\r\nattention_mask = tf.keras.layers.Input(shape=(128,), name='attention_mask', dtype='int32')\r\nembeddings = gpt2_hf(input_ids=input_ids,attention_mask=attention_mask)[0]\r\n\r\nscore = tf.keras.layers.Dense(112,kernel_initializer=tf.initializers.TruncatedNormal(config.initializer_range),name=\"score\",use_bias=False,)(embeddings)[:,-1,:]\r\n\r\n\r\nmodel = tf.keras.Model(inputs=[input_ids,attention_mask], outputs=score,name='GPT2_Multiclass')\r\n`\r\n\r\n\r\n**pad token id**\r\n` if self.config.pad_token_id is None:\r\n sequence_lengths = -1\r\n else:\r\n if inputs[\"input_ids\"] is not None:\r\n sequence_lengths = (\r\n tf.reduce_sum(\r\n tf.cast(\r\n tf.math.not_equal(inputs[\"input_ids\"], self.config.pad_token_id),\r\n dtype=inputs[\"input_ids\"].dtype,\r\n ),\r\n -1,\r\n keepdims=False,\r\n )\r\n - 1\r\n )\r\n in_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1)`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." 
]
1,630
1,634
1,634
NONE
null
Here is a description of a series of errors I encountered while fine-tuning the gpt2 pre-trained model using run_glue.py (which were also reported [here](https://github.com/huggingface/transformers/issues/13123)). I am also mentioning here the code fixes I had to make to fix these errors. If the custodians of the code base are happy with the changes, I will be glad to check the changes in, provided the instructions to submit the patch and get it reviewed are shared with me. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.4.0-1051-azure-x86_64-with-glibc2.10 - Python version: 3.8.1 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: Yes (1 gpu) - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten, @sgugger, @patil-suraj Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: (give details below) examples/tensorflow/text-classification/run_glue.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: GLUE ## To reproduce Steps to reproduce the behavior: (applicable to any GLUE classification task) 1. python run_glue.py --model_name_or_path gpt2 --task_name sst2 --do_train --do_eval --do_predict --output_dir ./output **Error 1** File "run_glue.py", line 567, in <module> main() File "run_glue.py", line 415, in main optimizer = tf.keras.optimizers.Adam( File "/anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/adam.py", line 115, in __init__ super(Adam, self).__init__(name, **kwargs) File "/anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py", line 335, in __init__ raise ValueError("Gradient clipping in the optimizer " ValueError: Gradient clipping in the optimizer (by setting clipnorm or clipvalue) is currently unsupported when using a distribution strategy. 
**Fix** Don't set the clipnorm parameter # clipnorm=training_args.max_grad_norm, **Error 2** ValueError: in user code: /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:806 train_function * return step_function(self, iterator) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:796 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/one_device_strategy.py:184 run return super(OneDeviceStrategy, self).run(fn, args, kwargs, options) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1211 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/distribute/one_device_strategy.py:367 _call_for_each_replica return fn(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:789 run_step ** outputs = model.train_step(data) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:748 train_step loss = self.compiled_loss( /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:204 __call__ loss_value = loss_obj(y_t, y_p, sample_weight=sw) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:149 __call__ losses = ag_call(y_true, y_pred) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:253 call ** return ag_fn(y_true, y_pred, **self._fn_kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/losses.py:1566 sparse_categorical_crossentropy return K.sparse_categorical_crossentropy( /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/keras/backend.py:4790 sparse_categorical_crossentropy return array_ops.reshape(res, output_shape[:-1]) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper return target(*args, **kwargs) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:195 reshape result = gen_array_ops.reshape(tensor, shape, name) /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/ops/gen_array_ops.py:8233 reshape _, _, _op, _outputs = _op_def_library._apply_op_helper( /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py:742 _apply_op_helper op = g._create_op_internal(op_type_name, inputs, dtypes=None, /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py:591 _create_op_internal return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:3477 _create_op_internal ret = Operation( 
/anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:1974 __init__ self._c_op = _create_c_op(self._graph, node_def, inputs, /anaconda/envs/azureml_py38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:1815 _create_c_op raise ValueError(str(e)) ValueError: Dimension size must be evenly divisible by 192 but is 8 for '{{node sparse_categorical_crossentropy_2/Reshape_2}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](sparse_categorical_crossentropy_2/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits, sparse_categorical_crossentropy_2/strided_slice_1)' with input shapes: [8], [4] and with input tensors computed as partial shapes: input[1] = [2,8,12,?]. **Fix** It looks like the call to **TFGPT2ForSequenceClassification** returns logits of shape (batch_size, sequence_length, num_labels), which is causing the above error. After pooled_logits are computed, add the following line to extract the logits from the last step of the sequence pooled_logits = pooled_logits[:, -1, :] and change return TFSequenceClassifierOutputWithPast( loss=loss, logits=pooled_logits, past_key_values=transformer_outputs.past_key_values, hidden_states=transformer_outputs.hidden_states, attentions=transformer_outputs.attentions, ) to return TFSequenceClassifierOutputWithPast( logits=pooled_logits, ) ## Expected behavior Successful completion of training and evaluation
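The pad-aware variant of this fix, quoted later in the comments, can be isolated into a small standalone sketch. All tensors below are toy values made up for illustration; only the gather pattern mirrors what the thread discusses:

```python
# Sketch: pool per-token logits down to the logits at the last non-pad token.
import tensorflow as tf

pad_token_id = 50256
input_ids = tf.constant([[10, 11, pad_token_id], [12, 13, 14]])  # (batch=2, seq=3)
logits = tf.random.normal((2, 3, 2))                             # (batch, seq, num_labels)

# Index of the last real (non-pad) token in each sequence.
sequence_lengths = tf.reduce_sum(
    tf.cast(tf.math.not_equal(input_ids, pad_token_id), tf.int32), axis=-1
) - 1

# Gather one (num_labels,) vector per batch element -> (batch, num_labels).
pooled_logits = tf.gather(logits, sequence_lengths, batch_dims=1, axis=1)
```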
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13288/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13287/comments
https://api.github.com/repos/huggingface/transformers/issues/13287/events
https://github.com/huggingface/transformers/issues/13287
980,520,618
MDU6SXNzdWU5ODA1MjA2MTg=
13,287
Pretraining T5-v1_1 on Flax
{ "login": "peregilk", "id": 9079808, "node_id": "MDQ6VXNlcjkwNzk4MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peregilk", "html_url": "https://github.com/peregilk", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "organizations_url": "https://api.github.com/users/peregilk/orgs", "repos_url": "https://api.github.com/users/peregilk/repos", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "received_events_url": "https://api.github.com/users/peregilk/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Think both using `dropout_rate: 0.1` and not using it is fine! It also depends on the dataset you are using. The Flax T5 demo trains on Norwegian which is much smaller than English so it makes more sense here to use dropout for regularization", "OK. Thanks. Im training a much larger model here, just using the Flax T5 Demo as a starting point. But if I understand you correctly, just simply changing this manually to 'dropout_rate: 0', would then be more in line with what Google describes in v1.1 - and then changing it back before finetuning. \r\n\r\nWhat about the change that is called _\"no parameter sharing between embedding and classifier layer\"_ that T5 v1.1 is using? I was unable to see how this is implemented in the example code. Is this a setting in config.json, or does it require changing the architecture.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,630
1,633
1,633
CONTRIBUTOR
null
@patrickvonplaten In the [Flax tutorial](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling) it is recommended to load the config from t5-v1_1-base when pretraining, using: `config = T5Config.from_pretrained("google/t5-v1_1-base", vocab_size=tokenizer.get_vocab_size())` This basically copies this [config](https://huggingface.co/google/t5-v1_1-base/blob/main/config.json). It seems like this config is tuned for finetuning, since it has the line '"dropout_rate": 0.1'. Google [states](https://huggingface.co/google/t5-v1_1-base) that _"Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning."_ Should this be modified for pretraining? Google also states that there is "no parameter sharing between embedding and classifier layer". How is this achieved?
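A minimal sketch of how both points raised above could be handled when building the pretraining config; the override values below are illustrative assumptions, not taken from the tutorial itself. Keyword arguments passed to `from_pretrained` override the corresponding values in the hosted config.json:

```python
# A minimal sketch, assuming the T5Config.from_pretrained API; the override
# values are illustrative, not the tutorial's own settings.
from transformers import T5Config

config = T5Config.from_pretrained(
    "google/t5-v1_1-base",
    dropout_rate=0.0,           # off for pretraining; re-enable (e.g. 0.1) for finetuning
    tie_word_embeddings=False,  # untied embedding/classifier layers, per the v1.1 recipe
)
print(config.dropout_rate, config.tie_word_embeddings)
```

If the hosted v1.1 config already sets `tie_word_embeddings` to `false`, the second override is redundant; passing it explicitly just documents the intent.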
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13287/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13286/comments
https://api.github.com/repos/huggingface/transformers/issues/13286/events
https://github.com/huggingface/transformers/pull/13286
980,381,207
MDExOlB1bGxSZXF1ZXN0NzIwNjMwNzUx
13,286
Moving `token-classification` pipeline to new testing.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13286/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13286", "html_url": "https://github.com/huggingface/transformers/pull/13286", "diff_url": "https://github.com/huggingface/transformers/pull/13286.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13286.patch", "merged_at": 1630056297000 }
https://api.github.com/repos/huggingface/transformers/issues/13285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13285/comments
https://api.github.com/repos/huggingface/transformers/issues/13285/events
https://github.com/huggingface/transformers/pull/13285
980,331,282
MDExOlB1bGxSZXF1ZXN0NzIwNTg5NzM2
13,285
Moving `text-generation` pipeline to new testing framework.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13285/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13285", "html_url": "https://github.com/huggingface/transformers/pull/13285", "diff_url": "https://github.com/huggingface/transformers/pull/13285.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13285.patch", "merged_at": 1629991804000 }
https://api.github.com/repos/huggingface/transformers/issues/13284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13284/comments
https://api.github.com/repos/huggingface/transformers/issues/13284/events
https://github.com/huggingface/transformers/issues/13284
980,308,731
MDU6SXNzdWU5ODAzMDg3MzE=
13,284
Question about bart-base model
{ "login": "HiXiaochen", "id": 31069872, "node_id": "MDQ6VXNlcjMxMDY5ODcy", "avatar_url": "https://avatars.githubusercontent.com/u/31069872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HiXiaochen", "html_url": "https://github.com/HiXiaochen", "followers_url": "https://api.github.com/users/HiXiaochen/followers", "following_url": "https://api.github.com/users/HiXiaochen/following{/other_user}", "gists_url": "https://api.github.com/users/HiXiaochen/gists{/gist_id}", "starred_url": "https://api.github.com/users/HiXiaochen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HiXiaochen/subscriptions", "organizations_url": "https://api.github.com/users/HiXiaochen/orgs", "repos_url": "https://api.github.com/users/HiXiaochen/repos", "events_url": "https://api.github.com/users/HiXiaochen/events{/privacy}", "received_events_url": "https://api.github.com/users/HiXiaochen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`BartModel` itself doesn't have a language modeling head, only `BartForConditionalGeneration` does. The latter adds a language modeling head on top of `BartModel`.", "> `BartModel` itself doesn't have a language modeling head, only `BartForConditionalGeneration` does. The latter adds a language modeling head on top of `BartModel`.\r\n\r\nThanks for your reply!!\r\nSorry that my description is not clear. The model I used is BartForConditionalGeneration. As I described above, \"model.encoder.embed_tokens.weight\", \"model.decoder.embed_tokens.weight\",\"lm_head.weight\" , as well as\"final_logits_bias\" appear in model.state_dict() but not in model.named_parameters(). I know that \"final_logits_bias\" is registered in model.buffers(), so it's normal. But aren't the other three supposed to be trainable in downstream missions(which means they should be in model.parameters())? ", "That's because input and output embeddings are tied (i.e. shared). This can be verified by printing the named parameters:\r\n\r\n```\r\nfrom transformers import BartForConditionalGeneration\r\n\r\nmodel = BartForConditionalGeneration.from_pretrained(\"facebook/bart-base\")\r\n\r\nfor name, param in model.named_parameters():\r\n print(name, param.shape)\r\n```\r\n\r\nwhich prints:\r\n```\r\nmodel.shared.weight torch.Size([50265, 768])\r\n(...)\r\n```\r\n\r\nYou can also verify that the weights of the embed_tokens and lm_head for example are exactly the same, like so:\r\n\r\n```\r\nimport torch\r\n\r\nassert torch.allclose(model.model.encoder.embed_tokens.weight, model.lm_head.weight)\r\n```", "> That's because input and output embeddings are tied (i.e. shared). This can be verified by printing the named parameters:\r\n> \r\n> ```\r\n> from transformers import BartForConditionalGeneration\r\n> \r\n> model = BartForConditionalGeneration.from_pretrained(\"facebook/bart-base\")\r\n> \r\n> for name, param in model.named_parameters():\r\n> print(name, param.shape)\r\n> ```\r\n> \r\n> which prints:\r\n> \r\n> ```\r\n> model.shared.weight torch.Size([50265, 768])\r\n> (...)\r\n> ```\r\n> \r\n> You can also verify that the weights of the embed_tokens and lm_head for example are exactly the same, like so:\r\n> \r\n> ```\r\n> import torch\r\n> \r\n> assert torch.allclose(model.model.encoder.embed_tokens.weight, model.lm_head.weight)\r\n> ```\r\n\r\nI got it. Thanks very much for your patient answer!!!" ]
1,629
1,630
1,630
NONE
null
When I use 'bart-base', I only want to update some of the parameters, so I call "model.named_parameters()" and set "requires_grad" to True for the parameters I want to update and to False for the others. However, when I call "model.state_dict()", I found that "model.encoder.embed_tokens.weight", "model.decoder.embed_tokens.weight", and "lm_head.weight" all appear in state_dict() but not in named_parameters(). After checking the initialization code of BartModel, I found that all these weights are initialized with nn.Embedding() or nn.Linear(): ''' self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx) self.encoder = BartEncoder(config, self.shared) self.decoder = BartDecoder(config, self.shared) ''' ''' self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False) ''' So all of them should be trainable in theory. Why don't they appear in model.named_parameters()?
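A minimal sketch of the freezing pattern described in the question, assuming `BartForConditionalGeneration`; the substring used to select the trainable subset is a placeholder:

```python
# A minimal sketch; "decoder" is a placeholder for whatever substring
# selects the parameters you actually want to keep trainable.
import torch
from transformers import BartForConditionalGeneration

model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

for name, param in model.named_parameters():
    param.requires_grad = "decoder" in name  # train only the selected subset

# Because input and output embeddings are tied, the weight appears once in
# named_parameters() (as model.shared.weight) but under several aliases in
# state_dict(); toggling requires_grad on the shared weight covers all of them.
assert torch.allclose(model.model.encoder.embed_tokens.weight, model.lm_head.weight)
```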
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13284/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13283/comments
https://api.github.com/repos/huggingface/transformers/issues/13283/events
https://github.com/huggingface/transformers/pull/13283
980,296,157
MDExOlB1bGxSZXF1ZXN0NzIwNTYxMTkw
13,283
Moving `text2text-generation` to new pipeline testing mechanism
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13283/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13283", "html_url": "https://github.com/huggingface/transformers/pull/13283", "diff_url": "https://github.com/huggingface/transformers/pull/13283.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13283.patch", "merged_at": 1629988019000 }
https://api.github.com/repos/huggingface/transformers/issues/13282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13282/comments
https://api.github.com/repos/huggingface/transformers/issues/13282/events
https://github.com/huggingface/transformers/pull/13282
980,284,500
MDExOlB1bGxSZXF1ZXN0NzIwNTUxNjY3
13,282
Hotfixing master tests.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13282/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13282", "html_url": "https://github.com/huggingface/transformers/pull/13282", "diff_url": "https://github.com/huggingface/transformers/pull/13282.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13282.patch", "merged_at": 1629986993000 }
https://api.github.com/repos/huggingface/transformers/issues/13281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13281/comments
https://api.github.com/repos/huggingface/transformers/issues/13281/events
https://github.com/huggingface/transformers/pull/13281
980,263,712
MDExOlB1bGxSZXF1ZXN0NzIwNTM0ODE1
13,281
Moving `table-question-answering` pipeline to new testing
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13281", "html_url": "https://github.com/huggingface/transformers/pull/13281", "diff_url": "https://github.com/huggingface/transformers/pull/13281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13281.patch", "merged_at": 1629986988000 }
https://api.github.com/repos/huggingface/transformers/issues/13280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13280/comments
https://api.github.com/repos/huggingface/transformers/issues/13280/events
https://github.com/huggingface/transformers/pull/13280
980,205,732
MDExOlB1bGxSZXF1ZXN0NzIwNDg2OTI0
13,280
Moving `table-question-answering` pipeline to new testing.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13280/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13280", "html_url": "https://github.com/huggingface/transformers/pull/13280", "diff_url": "https://github.com/huggingface/transformers/pull/13280.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13280.patch", "merged_at": 1629983398000 }
https://api.github.com/repos/huggingface/transformers/issues/13279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13279/comments
https://api.github.com/repos/huggingface/transformers/issues/13279/events
https://github.com/huggingface/transformers/pull/13279
980,108,641
MDExOlB1bGxSZXF1ZXN0NzIwNDA2Mjc0
13,279
Moving `summarization` pipeline to new testing format.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13279", "html_url": "https://github.com/huggingface/transformers/pull/13279", "diff_url": "https://github.com/huggingface/transformers/pull/13279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13279.patch", "merged_at": 1629982031000 }
https://api.github.com/repos/huggingface/transformers/issues/13278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13278/comments
https://api.github.com/repos/huggingface/transformers/issues/13278/events
https://github.com/huggingface/transformers/pull/13278
980,070,277
MDExOlB1bGxSZXF1ZXN0NzIwMzczNzEz
13,278
[Hotfix] Fixing the test (warnings were incorrect)
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13278", "html_url": "https://github.com/huggingface/transformers/pull/13278", "diff_url": "https://github.com/huggingface/transformers/pull/13278.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13278.patch", "merged_at": 1629972828000 }
https://api.github.com/repos/huggingface/transformers/issues/13277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13277/comments
https://api.github.com/repos/huggingface/transformers/issues/13277/events
https://github.com/huggingface/transformers/pull/13277
980,065,346
MDExOlB1bGxSZXF1ZXN0NzIwMzY5NTM5
13,277
Moving question_answering tests to the new testing scheme. Had to tweak some ModelTesterConfig a little for pipelines.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13277/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13277", "html_url": "https://github.com/huggingface/transformers/pull/13277", "diff_url": "https://github.com/huggingface/transformers/pull/13277.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13277.patch", "merged_at": 1629974275000 }
https://api.github.com/repos/huggingface/transformers/issues/13276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13276/comments
https://api.github.com/repos/huggingface/transformers/issues/13276/events
https://github.com/huggingface/transformers/pull/13276
980,046,415
MDExOlB1bGxSZXF1ZXN0NzIwMzUzNTgx
13,276
Announcing the default model used by the pipeline (with a link).
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/12845 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13276/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13276", "html_url": "https://github.com/huggingface/transformers/pull/13276", "diff_url": "https://github.com/huggingface/transformers/pull/13276.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13276.patch", "merged_at": 1630317871000 }
https://api.github.com/repos/huggingface/transformers/issues/13275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13275/comments
https://api.github.com/repos/huggingface/transformers/issues/13275/events
https://github.com/huggingface/transformers/pull/13275
980,044,034
MDExOlB1bGxSZXF1ZXN0NzIwMzUxNjEz
13,275
Fix BeitForMaskedImageModeling
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? Fixes #13235 I've also added an integration test for BeitForMaskedImageModeling, to make sure it returns the same logits as the original implementation on the same input image.
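A minimal sketch of exercising the fixed head; the checkpoint name and the random pixel values are illustrative assumptions, not taken from the PR:

```python
import torch
from transformers import BeitForMaskedImageModeling

# Assumed checkpoint: the BEiT pre-training weights (predicts visual tokens for masked patches).
model = BeitForMaskedImageModeling.from_pretrained("microsoft/beit-base-patch16-224-pt22k")

pixel_values = torch.randn(1, 3, 224, 224)            # stand-in for a real preprocessed image
bool_masked_pos = torch.zeros(1, 196, dtype=torch.bool)  # 196 patches for 224x224 / 16x16
bool_masked_pos[0, :75] = True                         # mask an arbitrary subset of patches

with torch.no_grad():
    outputs = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.logits.shape)  # scores over the visual-token vocabulary, per patch
```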
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13275/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13275", "html_url": "https://github.com/huggingface/transformers/pull/13275", "diff_url": "https://github.com/huggingface/transformers/pull/13275.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13275.patch", "merged_at": 1630069797000 }
https://api.github.com/repos/huggingface/transformers/issues/13274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13274/comments
https://api.github.com/repos/huggingface/transformers/issues/13274/events
https://github.com/huggingface/transformers/issues/13274
980,014,268
MDU6SXNzdWU5ODAwMTQyNjg=
13,274
`pipeline` backed with ONNX Runtime and quantization for faster inference
{ "login": "xegulon", "id": 74178038, "node_id": "MDQ6VXNlcjc0MTc4MDM4", "avatar_url": "https://avatars.githubusercontent.com/u/74178038?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xegulon", "html_url": "https://github.com/xegulon", "followers_url": "https://api.github.com/users/xegulon/followers", "following_url": "https://api.github.com/users/xegulon/following{/other_user}", "gists_url": "https://api.github.com/users/xegulon/gists{/gist_id}", "starred_url": "https://api.github.com/users/xegulon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xegulon/subscriptions", "organizations_url": "https://api.github.com/users/xegulon/orgs", "repos_url": "https://api.github.com/users/xegulon/repos", "events_url": "https://api.github.com/users/xegulon/events{/privacy}", "received_events_url": "https://api.github.com/users/xegulon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sgugger @LysandreJik find this interesting?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> It would be cool if, when loading a pipeline, the model were converted to an ONNX `InferenceSession` during loading, optionally with quantization, as these features provide significant speedups. Instead of live conversion, the conversion could also have been done beforehand, i.e. we could load the `.onnx` file from the Hub (among other model assets). Either way, the main goal is to use the speedups provided by ONNX Runtime in the pipeline object. One would instantiate the `pipeline` object this way: ```python nlp = pipeline('text-classification', onnx_runtime=True, quantization=True) ``` Reference: https://onnxruntime.ai/docs/tutorials/inferencing/huggingface.html ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> The speed and memory (with quantization) gains. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> I could try :)
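For context, a minimal sketch of the kind of code such a pipeline could wrap internally, assuming the model has already been exported to a local `model.onnx` (the file name and checkpoint are illustrative assumptions):

```python
import numpy as np
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

# Assumed artifacts: a tokenizer checkpoint plus an already-exported ONNX graph on disk.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
session = InferenceSession("model.onnx")  # hypothetical file produced by a prior export step

encoded = tokenizer("This library is great!", return_tensors="np")
# Only feed the inputs the exported graph actually declares.
graph_inputs = {node.name for node in session.get_inputs()}
logits = session.run(None, {k: v for k, v in encoded.items() if k in graph_inputs})[0]
print(int(np.argmax(logits, axis=-1)[0]))  # predicted class id
```

The feature request amounts to hiding this export-and-run plumbing behind the usual `pipeline(...)` call.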
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13274/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 4, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13274/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13273/comments
https://api.github.com/repos/huggingface/transformers/issues/13273/events
https://github.com/huggingface/transformers/issues/13273
980,004,673
MDU6SXNzdWU5ODAwMDQ2NzM=
13,273
Docs: TrainingArguments call incorrect
{ "login": "UrosOgrizovic", "id": 25843402, "node_id": "MDQ6VXNlcjI1ODQzNDAy", "avatar_url": "https://avatars.githubusercontent.com/u/25843402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/UrosOgrizovic", "html_url": "https://github.com/UrosOgrizovic", "followers_url": "https://api.github.com/users/UrosOgrizovic/followers", "following_url": "https://api.github.com/users/UrosOgrizovic/following{/other_user}", "gists_url": "https://api.github.com/users/UrosOgrizovic/gists{/gist_id}", "starred_url": "https://api.github.com/users/UrosOgrizovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/UrosOgrizovic/subscriptions", "organizations_url": "https://api.github.com/users/UrosOgrizovic/orgs", "repos_url": "https://api.github.com/users/UrosOgrizovic/repos", "events_url": "https://api.github.com/users/UrosOgrizovic/events{/privacy}", "received_events_url": "https://api.github.com/users/UrosOgrizovic/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false } ]
[ "No, this example is correct, as both syntax work.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
On [this](https://huggingface.co/transformers/training.html) page, there's the following code: ``` from transformers import TrainingArguments training_args = TrainingArguments("test_trainer", evaluation_strategy="epoch") ``` However, `evaluation_strategy` needs to be an `IntervalStrategy` instead of a `string`. The way `TrainingArguments` actually needs to be called is: ``` training_args = TrainingArguments("test_trainer", evaluation_strategy=IntervalStrategy.EPOCH) ```
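As the reply recorded above notes, both spellings work; a minimal sketch of the equivalence, assuming a `transformers` version in which `TrainingArguments` coerces the string into the enum during post-init:

```python
from transformers import IntervalStrategy, TrainingArguments

args_from_str = TrainingArguments("test_trainer", evaluation_strategy="epoch")
args_from_enum = TrainingArguments("test_trainer", evaluation_strategy=IntervalStrategy.EPOCH)

# Both end up holding the same enum value.
assert args_from_str.evaluation_strategy == args_from_enum.evaluation_strategy == IntervalStrategy.EPOCH
```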
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13272/comments
https://api.github.com/repos/huggingface/transformers/issues/13272/events
https://github.com/huggingface/transformers/pull/13272
979,993,566
MDExOlB1bGxSZXF1ZXN0NzIwMzA5NzA2
13,272
Move `image-classification` pipeline to new testing
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? - Enforce `test_small_models_{tf,pt}` methods to exist (enforce checking actual values in small tests) - Add support for non RGB image for the pipeline. - Some tests had to be modified (feature-extraction does not support a bunch of multi modal models, fill-mask can't work on Wav2vec2ForMaskedLM) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
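A minimal sketch of the non-RGB support this PR adds; the checkpoint and file name are assumptions, and the point is that the pipeline is expected to convert e.g. RGBA or grayscale inputs itself:

```python
from PIL import Image
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

image = Image.open("photo.png")  # hypothetical file; may be RGBA or grayscale
predictions = classifier(image)  # no manual image.convert("RGB") should be needed after this change
print(predictions[0])            # e.g. {"label": ..., "score": ...}
```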
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13272/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13272", "html_url": "https://github.com/huggingface/transformers/pull/13272", "diff_url": "https://github.com/huggingface/transformers/pull/13272.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13272.patch", "merged_at": 1629971569000 }
https://api.github.com/repos/huggingface/transformers/issues/13271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13271/comments
https://api.github.com/repos/huggingface/transformers/issues/13271/events
https://github.com/huggingface/transformers/issues/13271
979,739,180
MDU6SXNzdWU5Nzk3MzkxODA=
13,271
Global transformers package imports, render local changes to the transformer src code useless for example scripts
{ "login": "aiswaryasankar", "id": 7874177, "node_id": "MDQ6VXNlcjc4NzQxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/7874177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aiswaryasankar", "html_url": "https://github.com/aiswaryasankar", "followers_url": "https://api.github.com/users/aiswaryasankar/followers", "following_url": "https://api.github.com/users/aiswaryasankar/following{/other_user}", "gists_url": "https://api.github.com/users/aiswaryasankar/gists{/gist_id}", "starred_url": "https://api.github.com/users/aiswaryasankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aiswaryasankar/subscriptions", "organizations_url": "https://api.github.com/users/aiswaryasankar/orgs", "repos_url": "https://api.github.com/users/aiswaryasankar/repos", "events_url": "https://api.github.com/users/aiswaryasankar/events{/privacy}", "received_events_url": "https://api.github.com/users/aiswaryasankar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! Maybe I misunderstood your issue, but if you clone the repository and install it as an editable package, you should be able to run the `run_summarization` script with the changes you have made.\r\n\r\n```\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\n\r\npip install -e .\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
# 🚀 Feature request In the summarization.py example file, it imports the global transformers package, not the src/transformers package, making local development challenging. I'd like to make changes to the trainer.py file within src/transformers/trainer.py and have those changes reflected when I run the examples/pytorch/summarization/run_summarization.py script. However, due to the current package structure of the repository, the transformers folder isn't a package on its own, and thus setting up a system of relative imports to access this folder from the examples directory means jumping through a lot of hoops to avoid errors such as `attempted relative import with no known parent package`. The repo structure also makes it hard to move the example file into the transformers package, since none of the higher-level folders are packages themselves, causing the same issue. ## Motivation Not having this entire repo structured as packages makes it really challenging and not straightforward for someone who wants to experiment with making changes to the models and training exposed through this library. The provided scripts to run various tasks and train are helpful; however, since they use global imports, they hardly work well with the rest of the repo and might as well be standalone repositories by themselves. ## Your contribution @patil-suraj has mentioned he would be a point of contact for the summarization examples directly. I would first ask if he has suggestions as to how this can best be addressed (other than asking everyone facing this to restructure the repo into packages themselves); otherwise I am looking into getting that restructure done, since it is blocking in terms of making any updates and seeing them reflected when running these training scripts. Support here would make a huge difference to the accessibility / extensibility of these provided example scripts and notebooks.
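A quick sanity check for the editable-install route suggested in the reply recorded above: after `pip install -e .` inside a clone, imports should resolve to the local sources, so local edits are picked up by the example scripts.

```python
import transformers

# With an editable install, this should point into your clone's src/transformers/,
# meaning edits to e.g. src/transformers/trainer.py take effect without reinstalling.
print(transformers.__file__)
print(transformers.__version__)
```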
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13271/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13270/comments
https://api.github.com/repos/huggingface/transformers/issues/13270/events
https://github.com/huggingface/transformers/issues/13270
979,705,277
MDU6SXNzdWU5Nzk3MDUyNzc=
13,270
Commit v4.9.2 release appears as v4.5.1 in "transformers-cli env"
{ "login": "merleyc", "id": 10016650, "node_id": "MDQ6VXNlcjEwMDE2NjUw", "avatar_url": "https://avatars.githubusercontent.com/u/10016650?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merleyc", "html_url": "https://github.com/merleyc", "followers_url": "https://api.github.com/users/merleyc/followers", "following_url": "https://api.github.com/users/merleyc/following{/other_user}", "gists_url": "https://api.github.com/users/merleyc/gists{/gist_id}", "starred_url": "https://api.github.com/users/merleyc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merleyc/subscriptions", "organizations_url": "https://api.github.com/users/merleyc/orgs", "repos_url": "https://api.github.com/users/merleyc/repos", "events_url": "https://api.github.com/users/merleyc/events{/privacy}", "received_events_url": "https://api.github.com/users/merleyc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't see you activating your conda environment anywhere, can that be the source of your issue? Is there a reason you're using `pip install` instead of `conda install` when you're using a conda environment?\r\n\r\nWhat happens when you do `which transformers-cli`?\r\n\r\nThe following seems to work flawlessly:\r\n\r\n```\r\ngit clone --recursive https://github.com/merleyc/transformers.git \r\ncd transformers\r\n\r\nconda create -n hf-dev-py380 python=3.8.0\r\nconda activate hf-dev-py380 \r\n\r\ngit checkout v4.9.2-release\r\npip install -e .\r\ntransformers-cli env\r\n```\r\n```\r\n- `transformers` version: 4.9.2\r\n- Platform: Linux-5.13.12-arch1-1-x86_64-with-glibc2.10\r\n- Python version: 3.8.0\r\n- PyTorch version (GPU?): not installed (NA)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```\r\n", "Hi @LysandreJik ,\r\nYes, you're right. Thanks a lot!\r\nI was able to successfully reproduce your steps and get the correct info from _transformers-cli env_ command.\r\nHowever when I follow the same steps but just replacing \"_pip install -e ._\" by \"_pip install -e \".[dev]\", the _transformers-cli env_ returns the error: \"_ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found_\"\r\n\r\nMy steps:\r\n```\r\ngit clone --recursive https://github.com/merleyc/transformers.git\r\ncd transformers/\r\n\r\nconda create -n hf-dev-py380 python=3.8.0\r\nconda activate hf-dev-py380\r\n\r\ngit checkout v4.9.2-release\r\npip install -e \".[dev]\"\r\ntransformers-cli env -> It throws me error #1 below.\r\n\r\nconda install -c conda-forge librosa -> as mentioned [here](https://github.com/readthedocs/readthedocs.org/issues/6086)\r\ntransformers-cli env -> It throws me error #2 below.\r\nconda install libgcc -> as mentioned [here](https://github.com/BVLC/caffe/issues/4953)\r\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/miniconda3/lib/\r\ntransformers-cli env -> And I get my expected result!\r\n\r\n```\r\nSo it seems the command _pip install -e \".[dev]\"_ has the dependences mentioned above. \r\n\r\nWhat would be the implication of using _pip install -e \".[dev]\"_ instead of _pip install -e ._ considering that my goal is change a Bert model (so I am a developer)? I do know that by \"Providing the --dev argument will put the dependency in a special [dev-packages] location in the Pipfile. 
These development packages only get installed if you specify the --dev argument with pipenv install.\" [source](https://realpython.com/pipenv-guide/)\r\n\r\nThanks a lot!!\r\n\r\n**Error #1:**\r\n```\r\n$ transformers-cli env\r\n2021-08-27 03:34:25.566885: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2021-08-27 03:34:25.566952: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nTraceback (most recent call last):\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/bin/transformers-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')())\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/bin/transformers-cli\", line 25, in importlib_load_entry_point\r\n return next(matches).load()\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/metadata.py\", line 75, in load\r\n module = import_module(match.group('module'))\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/myotherpath/transformers/src/transformers/commands/transformers_cli.py\", line 23, in <module>\r\n from .run import RunCommand\r\n File \"/myotherpath/transformers/src/transformers/commands/run.py\", line 17, in <module>\r\n from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline\r\n File \"/myotherpath/transformers/src/transformers/pipelines/__init__.py\", line 26, in <module>\r\n from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor\r\n File \"/myotherpath/transformers/src/transformers/models/auto/feature_extraction_auto.py\", line 20, in <module>\r\n from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor\r\n File \"<frozen importlib._bootstrap>\", line 1039, in _handle_fromlist\r\n File \"/myotherpath/transformers/src/transformers/file_utils.py\", line 1985, in __getattr__\r\n value = getattr(module, name)\r\n File \"/myotherpath/transformers/src/transformers/file_utils.py\", line 1984, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/myotherpath/transformers/src/transformers/file_utils.py\", line 1993, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"/myotherpath/transformers/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py\", line 23, in <module>\r\n import torchaudio.compliance.kaldi as ta_kaldi\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/__init__.py\", line 
13, in <module>\r\n from torchaudio.backend import (\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/backend/__init__.py\", line 2, in <module>\r\n from . import utils\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/backend/utils.py\", line 7, in <module>\r\n from . import (\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py\", line 11, in <module>\r\n import soundfile\r\n File \"/home/mypath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/soundfile.py\", line 142, in <module>\r\n raise OSError('sndfile library not found')\r\nOSError: sndfile library not found\r\n```\r\n\r\n**Error #2:**\r\n```\r\n2021-08-27 03:51:25.067374: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2021-08-27 03:51:25.067425: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\nTraceback (most recent call last):\r\n File \"/home/myotherpath/miniconda3/envs/hf-dev-py380/bin/transformers-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')())\r\n File \"/home/myotherpath/miniconda3/envs/hf-dev-py380/bin/transformers-cli\", line 25, in importlib_load_entry_point\r\n return next(matches).load()\r\n File \"/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/metadata.py\", line 75, in load\r\n module = import_module(match.group('module'))\r\n File \"/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/mypath/transformers/src/transformers/commands/transformers_cli.py\", line 23, in <module>\r\n from .run import RunCommand\r\n File \"/mypath/transformers/src/transformers/commands/run.py\", line 17, in <module>\r\n from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline\r\n File \"/mypath/transformers/src/transformers/pipelines/__init__.py\", line 26, in <module>\r\n from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor\r\n File \"/mypath/transformers/src/transformers/models/auto/feature_extraction_auto.py\", line 20, in <module>\r\n from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor\r\n File \"/mypath/transformers/src/transformers/file_utils.py\", line 1985, in __getattr__\r\n value = getattr(module, name)\r\n File \"/mypath/transformers/src/transformers/file_utils.py\", line 1984, in __getattr__\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/mypath/transformers/src/transformers/file_utils.py\", line 1993, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File 
\"/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"/mypath/transformers/src/transformers/models/deit/feature_extraction_deit.py\", line 20, in <module>\r\n from PIL import Image\r\n File \"/home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/PIL/Image.py\", line 114, in <module>\r\n from . import _imaging as core\r\nImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /home/myotherpath/miniconda3/envs/hf-dev-py380/lib/python3.8/site-packages/PIL/../../.././libLerc.so)\r\n```\r\n\r\n", "`pip install -e .[dev]` means you'll be installing all the dependencies that we have put down for the `dev` option; which is the option we use to imply that a user is working on the `transformers` package directly. It adds all possible dependencies that would otherwise be blocking: TensorFlow, PyTorch, Flax, speech, vision, ...\r\n\r\nIt is not necessary to install this to use the `transformers` library, only to work on it.\r\n\r\nSee the `setup.py` for more information: https://github.com/huggingface/transformers/blob/master/setup.py#L301-L308", "Thanks for the explanation, @LysandreJik !\r\n\r\nI was only able to successfully run `pip install -e .[dev]` after I installed extra packages (librosa and libgcc ) and set up the LD_LIBRARY_PATH. Do you also need to install these packages and set up this env variable or is it something only in my environment? If you also have this dependency, I should probably open a new issue about this it.\r\n\r\nThanks!\r\n", "Those are requirements that PIL and `soundfile` have on system-wide dependencies. Unfortunately, these are not dependencies that we can control from within the `setup.py` - but we should at least make that clearer, for example on this page: https://huggingface.co/transformers/installation.html#installation-with-pip\r\n\r\nWould you like to try your hand at modifying the docs to mention what might need to be done in case of a `[dev]` installation?", "I will be happy to (will work on that next week :) )", "That sounds great @merleyc, looking forward to it!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
## Environment info The command **_transformers-cli env_** returns: _2021-08-26 07:30:45.855430: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-08-26 07:30:45.855484: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. WARNING:tensorflow:From /home/mypath/miniconda3/lib/python3.7/site-packages/transformers/commands/env.py:50: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2021-08-26 07:30:47.909798: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-08-26 07:30:47.920073: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2021-08-26 07:30:47.920123: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303) 2021-08-26 07:30:47.920158: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (sr507): /proc/driver/nvidia/version does not exist_ - `transformers` version: 4.5.1 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.6.1810-Core - Python version: 3.7.7 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Documentation: @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. git clone --recursive https://github.com/merleyc/transformers.git 2. conda create -n hf-dev-py380 python=3.8.0 3. ipython kernel install --user --name=hf-dev-py380 4. cd transformers/ 5. git checkout v4.9.2-release 6. git log -> It appears the correct release version: _commit 41981a25cdd028007a7491d68935c8d93f9e8b47 (HEAD -> exploration, tag: v4.9.2, origin/v4.9.2-release, v4.9.2-release) Author: Lysandre <[email protected]> Date: Mon Aug 9 16:01:36 2021 +0200 Patch release: v4.9.2_ 7. git checkout -b exploration 8. pip uninstall transformers 9. pip install -e ".[dev]" 10. git clone --recursive https://github.com/huggingface/datasets 11. cd datasets/ 12. pip install -e ".[dev]" 13. export http_proxy=http://xxx:yyy; export https_proxy=http://xxx:yyy; export ftp_proxy=xxx:yyy 14. python -m pytest -n 52 --dist=loadfile -s -v ./tests/ > ~/results_cli4.txt 15. conda install cloudpickle -> because it complained: "distributed 1.26.0 requires cloudpickle>=0.2.2, which is not installed." 16. pip uninstall huggingface-hub 17. pip install huggingface-hub==0.0.12 -> because it complained "transformers 4.9.2 requires huggingface-hub==0.0.12, but you have huggingface-hub 0.0.15 which is incompatible." 18. pip uninstall pycodestyle 19. pip install pycodestyle==2.7.0 -> because it complained: "autopep8 1.5.6 requires pycodestyle>=2.7.0, but you have pycodestyle 2.5.0 which is incompatible." 20. pip install -e ".[dev]" -> It complained: "flake8 3.7.9 requires pycodestyle<2.6.0,>=2.5.0, but you have pycodestyle 2.7.0 which is incompatible." and showed the below result: "_9 failed, 3240 passed, 2803 skipped, 172 warnings in 2630.90s (0:43:50)_" There are several issues I observed while following these steps, like unmatched python versions, pip package dependency incompatibilities, and failing tests, but let's, for now, focus on the transformer version please. I will be happy to open other issues if needed. ## Expected behavior The result of **_transformers-cli env_** should be the version I cloned from the repo, which is version: 4.9.2
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13270/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13269/comments
https://api.github.com/repos/huggingface/transformers/issues/13269/events
https://github.com/huggingface/transformers/pull/13269
979,550,115
MDExOlB1bGxSZXF1ZXN0NzE5OTQ2MDE2
13,269
Add PLBart
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "The pre-trained model conversion is working correctly. Have verified with encoder/decoder outputs and embedding outputs for encoder. \r\nThe states match except for pad tokens, for which the issue is mentioned in #13481.\r\nThe tokenizer for the pre-trained checkpoint contains 50005 tokens, with the special ones:\r\n```\r\n0 <s>\r\n1 <pad>\r\n2 </s>\r\n3 <unk>\r\n50001 [java]\r\n50002 [python]\r\n50003 [en_XX]\r\n50004 <mask>\r\n```", "@gchhablani - can I help you somehow to move forward with this PR? ", "Any update on this PR? @gchhablani ", "@patil-suraj\r\nUpdates:\r\n- Rebased the branch to master.\r\n- Updated the documentation. Should I move the generation example and other model/tokenizer docs from the model/tokenization files to the `mdx`?\r\n- Removed all the extra changes due to merging issues.\r\n\r\nRegarding the tokenizes, I am not sure if they can be moved to a single tokenizer considering [this comment](https://github.com/huggingface/transformers/pull/13269#discussion_r790167690). Please let me know if there's a way to do that.", "@patil-suraj @gchhablani - let me know when the PR is ready for a final review :-)", "@patrickvonplaten Would be awesome if you could take a final look now :) ", "PR is good for merge to me! Failing test is a flaky one from the Hub.", "Thanks a lot @gchhablani for all your work on this! Great job!", "Hi, @gchhablani @patil-suraj @sgugger \r\n\r\nLooks like `README.md` (and therefore `doc/source/index.mdx`) is not updated for this new added model.\r\nIdeally, it is usually updated when a new model is added, right?\r\n ", "Thanks for flagging @ydshieh , we indeed have forgotten to ask for it during our reviews! @gchhablani would you like to add it in a follow-up PR?", "@ydshieh Thanks a lot for informing! On it.\r\n\r\n@sgugger Yes, I'll quickly fix it." ]
1,629
1,645
1,645
CONTRIBUTOR
null
# What does this PR do? This PR adds PLBART. - Paper: https://arxiv.org/abs/2103.06333 - Code and Checkpoints: https://github.com/wasiahmad/PLBART - Authors: @wasiahmad @kaiweichang Motivation: This encoder-decoder BART-like model allows for downstream fine-tuning on code summarization, generation, classification, etc., and is useful to the community. The fine-tuned checkpoints are also available and can be used directly. EDIT 1: ------ I am trying to make the embeddings the same for the original and my implementation. I have created an issue [here](#13481), as I'm stuck with `padding_idx` outputs not matching. ### Pending Tasks - [ ] Fix FastTokenizer - [ ] Tokenizer - ~MultiTokenizer~ - [x] Fix tests - [x] Modeling - [x] Tokenizer - ~MultiTokenizer~ - [ ] Verify behavior for all checkpoints - [x] Update Docs - [ ] Update Model Cards - [x] Add remaining checkpoints - [x] `plbart-large` - [x] `plbart-base-csnet` checkpoints
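Once merged, usage would presumably look roughly as follows; the `uclanlp/plbart-base` checkpoint name follows the authors' repository and is an assumption here, as are the input snippet and generation settings:

```python
from transformers import PLBartForConditionalGeneration, PLBartTokenizer

# Assumed checkpoint name; see the PLBART repository for the released weights.
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base")
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")

inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
generated = model.generate(**inputs, max_length=32)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```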
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13269/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13269/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13269", "html_url": "https://github.com/huggingface/transformers/pull/13269", "diff_url": "https://github.com/huggingface/transformers/pull/13269.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13269.patch", "merged_at": 1645190229000 }
https://api.github.com/repos/huggingface/transformers/issues/13268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13268/comments
https://api.github.com/repos/huggingface/transformers/issues/13268/events
https://github.com/huggingface/transformers/issues/13268
979,504,369
MDU6SXNzdWU5Nzk1MDQzNjk=
13,268
additional global attended token in bigbird-roberta
{ "login": "calderma", "id": 18285670, "node_id": "MDQ6VXNlcjE4Mjg1Njcw", "avatar_url": "https://avatars.githubusercontent.com/u/18285670?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calderma", "html_url": "https://github.com/calderma", "followers_url": "https://api.github.com/users/calderma/followers", "following_url": "https://api.github.com/users/calderma/following{/other_user}", "gists_url": "https://api.github.com/users/calderma/gists{/gist_id}", "starred_url": "https://api.github.com/users/calderma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/calderma/subscriptions", "organizations_url": "https://api.github.com/users/calderma/orgs", "repos_url": "https://api.github.com/users/calderma/repos", "events_url": "https://api.github.com/users/calderma/events{/privacy}", "received_events_url": "https://api.github.com/users/calderma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @vasudevgupta7 ", "Hey @calderma, sorry for late reply. I somehow missed your comment.\r\n\r\nYes, only ITC code is supported for now and hence only 1st & last **blocks** (i.e collection of tokens) are global. So, you can control which tokens should be global by playing around with block_size a bit, though this would increase compute a bit as random & sliding tokens are also dependent on it.\r\n\r\nBut in your case, as you want to make 2nd token as global, it will be a global token if block size is > 2 (Note: default block size is 64).", "Ah ok got it. I thought it was just the first token, not the first block. My mistake. Thanks!" ]
1,629
1,631
1,631
NONE
null
Hello, based on my understanding of the bigbird style attention mechanism, the only tokens in the ITC construction supported in HuggingFace that are global are the first and last tokens. Is there an easy way to add an additional global token position that should always be attended to? For example, if I want to make sure the second token in the sequence is always globally attended to, where in the modeling_bigbird.py (or elsewhere) do I need to add that, or is there another easier way to pass additional globally attended token positions? Any help would be greatly appreciated. Thank you.
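Following the reply recorded above (in ITC mode, global attention covers the first and last blocks, not just single tokens), a minimal sketch of controlling which positions are global via `block_size`; the values are illustrative:

```python
from transformers import BigBirdConfig, BigBirdModel

# With block-sparse (ITC) attention, every token in the first and last blocks is
# global, so any position < block_size (e.g. the second token) is globally attended.
config = BigBirdConfig(attention_type="block_sparse", block_size=64, num_random_blocks=3)
model = BigBirdModel(config)  # randomly initialized; load a checkpoint for real use
```

Note that increasing `block_size` also widens the sliding and random attention, so it raises compute beyond just adding global tokens.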
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13268/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13267/comments
https://api.github.com/repos/huggingface/transformers/issues/13267/events
https://github.com/huggingface/transformers/pull/13267
979,379,874
MDExOlB1bGxSZXF1ZXN0NzE5ODAyNDk5
13,267
Better notification service
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
MEMBER
null
Better notification service that splits the scheduled tests in a separate channel for a less spammy experience.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13267", "html_url": "https://github.com/huggingface/transformers/pull/13267", "diff_url": "https://github.com/huggingface/transformers/pull/13267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13267.patch", "merged_at": 1629908084000 }
https://api.github.com/repos/huggingface/transformers/issues/13266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13266/comments
https://api.github.com/repos/huggingface/transformers/issues/13266/events
https://github.com/huggingface/transformers/pull/13266
979,356,076
MDExOlB1bGxSZXF1ZXN0NzE5NzgxNDg2
13,266
Add error message concerning revision
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
COLLABORATOR
null
# What does this PR do? Adds one more item to the error message when a model, config, or tokenizer cannot be loaded. If (and only if) the user provided a revision, the error message will also tell them that they need to check on the model page whether the revision number actually exists. This does _not_ close issue https://github.com/huggingface/transformers/issues/13264 as using the short commit hash does not work yet. This PR only slightly improves user-friendliness in terms of possible user errors. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/13264#issuecomment-905614404 ## Who can review? @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13266", "html_url": "https://github.com/huggingface/transformers/pull/13266", "diff_url": "https://github.com/huggingface/transformers/pull/13266.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13266.patch", "merged_at": 1629966778000 }
https://api.github.com/repos/huggingface/transformers/issues/13265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13265/comments
https://api.github.com/repos/huggingface/transformers/issues/13265/events
https://github.com/huggingface/transformers/pull/13265
979,225,951
MDExOlB1bGxSZXF1ZXN0NzE5NjY1NDgy
13,265
Add DINO conversion script
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? I've uploaded the Vision Transformers trained using the self-supervised method called [DINO](https://github.com/facebookresearch/dino) to the hub: https://huggingface.co/models?other=dino This PR includes the conversion script that was used. I've also added a reference to DeiT, BEiT and the DINO checkpoints to the docs of ViT, to give these models some more love.
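A minimal sketch of loading one of the uploaded checkpoints with the plain ViT classes; `facebook/dino-vits8` is one of the names referenced above:

```python
from transformers import ViTFeatureExtractor, ViTModel

feature_extractor = ViTFeatureExtractor.from_pretrained("facebook/dino-vits8")
# DINO is self-supervised, so the checkpoints ship without a classification head;
# the bare encoder (ViTModel) is the natural class for extracting features.
model = ViTModel.from_pretrained("facebook/dino-vits8")
```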
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13265/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13265", "html_url": "https://github.com/huggingface/transformers/pull/13265", "diff_url": "https://github.com/huggingface/transformers/pull/13265.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13265.patch", "merged_at": 1629991521000 }
https://api.github.com/repos/huggingface/transformers/issues/13264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13264/comments
https://api.github.com/repos/huggingface/transformers/issues/13264/events
https://github.com/huggingface/transformers/issues/13264
979,205,937
MDU6SXNzdWU5NzkyMDU5Mzc=
13,264
Revisions not working as expected
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just an update (I've already posted it on Twitter):\r\n\r\nGit tags or branches should work, such as:\r\n\r\n```python\r\nfrom transformers import AutoModel\r\nmodel = AutoModel.from_pretrained(\"dbmdz/german-gpt2\", revision=\"v1.0\")\r\n```\r\n", "Hi @BramVanroy, thanks for opening an issue! This is also tracked in https://github.com/huggingface/huggingface_hub/issues/197 cc @julien-c @Pierrci \r\n\r\nThere's definitely an improvement to be done regarding the mention of the revision in the error message, feel free to give it a try if you have the time to, otherwise we'll take care of it ASAP. ", "@LysandreJik Great. I wasn't sure what the underlying issue was: `transformers` not correctly loading the short hash, or the web interface not displaying the full hash. Feel free to close this issue if you think that is better. I made a tiny PR for an improved error message.", "> So the bug is either\r\n> \r\n> * the model is not capable of looking up a revision based on the first seven characters of a hash (not sure if it should/could),\r\n> * or the model hub website does not provide enough information to make this intuitive for users \r\n\r\nIt's a partial mix of both: the model hub website does not currently have the feature to lookup a revision based on the first seven characters of a hash\r\n\r\n(loading a commit from the first few hash characters is a sugar-y feature of git, not a core feature)\r\n\r\n> A **first improvement** would be to add to this error message something about revisions, because obviously `GroNLP/bert-base-dutch-cased` is a correct name.\r\n\r\nYes definitely, as @LysandreJik said\r\n\r\nEDIT: and your PR looks a great improvement", "@julien-c I was reading through the docs on [short commit hashes](https://git-scm.com/book/en/v2/Git-Tools-Revision-Selection#_short_sha_1) (truncated) and this seems important: \r\n\r\n> Git is smart enough to figure out what commit you’re referring to if you provide the first few characters of the SHA-1 hash, as long as that partial hash is at least four characters long **and unambiguous; that is, no other object in the object database can have a hash that begins with the same prefix.**\r\n\r\nI don't know how (and if you) you would implement loading revisions from short hashes, but checking for ambiguity seems an important point to consider - even though the chances are quite small that the first 4-7 characters are identical between two commits within the same repo.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Bumping to keep this open.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Not sure if I should try to keep this open. 
Final bump unless others interact.", "It's really on the Hub side of thing, so https://github.com/huggingface/huggingface_hub/issues/197 should be tracking it and it can be closed on this side (unless I'm missing something).", "Hey @BramVanroy (and @sgugger) we solved this through better UX, actually:\r\n\r\nif you take a look at commit history on https://huggingface.co/bert-base-uncased/commits/main you now have buttons to copy the full commit hash (exactly like on GitHub), thanks to @beurkinger on the Hub team.\r\n\r\nsee screenshot below:\r\n\r\n<img width=\"1131\" alt=\"Screenshot 2021-10-21 at 18 51 05\" src=\"https://user-images.githubusercontent.com/326577/138322377-5d0a7195-522b-4f50-af35-a663e10390d7.png\">\r\n\r\nHope this helps!" ]
1,629
1,634
1,634
COLLABORATOR
null
We were getting a size mismatch when loading a finetuned checkpoint. After looking at the model config, I found that it had been updated and that the [embedding/vocab size had increased](https://huggingface.co/GroNLP/bert-base-dutch-cased/commit/b23d41bddd4d5c925bec648458cabd7cc578e47e). This is slightly annoying but not the core of this issue. My way of dealing with this, then, was naturally to rely on version control and simply use the previous commit which still had the config that we used for finetuning ([this one](https://huggingface.co/GroNLP/bert-base-dutch-cased/commit/61330c1ca1aa3a688f8aa015059142a1b20d3f63)). I would have expected that I could then load this revised model with the commit as given on the website: ```python from transformers import AutoModel model_name = "GroNLP/bert-base-dutch-cased" revision = "61330c1" model = AutoModel.from_pretrained(model_name, revision=revision) ``` This does not work and throws an error that the model cannot be found, with the following message: ``` OSError: Can't load config for 'GroNLP/bert-base-dutch-cased'. Make sure that: - 'GroNLP/bert-base-dutch-cased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'GroNLP/bert-base-dutch-cased' is the correct path to a directory containing a config.json file ``` A **first improvement** would be to add to this error message something about revisions, because obviously `GroNLP/bert-base-dutch-cased` is a correct name. The deeper issue is that the model revision is simply not found when I use the commit tag on the website. By coincidence I noticed that the URL includes a much longer identifier that starts with the commit number that you can see on the website (the full commit hash). When you try that, the code does run and the revision is correctly loaded. ```python from transformers import AutoModel model_name = "GroNLP/bert-base-dutch-cased" revision = "61330c1ca1aa3a688f8aa015059142a1b20d3f63" model = AutoModel.from_pretrained(model_name, revision=revision) ``` So the bug is either - the model is not capable of looking up a revision based on the first seven characters of a hash (not sure if it should/could), - or the model hub website does not provide enough information to make this intuitive for users. One way that would help, for instance, is that the "use in transformers" button adapts itself to the current revision that a user is browsing and, when clicked, includes the revision (if any) in the example usage. And/or a copy function can be added to the commit identifier that - when clicked - copies the whole hash. ### Who can help Not sure who to tag for the model page, so tagging @sgugger and @LysandreJik
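A workaround consistent with the behavior described above is to resolve the full commit hash programmatically before passing it to `from_pretrained`. This is only a sketch, assuming a `huggingface_hub` version whose `HfApi.model_info` accepts a `revision` argument and returns an object with a `sha` attribute; it is not the official fix discussed in this issue:

```python
from huggingface_hub import HfApi
from transformers import AutoModel

model_name = "GroNLP/bert-base-dutch-cased"

# model_info(...).sha holds the full 40-character commit hash of the requested revision.
full_sha = HfApi().model_info(model_name, revision="main").sha
print(full_sha)

# Passing the full hash works, unlike the 7-character short form shown above.
model = AutoModel.from_pretrained(model_name, revision=full_sha)
```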
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13264/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13264/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13263/comments
https://api.github.com/repos/huggingface/transformers/issues/13263/events
https://github.com/huggingface/transformers/pull/13263
979,152,742
MDExOlB1bGxSZXF1ZXN0NzE5NjA0ODQx
13,263
Replace assert statement with if condition and raise ValueError
{ "login": "nishprabhu", "id": 33579638, "node_id": "MDQ6VXNlcjMzNTc5NjM4", "avatar_url": "https://avatars.githubusercontent.com/u/33579638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nishprabhu", "html_url": "https://github.com/nishprabhu", "followers_url": "https://api.github.com/users/nishprabhu/followers", "following_url": "https://api.github.com/users/nishprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/nishprabhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/nishprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nishprabhu/subscriptions", "organizations_url": "https://api.github.com/users/nishprabhu/orgs", "repos_url": "https://api.github.com/users/nishprabhu/repos", "events_url": "https://api.github.com/users/nishprabhu/events{/privacy}", "received_events_url": "https://api.github.com/users/nishprabhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? The goal of this PR is to replace assert statements with if statements and raise appropriate exceptions (see issue #12789) Replaces ``` assert lr_init > lr_end, f"lr_end ({lr_end}) must be be smaller than initial lr ({lr_init})" ``` with ``` if not (lr_init > lr_end): raise ValueError(f"lr_end ({lr_end}) must be be smaller than initial lr ({lr_init})") ``` in optimization.py Contributes towards fixing issue #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
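As a side note for readers, the pattern generalizes; a minimal sketch (the helper function name is hypothetical) of why the `if`/`raise` form is preferable:

```python
def check_lr_bounds(lr_init: float, lr_end: float) -> None:
    """Hypothetical helper illustrating the assert-to-ValueError pattern."""
    # Before: an assert disappears under `python -O` and raises a bare AssertionError.
    # assert lr_init > lr_end, f"lr_end ({lr_end}) must be smaller than initial lr ({lr_init})"

    # After: the check always runs and raises a descriptive, catchable ValueError.
    if not (lr_init > lr_end):
        raise ValueError(f"lr_end ({lr_end}) must be smaller than initial lr ({lr_init})")
```

One practical benefit: `ValueError` survives Python's `-O` optimization flag, which strips `assert` statements entirely.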
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13263/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13263", "html_url": "https://github.com/huggingface/transformers/pull/13263", "diff_url": "https://github.com/huggingface/transformers/pull/13263.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13263.patch", "merged_at": 1629908043000 }
https://api.github.com/repos/huggingface/transformers/issues/13262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13262/comments
https://api.github.com/repos/huggingface/transformers/issues/13262/events
https://github.com/huggingface/transformers/issues/13262
979,024,465
MDU6SXNzdWU5NzkwMjQ0NjU=
13,262
Printing weights of a pre-trained model
{ "login": "nivi1501", "id": 55272288, "node_id": "MDQ6VXNlcjU1MjcyMjg4", "avatar_url": "https://avatars.githubusercontent.com/u/55272288?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nivi1501", "html_url": "https://github.com/nivi1501", "followers_url": "https://api.github.com/users/nivi1501/followers", "following_url": "https://api.github.com/users/nivi1501/following{/other_user}", "gists_url": "https://api.github.com/users/nivi1501/gists{/gist_id}", "starred_url": "https://api.github.com/users/nivi1501/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nivi1501/subscriptions", "organizations_url": "https://api.github.com/users/nivi1501/orgs", "repos_url": "https://api.github.com/users/nivi1501/repos", "events_url": "https://api.github.com/users/nivi1501/events{/privacy}", "received_events_url": "https://api.github.com/users/nivi1501/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "In PyTorch, you can easily print out the weights of any model, like so (let's take BERT as an example):\r\n\r\n```\r\nfrom transformers import BertModel\r\n\r\nmodel = BertModel.from_pretrained(\"bert-base-uncased\")\r\n\r\nfor name, param in model.named_parameters():\r\n print(name, param.shape)\r\n```\r\nThis prints a long list of all parameter names, together with their shape. The keys, values and queries are parameters of each layer of BERT (BERT-base has 12 layers, so there are 12 key, value and query matrices). One of them is the following:\r\n\r\n```\r\nencoder.layer.0.attention.self.query.weight torch.Size([768, 768])\r\nencoder.layer.0.attention.self.query.bias torch.Size([768])\r\nencoder.layer.0.attention.self.key.weight torch.Size([768, 768])\r\nencoder.layer.0.attention.self.key.bias torch.Size([768])\r\nencoder.layer.0.attention.self.value.weight torch.Size([768, 768])\r\nencoder.layer.0.attention.self.value.bias torch.Size([768])\r\n```", "Yes, we can print the shape and parameter names by using this code. However, I wish to print the whole matrix [768,768] /[768]: (the value of the matrix). There must be some value assigned to this after the training is done.", "Just replace `print(name, param.shape)` by `print(name, param)` in the code above.", "Yea, that works. Thanks a lot " ]
1,629
1,629
1,629
NONE
null
When the key, query, and value matrices are generated, which weights are used? How do I print these weights?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13262/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13261/comments
https://api.github.com/repos/huggingface/transformers/issues/13261/events
https://github.com/huggingface/transformers/pull/13261
978,990,181
MDExOlB1bGxSZXF1ZXN0NzE5NDc2NDk2
13,261
Fix failing Hubert test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13261/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13261", "html_url": "https://github.com/huggingface/transformers/pull/13261", "diff_url": "https://github.com/huggingface/transformers/pull/13261.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13261.patch", "merged_at": 1629909687000 }
https://api.github.com/repos/huggingface/transformers/issues/13260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13260/comments
https://api.github.com/repos/huggingface/transformers/issues/13260/events
https://github.com/huggingface/transformers/pull/13260
978,957,405
MDExOlB1bGxSZXF1ZXN0NzE5NDQ5ODQ4
13,260
Add require flax to MT5 Flax test
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "🚀🚀🚀 this" ]
1,629
1,630
1,629
MEMBER
null
Adds a forgotten `require_flax`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13260/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13260", "html_url": "https://github.com/huggingface/transformers/pull/13260", "diff_url": "https://github.com/huggingface/transformers/pull/13260.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13260.patch", "merged_at": 1629910585000 }
https://api.github.com/repos/huggingface/transformers/issues/13259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13259/comments
https://api.github.com/repos/huggingface/transformers/issues/13259/events
https://github.com/huggingface/transformers/pull/13259
978,952,578
MDExOlB1bGxSZXF1ZXN0NzE5NDQ2MDMz
13,259
Some `model_type`s cannot be in the mapping
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
MEMBER
null
Some `model_type`s cannot be in the mapping. This PR offers a fallback for these cases. The following had stopped working (tested by `test_bert2bert_summarization`): ``` tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") ```
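The fallback idea can be sketched as follows; the mapping and helper below are simplified stand-ins for the real structures in `tokenization_auto.py`, not the actual implementation:

```python
# Simplified stand-in for the real mapping in tokenization_auto.py.
TOKENIZER_MAPPING_NAMES = {"bert": ("BertTokenizer", "BertTokenizerFast")}

def tokenizer_classes_for(config):
    """Sketch: resolve tokenizer class names for a config, with a fallback."""
    # Some configs (e.g. encoder-decoder ones) have a model_type not in the mapping.
    if config.model_type in TOKENIZER_MAPPING_NAMES:
        return TOKENIZER_MAPPING_NAMES[config.model_type]
    # Fallback: use the tokenizer_class recorded on the config itself, if any.
    tokenizer_class = getattr(config, "tokenizer_class", None)
    if tokenizer_class is not None:
        return (tokenizer_class, None)
    raise ValueError(f"No tokenizer found for model type {config.model_type!r}")
```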
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13259", "html_url": "https://github.com/huggingface/transformers/pull/13259", "diff_url": "https://github.com/huggingface/transformers/pull/13259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13259.patch", "merged_at": 1629910576000 }
https://api.github.com/repos/huggingface/transformers/issues/13258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13258/comments
https://api.github.com/repos/huggingface/transformers/issues/13258/events
https://github.com/huggingface/transformers/pull/13258
978,935,139
MDExOlB1bGxSZXF1ZXN0NzE5NDMxODc0
13,258
Add CLIP tokenizer to AutoTokenizer
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
MEMBER
null
CLIP was not added to the `AutoTokenizer` mapping
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13258/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13258", "html_url": "https://github.com/huggingface/transformers/pull/13258", "diff_url": "https://github.com/huggingface/transformers/pull/13258.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13258.patch", "merged_at": 1629910568000 }
https://api.github.com/repos/huggingface/transformers/issues/13257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13257/comments
https://api.github.com/repos/huggingface/transformers/issues/13257/events
https://github.com/huggingface/transformers/pull/13257
978,927,917
MDExOlB1bGxSZXF1ZXN0NzE5NDI2MTAx
13,257
Remove side effects of disabling gradient computation
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,629
1,629
MEMBER
null
Disabling gradient computation here affects all subsequent operations using torch, removing gradient computation globally. This made a few `Trainer` tests fail in the slow tests.
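For context, a short sketch of the difference between the global toggle and a scoped alternative (independent of the exact test code touched here):

```python
import torch

x = torch.ones(2, requires_grad=True)

# Global toggle: every subsequent operation in the process is affected until re-enabled.
torch.set_grad_enabled(False)
print((x * 2).requires_grad)  # False, and so is everything that follows
torch.set_grad_enabled(True)

# Scoped alternative: gradient tracking is restored automatically on exit.
with torch.no_grad():
    print((x * 2).requires_grad)  # False only inside the block
print((x * 2).requires_grad)      # True again outside
```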
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13257", "html_url": "https://github.com/huggingface/transformers/pull/13257", "diff_url": "https://github.com/huggingface/transformers/pull/13257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13257.patch", "merged_at": 1629883971000 }
https://api.github.com/repos/huggingface/transformers/issues/13256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13256/comments
https://api.github.com/repos/huggingface/transformers/issues/13256/events
https://github.com/huggingface/transformers/issues/13256
978,903,710
MDU6SXNzdWU5Nzg5MDM3MTA=
13,256
ingore_mismatched_sizes Wav2Vec2 unknown argument
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, ~kindly check out this [reply](https://github.com/huggingface/transformers/issues/13187#issuecomment-902116183) and see if it can solve your problem.~\r\n\r\nThere is a misspelling with your argument. `ignore_mismatched_sizes` not `ingore_mismatched_sizes`", "Oh, thank you.\r\nI didn't see that, I just copied it from another issue here" ]
1,629
1,629
1,629
CONTRIBUTOR
null
## Environment info - `transformers` version: - Platform: Win 10 - Python version: 3.8 - PyTorch version (GPU?): yes - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: ### Who can help @patrickvonplaten @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) added a size-mismatch ignore to model loading The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce ``` model = Wav2Vec2ForCTC.from_pretrained( File "C:\Users\flozi\anaconda3\envs\wav2vec\lib\site-packages\transformers\modeling_utils.py", line 1321, in from_pretrained model = cls(config, *model_args, **model_kwargs) TypeError: __init__() got an unexpected keyword argument 'ingore_mismatched_sizes' ``` ``` model = Wav2Vec2ForCTC.from_pretrained( model_args.model_name_or_path, cache_dir=model_args.cache_dir, activation_dropout=model_args.activation_dropout, attention_dropout=model_args.attention_dropout, hidden_dropout=model_args.hidden_dropout, feat_proj_dropout=model_args.feat_proj_dropout, mask_time_prob=model_args.mask_time_prob, gradient_checkpointing=model_args.gradient_checkpointing, layerdrop=model_args.layerdrop, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer), ingore_mismatched_sizes=True, ) ``` ## Expected behavior The model can be loaded, as ViT, DeiT and BEiT can, with the `ignore_mismatched_sizes` argument.
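As the comments note, the argument is simply misspelled; a corrected call, trimmed to the relevant arguments (the checkpoint name here is a placeholder for `model_args.model_name_or_path`), would be:

```python
from transformers import Wav2Vec2ForCTC

# Note the spelling: ignore_mismatched_sizes, not ingore_mismatched_sizes.
# Requires a transformers version recent enough to support this argument.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",  # placeholder checkpoint
    ignore_mismatched_sizes=True,
)
```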
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13256/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13255/comments
https://api.github.com/repos/huggingface/transformers/issues/13255/events
https://github.com/huggingface/transformers/issues/13255
978,894,241
MDU6SXNzdWU5Nzg4OTQyNDE=
13,255
Label Smoothing for Question Answering task
{ "login": "ubamba98", "id": 34593214, "node_id": "MDQ6VXNlcjM0NTkzMjE0", "avatar_url": "https://avatars.githubusercontent.com/u/34593214?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ubamba98", "html_url": "https://github.com/ubamba98", "followers_url": "https://api.github.com/users/ubamba98/followers", "following_url": "https://api.github.com/users/ubamba98/following{/other_user}", "gists_url": "https://api.github.com/users/ubamba98/gists{/gist_id}", "starred_url": "https://api.github.com/users/ubamba98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ubamba98/subscriptions", "organizations_url": "https://api.github.com/users/ubamba98/orgs", "repos_url": "https://api.github.com/users/ubamba98/repos", "events_url": "https://api.github.com/users/ubamba98/events{/privacy}", "received_events_url": "https://api.github.com/users/ubamba98/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sgugger ", "I'm not entirely sure how straightforward this could be, plus it seems like a very narrow use case. I think this should be implemented independently by the user with a custom `compute_loss` function.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,629
1,633
1,633
NONE
null
# 🚀 Feature request Currently, label smoothing is only applied when "labels" is present in the inputs in the `compute_loss` function of the `Trainer` class, which is not the case for question answering by default. ## Your contribution I would like to work on this issue and submit a PR that modifies `compute_loss` to use `label_names`, so that label smoothing is also applied for question answering.
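As suggested in the comments, this can already be done with a custom `compute_loss`; a rough sketch (assuming PyTorch >= 1.10 for the built-in `label_smoothing` argument, and the usual `start_positions`/`end_positions` labels of extractive QA) is:

```python
import torch
from transformers import Trainer

class QATrainerWithLabelSmoothing(Trainer):
    """Sketch: label smoothing for extractive QA via a custom compute_loss."""

    def compute_loss(self, model, inputs, return_outputs=False):
        start_positions = inputs.pop("start_positions")
        end_positions = inputs.pop("end_positions")
        outputs = model(**inputs)
        # label_smoothing requires PyTorch >= 1.10.
        loss_fct = torch.nn.CrossEntropyLoss(label_smoothing=0.1)
        loss = (
            loss_fct(outputs.start_logits, start_positions)
            + loss_fct(outputs.end_logits, end_positions)
        ) / 2
        return (loss, outputs) if return_outputs else loss
```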
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13255/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13254/comments
https://api.github.com/repos/huggingface/transformers/issues/13254/events
https://github.com/huggingface/transformers/issues/13254
978,860,590
MDU6SXNzdWU5Nzg4NjA1OTA=
13,254
Is the sentiment analysis model only for English?
{ "login": "ArlanCooper", "id": 45280520, "node_id": "MDQ6VXNlcjQ1MjgwNTIw", "avatar_url": "https://avatars.githubusercontent.com/u/45280520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArlanCooper", "html_url": "https://github.com/ArlanCooper", "followers_url": "https://api.github.com/users/ArlanCooper/followers", "following_url": "https://api.github.com/users/ArlanCooper/following{/other_user}", "gists_url": "https://api.github.com/users/ArlanCooper/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArlanCooper/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArlanCooper/subscriptions", "organizations_url": "https://api.github.com/users/ArlanCooper/orgs", "repos_url": "https://api.github.com/users/ArlanCooper/repos", "events_url": "https://api.github.com/users/ArlanCooper/events{/privacy}", "received_events_url": "https://api.github.com/users/ArlanCooper/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is a bit intimidating to me haha. I'll use deep learning to translate your issue (😮 ):\r\n\r\nDeepL says:\r\n\r\nclassifier = pipeline('sentiment-analysis')\r\nIs the model encapsulated in this only for English? I found it completely wrong using Chinese, e.g.\r\nwords = 'this is a good service'\r\njudge1 = classifier(words)\r\nwords = 'This is a good service'\r\njudge2 = classifier(words)\r\nprint(judge1,judge2)\r\n\r\n=> answer: yes, the default sentiment analysis pipeline is English-only, as it uses a `DistilBertForSequenceClassification` model fine-tuned on English data (I'm not sure, Ii's a bummer that it's difficult to know what the default model is that is used for each pipeline, see #12845). You can indeed, as Patrick mentions below, use a custom model from the hub.", "You could use the model hub to find sentiment analysis models in Chinese as follows:\r\n\r\nhttps://huggingface.co/models?language=zh&pipeline_tag=text-classification&sort=downloads", "And then do:\r\n\r\n```python\r\nclassifier = pipeline('sentiment-analysis', model=\"uer/roberta-base-finetuned-chinanews-chinese\")\r\n```", "> And then do:\r\n> \r\n> ```python\r\n> classifier = pipeline('sentiment-analysis', model=\"uer/roberta-base-finetuned-chinanews-chinese\")\r\n> ```\r\n\r\nthank you for your answer,that's help a lot", "> This is a bit intimidating to me haha. I'll use deep learning to translate your issue (😮 ):\r\n> \r\n> DeepL says:\r\n> \r\n> classifier = pipeline('sentiment-analysis')\r\n> Is the model encapsulated in this only for English? I found it completely wrong using Chinese, e.g.\r\n> words = 'this is a good service'\r\n> judge1 = classifier(words)\r\n> words = 'This is a good service'\r\n> judge2 = classifier(words)\r\n> print(judge1,judge2)\r\n> \r\n> => answer: yes, the default sentiment analysis pipeline is English-only, as it uses a `DistilBertForSequenceClassification` model fine-tuned on English data (I'm not sure, Ii's a bummer that it's difficult to know what the default model is that is used for each pipeline, see #12845). You can indeed, as Patrick mentions below, use a custom model from the hub.\r\n\r\nsorry i use chinese, and thank you for your answer,that's help a lot" ]
1,629
1,629
1,629
NONE
null
classifier = pipeline('sentiment-analysis') Is the model wrapped here only for English? I found it completely wrong when using Chinese, for example: words = '这是一个不错的服务' # "this is a decent service" judge1 = classifier(words) words = '这是一个很好的服务' # "this is a very good service" judge2 = classifier(words) print(judge1, judge2) Output: [{'label': 'NEGATIVE', 'score': 0.9500061273574829}] [{'label': 'NEGATIVE', 'score': 0.92015540599823}]
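Combining the fix from this thread into one runnable snippet (the model name is the one suggested in the comments):

```python
from transformers import pipeline

# A Chinese text-classification model from the hub, as suggested in the thread.
classifier = pipeline("sentiment-analysis", model="uer/roberta-base-finetuned-chinanews-chinese")

words = "这是一个很好的服务"  # "this is a very good service"
print(classifier(words))
```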
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13254/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13253/comments
https://api.github.com/repos/huggingface/transformers/issues/13253/events
https://github.com/huggingface/transformers/issues/13253
978,750,418
MDU6SXNzdWU5Nzg3NTA0MTg=
13,253
Cannot use RemBert
{ "login": "KappalaSaikumar", "id": 65230225, "node_id": "MDQ6VXNlcjY1MjMwMjI1", "avatar_url": "https://avatars.githubusercontent.com/u/65230225?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KappalaSaikumar", "html_url": "https://github.com/KappalaSaikumar", "followers_url": "https://api.github.com/users/KappalaSaikumar/followers", "following_url": "https://api.github.com/users/KappalaSaikumar/following{/other_user}", "gists_url": "https://api.github.com/users/KappalaSaikumar/gists{/gist_id}", "starred_url": "https://api.github.com/users/KappalaSaikumar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KappalaSaikumar/subscriptions", "organizations_url": "https://api.github.com/users/KappalaSaikumar/orgs", "repos_url": "https://api.github.com/users/KappalaSaikumar/repos", "events_url": "https://api.github.com/users/KappalaSaikumar/events{/privacy}", "received_events_url": "https://api.github.com/users/KappalaSaikumar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Make sure to install Transformers from master: `pip install git+https://github.com/huggingface/transformers.git`", "Thanks @NielsRogge It's working fine now" ]
1,629
1,629
1,629
NONE
null
![Screenshot (1508)](https://user-images.githubusercontent.com/65230225/130734536-0d498812-2ee4-45fe-96a0-6c33fc007916.png) When I try to use the AutoTokenizer for the newly added RemBERT model, it gives me this error. ![Screenshot (1509)](https://user-images.githubusercontent.com/65230225/130736371-869e52d9-d4cb-4dc1-a480-8234d658420c.png) When I try importing RemBertTokenizer, it gives me this error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13253/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13252/comments
https://api.github.com/repos/huggingface/transformers/issues/13252/events
https://github.com/huggingface/transformers/issues/13252
978,739,567
MDU6SXNzdWU5Nzg3Mzk1Njc=
13,252
Add `--max_length` argument in seq2seq trainer.
{ "login": "sbmaruf", "id": 32699797, "node_id": "MDQ6VXNlcjMyNjk5Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/32699797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbmaruf", "html_url": "https://github.com/sbmaruf", "followers_url": "https://api.github.com/users/sbmaruf/followers", "following_url": "https://api.github.com/users/sbmaruf/following{/other_user}", "gists_url": "https://api.github.com/users/sbmaruf/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbmaruf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbmaruf/subscriptions", "organizations_url": "https://api.github.com/users/sbmaruf/orgs", "repos_url": "https://api.github.com/users/sbmaruf/repos", "events_url": "https://api.github.com/users/sbmaruf/events{/privacy}", "received_events_url": "https://api.github.com/users/sbmaruf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is added by the PR mentioned above.", "Thanks a lot for the new feature. Closing the issue. " ]
1,629
1,630
1,630
NONE
null
# 🚀 Feature request Currently the [seq2seq Trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py) uses `--max_length` for the prediction step. However, there is no `--max_length` argument in the trainer (see [here](https://huggingface.co/transformers/main_classes/trainer.html?highlight=trainingargument#transformers.Seq2SeqTrainingArguments) and [here](https://huggingface.co/transformers/_modules/transformers/training_args_seq2seq.html#Seq2SeqTrainingArguments)). During training (with `--predict_with_generate`), when the evaluate function is called, it performs the [prediction step](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L128) with `model.config.max_length` via this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L166). Unless you call `trainer.evaluate(eval_dataset=eval_dataset, max_length=max_target_length)` manually, at training time it uses `model.config.max_length`. Also, without reviewing the source code, it is difficult to grasp this. So at training time, for `prediction_loop`, the model performs evaluation based on [this](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_seq2seq.py#L166). It uses `self.model.config.max_length` for doing prediction, which is kind of confusing. Let's look into this: ``` >>> import transformers >>> transformers.__version__ '4.10.0.dev0' >>> model = transformers.AutoModel.from_pretrained("google/mt5-large") Some weights of the model checkpoint at google/mt5-large were not used when initializing MT5Model: ['lm_head.weight'] - This IS expected if you are initializing MT5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing MT5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). >>> model.config.max_length 20 ``` A user who is not careful about this argument would totally miss this. Personally, I spent quite some time on this. My `compute_metrics()` values on the dev set during training were not good, but at the end of training the score on the test dataset (using my own call to `trainer.evaluate()`) was high. ## Motivation Adding `--max_length` to [Seq2SeqTrainer](https://github.com/huggingface/transformers/blob/master/src/transformers/training_args_seq2seq.py#L27) would help the user be aware of this parameter. @sgugger
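Until such an argument exists, the workaround described above can at least be made explicit. A sketch, assuming `trainer`, `eval_dataset`, and `max_target_length` are already defined as in the summarization examples (the values are placeholders):

```python
# Pass generation arguments explicitly so evaluation does not silently fall back
# to model.config.max_length (20 for google/mt5-large, as shown above).
metrics = trainer.evaluate(
    eval_dataset=eval_dataset,
    max_length=max_target_length,  # placeholder, e.g. 128
    num_beams=4,                   # placeholder
)

# Alternatively, override the model default once, before training starts:
trainer.model.config.max_length = max_target_length
```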
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13252/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/13251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13251/comments
https://api.github.com/repos/huggingface/transformers/issues/13251/events
https://github.com/huggingface/transformers/pull/13251
978,691,349
MDExOlB1bGxSZXF1ZXN0NzE5MjM4Mzc3
13,251
fix `tokenizer_class_from_name` for models with `-` in the name
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Should we turn your code snippet into a test? It's almost complete - just needs an assert.", "Yes, that would be great!", "OK, done. \r\n\r\nIt's not a perfect test as it'll fail on the first invalid entry rather than test them all, but it's probably good enough in this situation.\r\n\r\nThank you for doing the heavy lifting for adding this test, @LysandreJik ", "Hi there! Thanks a lot for fixing this while I was away. There is a `model_type_to_module_name` function defined in `configuration_auto` that already does all what you added. Will make a PR to switch to that. The tests should make sure it doesn't break anything." ]
1,629
1,630
1,629
CONTRIBUTOR
null
https://github.com/huggingface/transformers/pull/13023 breaks for some models with `-` in their name, e.g. `xlm-roberta`. For example: ``` Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_mlm.py", line 550, in <module> main() File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_mlm.py", line 337, in main tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/tokenization_auto.py", line 432, in from_pretrained tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate) File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/models/auto/tokenization_auto.py", line 226, in tokenizer_class_from_name module = importlib.import_module(f".{module_name}", "transformers.models") File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'transformers.models.xlm-roberta' ``` As you can see, it tries to import "`transformers.models.xlm-roberta`". To reproduce: ``` RUN_SLOW=1 pytest tests/deepspeed -k clm_xlm_roberta ``` ``` # module_name, tokenizers debug print: xlm-roberta ('XLMRobertaTokenizer', 'XLMRobertaTokenizerFast') ``` This PR fixes it: ``` module = importlib.import_module(f".{module_name.replace('-', '_')}", "transformers.models") ``` Oddly enough, I don't get this problem if I run `xlm-roberta-base`, so this is an edge case, as the core models seem not to trigger the problem. Not sure why. In the deepspeed test suite the 2 failing tests were: ``` RUN_SLOW=1 pytest tests/deepspeed -k clm_xlm_roberta ``` Now it has a core test - thanks @LysandreJik. @LysandreJik also pushed a fix for model names that mismatch their model files, which is the case with `openai-gpt`. @LysandreJik, @sgugger
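The essence of the fix can be demonstrated in isolation; the helper name below is mine, but the `replace('-', '_')` line is the actual one-line change quoted above:

```python
import importlib

def import_model_module(module_name: str):
    # Model types like "xlm-roberta" use a dash, while the package directory uses
    # an underscore (transformers.models.xlm_roberta), hence the replace().
    return importlib.import_module(f".{module_name.replace('-', '_')}", "transformers.models")

module = import_model_module("xlm-roberta")
print(module.__name__)  # transformers.models.xlm_roberta
```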
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13251/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13251", "html_url": "https://github.com/huggingface/transformers/pull/13251", "diff_url": "https://github.com/huggingface/transformers/pull/13251.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13251.patch", "merged_at": 1629966555000 }
https://api.github.com/repos/huggingface/transformers/issues/13250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13250/comments
https://api.github.com/repos/huggingface/transformers/issues/13250/events
https://github.com/huggingface/transformers/pull/13250
978,690,725
MDExOlB1bGxSZXF1ZXN0NzE5MjM3ODY2
13,250
Check for None before iterating
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? This PR fixes the error mentioned in #13234. This is a quick solution. For long-term development, should we change the default value of `_keys_to_ignore_on_xxx` from `None` to an empty `list`, so that we can skip checking for `None` before any iteration? https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/modeling_utils.py#L444-L450 https://github.com/huggingface/transformers/blob/b1198a8440cc05f569b0bc22038993a1e5e707ab/src/transformers/modeling_tf_utils.py#L626-L629 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @sgugger
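The guard itself is a one-line pattern; either form below avoids iterating over `None` (the attribute name comes from the linked snippets, the class is illustrative):

```python
class ExampleModel:
    _keys_to_ignore_on_save = None  # default, as in PreTrainedModel

    def filter_keys(self, keys):
        # Option 1: explicit None check before iterating.
        if self._keys_to_ignore_on_save is not None:
            keys = [k for k in keys if k not in self._keys_to_ignore_on_save]
        return keys

    def filter_keys_alt(self, keys):
        # Option 2: fall back to an empty list, making iteration always safe.
        ignore = self._keys_to_ignore_on_save or []
        return [k for k in keys if k not in ignore]
```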
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13250/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13250", "html_url": "https://github.com/huggingface/transformers/pull/13250", "diff_url": "https://github.com/huggingface/transformers/pull/13250.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13250.patch", "merged_at": 1630325931000 }
https://api.github.com/repos/huggingface/transformers/issues/13249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13249/comments
https://api.github.com/repos/huggingface/transformers/issues/13249/events
https://github.com/huggingface/transformers/issues/13249
978,678,218
MDU6SXNzdWU5Nzg2NzgyMTg=
13,249
How to fine-tune mT5 on the XGLUE-NTG task
{ "login": "koukoulala", "id": 30341159, "node_id": "MDQ6VXNlcjMwMzQxMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koukoulala", "html_url": "https://github.com/koukoulala", "followers_url": "https://api.github.com/users/koukoulala/followers", "following_url": "https://api.github.com/users/koukoulala/following{/other_user}", "gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}", "starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions", "organizations_url": "https://api.github.com/users/koukoulala/orgs", "repos_url": "https://api.github.com/users/koukoulala/repos", "events_url": "https://api.github.com/users/koukoulala/events{/privacy}", "received_events_url": "https://api.github.com/users/koukoulala/received_events", "type": "User", "site_admin": false }
[ { "id": 1897896961, "node_id": "MDU6TGFiZWwxODk3ODk2OTYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Migration", "name": "Migration", "color": "e99695", "default": false, "description": "" } ]
closed
false
null
[]
[ "From the T5 author (I asked him):\r\n\r\n> since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.\r\n\r\nHence, no prefix should be used. However, the performance you get without prefix is similar, you say?", "> From the T5 author (I asked him):\r\n> \r\n> > since mT5 was pre-trained unsupervisedly, there's no real advantage to using a task prefix during single-task fine-tuning. If you are doing multi-task fine-tuning, you should use a prefix.\r\n> \r\n> Hence, no prefix should be used. However, the performance you get without prefix is similar, you say?\r\n\r\nThank you very much for your reply. Does MT5 have any finetuning scripts on multilingual title generation task? Why is it so bad in other languages? Does MT5 have any special hyperparameters that need to be set? \r\nhere is my command: python -u -m torch.distributed.launch --nproc_per_node 4 --use_env examples/pytorch/summarization/run_xglue_no_trainer.py --model_name_or_path=google/mt5-base --dataset_name=ntg - --per_device_train_batch_size=2 --per_device_eval_batch_size=4\"\r\n\r\nThansks!\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "@koukoulala @NielsRogge I had also similar doubt, instead of MT5, I want to finetune M2M100 on more than one language pair. Any leads on how to achieve that? I am able to finetune on single language pair, but how to finetune on more than one pair simultaneously?" ]
1,629
1,640
1,633
NONE
null
# 📚 Migration ## Information <!-- Important information --> Model I am using (Bert, XLNet ...): google/mt5-base Language I am using the model on (English, Chinese ...): multilingual The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) Just a small change in ./examples/pytorch/summarization/run_summarization_no_trainer.py to suit the NTG task and the BLEU evaluation metric. The task I am working on is: * [x] an official GLUE/SQUaD task: (give the name): XGLUE-NTG * [ ] my own task or dataset: (give details below) ## Details <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> When training mT5 with multilingual data, do I need to add the "--source_prefix" argument as with T5? If so, is --source_prefix="Summarize: " right? But when this was added, the results were poor in all languages but English. Is there a problem with my parameter setting? ![image](https://user-images.githubusercontent.com/30341159/130722965-4911b07b-aee5-4651-8516-be3b8d4a8d0a.png) Also, the result with the "--source_prefix" parameter above is actually the same as the result without the parameter below: ![image](https://user-images.githubusercontent.com/30341159/130723118-a50ae07e-c541-48dc-ab34-839562b7d309.png) Should we set a different --source_prefix for each language, and if so, how? ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: 3.6 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): ## Checklist - I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - I checked if a related official extension example runs on my machine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13249/timeline
completed
null
null
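The record above turns on whether mT5 needs a task prefix during fine-tuning: per the comment thread, no prefix is needed for single-task fine-tuning, and a prefix only matters when mixing tasks. The sketch below illustrates both preprocessing variants with the standard `transformers` tokenizer API; it is a minimal illustration, not the script used in the issue, and the dataset field names ("document", "title"), sequence lengths, and prefix string are assumptions chosen for the example.

```python
# Minimal sketch: preprocessing NTG-style examples for mT5 fine-tuning.
# Field names ("document", "title"), lengths, and the prefix string are
# illustrative assumptions, not taken from issue 13249.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")

def preprocess(example, prefix=""):
    # Single-task fine-tuning: keep prefix="" -- mT5 was pre-trained
    # without a supervised task mixture, so a prefix adds no signal.
    # Multi-task fine-tuning: pass a marker such as "summarize: "
    # so the model can tell the tasks apart.
    model_inputs = tokenizer(
        prefix + example["document"],
        max_length=512,
        truncation=True,
    )
    # Tokenize the target title with the target-side settings.
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(
            example["title"],
            max_length=64,
            truncation=True,
        )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```

Under these assumptions, `preprocess(ex)` matches the no-prefix single-task setup recommended in the thread, while `preprocess(ex, prefix="summarize: ")` corresponds to the multi-task case; the same prefix would be used across languages rather than a per-language one.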
https://api.github.com/repos/huggingface/transformers/issues/13248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13248/comments
https://api.github.com/repos/huggingface/transformers/issues/13248/events
https://github.com/huggingface/transformers/pull/13248
978,661,054
MDExOlB1bGxSZXF1ZXN0NzE5MjEzNzc0
13,248
[doc] correct TP implementation resources
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,629
1,630
1,630
CONTRIBUTOR
null
This PR fixes a few implementation links: it removes an incorrect one and adds a new one. @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13248/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/13248", "html_url": "https://github.com/huggingface/transformers/pull/13248", "diff_url": "https://github.com/huggingface/transformers/pull/13248.diff", "patch_url": "https://github.com/huggingface/transformers/pull/13248.patch", "merged_at": 1630406843000 }