url                 stringlengths   62 to 66
repository_url      stringclasses   1 value
labels_url          stringlengths   76 to 80
comments_url        stringlengths   71 to 75
events_url          stringlengths   69 to 73
html_url            stringlengths   50 to 56
id                  int64           377M to 2.15B
node_id             stringlengths   18 to 32
number              int64           1 to 29.2k
title               stringlengths   1 to 487
user                dict
labels              list
state               stringclasses   2 values
locked              bool            2 classes
assignee            dict
assignees           list
comments            sequence
created_at          int64           1.54k to 1.71k
updated_at          int64           1.54k to 1.71k
closed_at           int64           1.54k to 1.71k
author_association  stringclasses   4 values
active_lock_reason  stringclasses   2 values
body                stringlengths   0 to 234k
reactions           dict
timeline_url        stringlengths   71 to 75
state_reason        stringclasses   3 values
draft               bool            2 classes
pull_request        dict
https://api.github.com/repos/huggingface/transformers/issues/7821
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7821/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7821/comments
https://api.github.com/repos/huggingface/transformers/issues/7821/events
https://github.com/huggingface/transformers/issues/7821
722,472,461
MDU6SXNzdWU3MjI0NzI0NjE=
7,821
[testing] trainer tests fail - 2 issues
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "All tests are passing on my side and the CI so it's something in your environment that triggers the failures. Could you add what you have inside it?", "This?\r\n> the test should skip and not fail if the user doesn't have `Comet.ml`\r\n", "I don't have `comet.ml` installed in my dev env. I think the problem might be having it installed without the key properly set.", "Right, so this comes from `transformers`:\r\n\r\n```\r\nsrc/transformers/integrations.py: experiment = comet_ml.Experiment(**args)\r\nsrc/transformers/trainer_tf.py: experiment = comet_ml.Experiment(**args)\r\n```\r\nand it fails in:\r\n```\r\n if self.api_key is None:\r\n> raise ValueError(\r\n \"Comet.ml requires an API key. Please provide as the \"\r\n \"first argument to Experiment(api_key) or as an environment\"\r\n \" variable named COMET_API_KEY \"\r\n )\r\n```\r\nand it has a check:\r\n```\r\nif is_comet_available():\r\n import comet_ml\r\n```\r\nso the tests need to do something about it.\r\n\r\nI have never explicitly installed `comet_ml`, but some other package installed it as a dependency. Yet, I don't have this service configured. It's like `wandb` - it needs to gracefully skip tests if `comet_ml` is installed but not configured. To configure you have to go to that service website, sign up, create API key, etc. etc.\r\n\r\nI suppose there needs to be a skip decorator that does something like:\r\n```\r\nfrom .integrations import is_comet_available\r\ndef working_comet_ml(test_case):\r\n # handle the case where comel_ml is installed but not configured\r\n try:\r\n if is_comet_available:\r\n import comet_ml\r\n comet_ml.Experiment()\r\n except:\r\n return unittest.skip(\"test is slow\")(test_case)\r\n return test_case\r\n```\r\nthis is untested. \r\n\r\nMost likely you should be able to reproduce the problem by just installing `comet_ml`\r\n\r\nOr perhaps those 2 places where it's used in `transformers` it should issue a warning, rather then raise an error. Which is probably a better way to handle that.\r\n\r\n**edit**: thinking more, it is the library that shouldn't crash, and not the tests that should skip. It's an optional feature not a requirement, so it should behave gracefully.", "Seems to come from: https://github.com/huggingface/transformers/commit/b923871bb78f538e3c2e4bf36776986c800da1ae\r\nSo I guess we need to ask @dsblank, who made the initial PR and I see you were in the reviewers too.", "Yes I think raising a warning should be better than a hard error since it does not mean the training has to stop, and it blocks the tests. If you can suggest a PR with those changes, I'd happily review it.", "Found a better solution https://github.com/huggingface/transformers/pull/7830\r\n\r\n`comet_ml` needs to have `comet_ml.ensure_configured()` ala `wandb.ensure_configured()` - I am not sure whether ` @dsblank could add/ask for it. Thank you!\r\n\r\nUntil then it'll be emulated by:\r\n```\r\ntry:\r\n # Comet needs to be imported before any ML frameworks\r\n import comet_ml # noqa: F401\r\n\r\n # XXX: there should be comet_ml.ensure_configured(), like `wandb`, for now emulate it\r\n comet_ml.Experiment(project_name=\"ensure_configured\")\r\n _has_comet = True\r\nexcept (ImportError, ValueError):\r\n _has_comet = False\r\n```\r\n", "and wandb needed the latest version - which fixed the `OSError: [Errno 29] Illegal seek` issue" ]
1,602
1,602
1,602
CONTRIBUTOR
null
trainer tests seem to have several issues: ``` USE_CUDA=1 pytest tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer ``` 1. ``` if self.api_key is None: > raise ValueError( "Comet.ml requires an API key. Please provide as the " "first argument to Experiment(api_key) or as an environment" " variable named COMET_API_KEY " ) ``` the test should skip and not fail if the user doesn't have Comet.ml 2. wandb seems to still be problematic when there is an error in the test. ``` Traceback (most recent call last): File "/home/stas/anaconda3/envs/main-38/bin/pytest", line 8, in <module> sys.exit(console_main()) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 180, in console_main code = main() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 157, in main ret = config.hook.pytest_cmdline_main( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/hooks.py", line 286, in __call__ return self._hookexec(self, self.get_hookimpls(), kwargs) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py", line 93, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/manager.py", line 84, in <lambda> self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall( File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py", line 208, in _multicall return outcome.get_result() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py", line 80, in get_result raise ex[1].with_traceback(ex[2]) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/pluggy/callers.py", line 187, in _multicall res = hook_impl.function(*args) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py", line 289, in pytest_cmdline_main return wrap_session(config, _main) File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/main.py", line 284, in wrap_session config._ensure_unconfigure() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 920, in _ensure_unconfigure fin() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 632, in stop_global_capturing self._global_capturing.pop_outerr_to_orig() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 522, in pop_outerr_to_orig out, err = self.readouterr() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 563, in readouterr out = self.out.snap() File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/_pytest/capture.py", line 481, in snap self.tmpfile.seek(0) OSError: [Errno 29] Illegal seek ``` I have to add `WANDB_DISABLED=true` to overcome those. There was an issue about it earlier, but I am not able to find it. Full report: ``` FAILED tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training - ValueError: Comet.ml requires an API key. Please provide as the first argument... 
ERROR tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_custom_optimizer - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_evaluate - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_evaluate - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_flos_extraction - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_flos_extraction - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_load_best_model_at_end - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_load_best_model_at_end - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_model_init - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_model_init - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_num_train_epochs_in_training - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_num_train_epochs_in_training - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_number_of_steps_in_training - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_predict - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_predict - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_reproducible_training - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_reproducible_training - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_save_checkpoints - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_save_checkpoints - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_train_and_eval_dataloaders - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_train_and_eval_dataloaders - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_lm - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_lm - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_eval_mrpc - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_iterable_dataset - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_iterable_dataset - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_with_datasets - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_trainer_with_datasets - OSError: [Errno 29] Illegal seek ERROR 
tests/test_trainer.py::TrainerIntegrationTest::test_training_arguments_are_left_untouched - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer.py::TrainerIntegrationTest::test_training_arguments_are_left_untouched - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_add_remove_callback - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_add_remove_callback - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_init_callback - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_callback.py::TrainerCallbackTest::test_init_callback - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_utils.py::TrainerUtilsTest::test_distributed_tensor_gatherer - OSError: [Errno 29] Illegal seek ERROR tests/test_trainer_utils.py::TrainerUtilsTest::test_distributed_tensor_gatherer - OSError: [Errno 29] Illegal seek ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_find_code_in_transformers - OSError: [Errno 29] Illegal seek ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_find_code_in_transformers - OSError: [Errno 29] Illegal seek ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_is_copy_consistent - OSError: [Errno 29] Illegal seek ERROR tests/test_utils_check_copies.py::CopyCheckTester::test_is_copy_consistent - OSError: [Errno 29] Illegal seek ``` Thank you! ## Environment info ``` - `transformers` version: master - Platform: Linux-4.15.0-118-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.0.dev20201014 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7821/timeline
completed
null
null
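The fix sketched in the comments above treats Comet.ml as an optional integration: the check has to verify not only that `comet_ml` is importable but also that it is actually configured with an API key, and otherwise tests should be skipped (or the integration silently disabled) rather than erroring out. Below is a minimal, hedged version of that availability probe and skip decorator, based on the snippets quoted in the thread; the `Experiment(project_name="ensure_configured")` probe is only an emulation of the `ensure_configured()` call the thread wishes `comet_ml` exposed.

```python
import unittest

try:
    # Comet needs to be imported before any ML frameworks.
    import comet_ml  # noqa: F401

    # Probe whether comet_ml is actually configured: Experiment() raises
    # ValueError when no API key is found (the error shown in the issue body).
    comet_ml.Experiment(project_name="ensure_configured")
    _has_comet = True
except (ImportError, ValueError):
    _has_comet = False


def require_comet_ml(test_case):
    """Decorator that skips a test unless comet_ml is installed and configured."""
    if not _has_comet:
        return unittest.skip("test requires a configured comet_ml")(test_case)
    return test_case
```

As noted in the thread, the library-side fix (warning instead of raising when the integration is unconfigured) is the more robust option, since Comet.ml is an optional feature rather than a requirement.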
https://api.github.com/repos/huggingface/transformers/issues/7820
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7820/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7820/comments
https://api.github.com/repos/huggingface/transformers/issues/7820/events
https://github.com/huggingface/transformers/pull/7820
722,453,734
MDExOlB1bGxSZXF1ZXN0NTA0MjAxODM2
7,820
Fix small type hinting error
{ "login": "AndreaSottana", "id": 48888970, "node_id": "MDQ6VXNlcjQ4ODg4OTcw", "avatar_url": "https://avatars.githubusercontent.com/u/48888970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AndreaSottana", "html_url": "https://github.com/AndreaSottana", "followers_url": "https://api.github.com/users/AndreaSottana/followers", "following_url": "https://api.github.com/users/AndreaSottana/following{/other_user}", "gists_url": "https://api.github.com/users/AndreaSottana/gists{/gist_id}", "starred_url": "https://api.github.com/users/AndreaSottana/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AndreaSottana/subscriptions", "organizations_url": "https://api.github.com/users/AndreaSottana/orgs", "repos_url": "https://api.github.com/users/AndreaSottana/repos", "events_url": "https://api.github.com/users/AndreaSottana/events{/privacy}", "received_events_url": "https://api.github.com/users/AndreaSottana/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik done, thanks for pointing that out!" ]
1,602
1,603
1,603
CONTRIBUTOR
null
Fix small type hinting error which causes type warnings in some IDEs when using `torch.device` instead of a string
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7820/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7820", "html_url": "https://github.com/huggingface/transformers/pull/7820", "diff_url": "https://github.com/huggingface/transformers/pull/7820.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7820.patch", "merged_at": 1603088070000 }
https://api.github.com/repos/huggingface/transformers/issues/7819
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7819/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7819/comments
https://api.github.com/repos/huggingface/transformers/issues/7819/events
https://github.com/huggingface/transformers/pull/7819
722,421,929
MDExOlB1bGxSZXF1ZXN0NTA0MTc1MjU1
7,819
Create README.md
{ "login": "MichalPleban", "id": 9946531, "node_id": "MDQ6VXNlcjk5NDY1MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/9946531?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichalPleban", "html_url": "https://github.com/MichalPleban", "followers_url": "https://api.github.com/users/MichalPleban/followers", "following_url": "https://api.github.com/users/MichalPleban/following{/other_user}", "gists_url": "https://api.github.com/users/MichalPleban/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichalPleban/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichalPleban/subscriptions", "organizations_url": "https://api.github.com/users/MichalPleban/orgs", "repos_url": "https://api.github.com/users/MichalPleban/repos", "events_url": "https://api.github.com/users/MichalPleban/events{/privacy}", "received_events_url": "https://api.github.com/users/MichalPleban/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,602
1,603
1,603
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7819/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7819", "html_url": "https://github.com/huggingface/transformers/pull/7819", "diff_url": "https://github.com/huggingface/transformers/pull/7819.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7819.patch", "merged_at": 1603283401000 }
https://api.github.com/repos/huggingface/transformers/issues/7818
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7818/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7818/comments
https://api.github.com/repos/huggingface/transformers/issues/7818/events
https://github.com/huggingface/transformers/pull/7818
722,416,780
MDExOlB1bGxSZXF1ZXN0NTA0MTcwOTc4
7,818
Remove masked_lm_labels from returned dictionary in DataCollatorForNextSentencePrediction
{ "login": "vblagoje", "id": 458335, "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vblagoje", "html_url": "https://github.com/vblagoje", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "repos_url": "https://api.github.com/users/vblagoje/repos", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? This PR removes ```masked_lm_labels``` from dictionary returned in DataCollatorForNextSentencePrediction ```__call__``` method. I noticed the warning while BERT pre-training. I removed the lines (452,453) and the warning was finally gone. See discussion George (@gmihaila) and I already had in the [merged PR](https://github.com/huggingface/transformers/pull/7595) <!-- Remove if not applicable --> ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. **Discussed on forum but on previously merged [PR](https://github.com/huggingface/transformers/pull/7595) ** - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Not needed** - [x] Did you write any new necessary tests? **Not needed** ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @LysandreJik, @sgugger, and @gmihaila can review
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7818/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7818/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7818", "html_url": "https://github.com/huggingface/transformers/pull/7818", "diff_url": "https://github.com/huggingface/transformers/pull/7818.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7818.patch", "merged_at": 1602832330000 }
https://api.github.com/repos/huggingface/transformers/issues/7817
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7817/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7817/comments
https://api.github.com/repos/huggingface/transformers/issues/7817/events
https://github.com/huggingface/transformers/pull/7817
722,394,802
MDExOlB1bGxSZXF1ZXN0NTA0MTUyNzkw
7,817
Fix missing reference titles in retrieval evaluation of RAG
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great!" ]
1,602
1,602
1,602
MEMBER
null
There's currently a bug in the retrieval evaluation script of RAG where for each sample, the first reference title is discarded. Moreover only `k` reference titles were taken into account while we need all of them to compute the Precision @ k. Because of this bug the index provided by the RAG team (hnsw M=128 + SQ8 with the "inner product to L2" trick) only had 35% of Precision @ k. With this fix it is back to 70%, which is consistent with what they had internally as far as I know. Apparently the original parsing script used to add the question before the titles, that's why it was discarding the first entry. cc @ola13
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7817/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7817", "html_url": "https://github.com/huggingface/transformers/pull/7817", "diff_url": "https://github.com/huggingface/transformers/pull/7817.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7817.patch", "merged_at": 1602836149000 }
https://api.github.com/repos/huggingface/transformers/issues/7816
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7816/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7816/comments
https://api.github.com/repos/huggingface/transformers/issues/7816/events
https://github.com/huggingface/transformers/issues/7816
722,394,158
MDU6SXNzdWU3MjIzOTQxNTg=
7,816
RAG - MissingIndex: Index with index_name 'embeddings' not initialized yet
{ "login": "ioannist", "id": 6544125, "node_id": "MDQ6VXNlcjY1NDQxMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6544125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioannist", "html_url": "https://github.com/ioannist", "followers_url": "https://api.github.com/users/ioannist/followers", "following_url": "https://api.github.com/users/ioannist/following{/other_user}", "gists_url": "https://api.github.com/users/ioannist/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioannist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioannist/subscriptions", "organizations_url": "https://api.github.com/users/ioannist/orgs", "repos_url": "https://api.github.com/users/ioannist/repos", "events_url": "https://api.github.com/users/ioannist/events{/privacy}", "received_events_url": "https://api.github.com/users/ioannist/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "I was able to get passed this error by using my own RagRetriever instead of RagPyTorchDistributedRetriever inside \"transformers/examples/rag/finetune.py\"\r\n\r\nI am also using my own custom dataset and index #7763 \r\n\r\nThe following changes got me past the missing index error. However, I have no idea if this is efficient or if I am doing something that I shouldn't be doing...\r\n\r\n```\r\nif self.is_rag_model:\r\n if args.prefix is not None:\r\n config.generator.prefix = args.prefix\r\n config.label_smoothing = hparams.label_smoothing\r\n hparams, config.generator = set_extra_model_params(extra_model_params, hparams, config.generator)\r\n\r\n # commented out this line\r\n # retriever = RagPyTorchDistributedRetriever.from_pretrained(hparams.model_name_or_path, config=config)\r\n \r\n ############### new stuff ###############\r\n dataset = load_from_disk(args.passages_path) # to reload the dataset\r\n dataset.load_faiss_index(\"embeddings\", args.index_path) # to reload the index\r\n retriever = RagRetriever.from_pretrained(\r\n hparams.model_name_or_path, index_name=\"custom\", indexed_dataset=dataset\r\n )\r\n ######################################\r\n\r\n model = self.model_class.from_pretrained(hparams.model_name_or_path, config=config, retriever=retriever)\r\n prefix = config.question_encoder.prefix\r\n\r\n```", "Won't have the time in the next 1,2 weeks to take a closer look sadly. Maybe @lhoestq this is interesting to you", "Could you paste the full stacktrace ?", "Thank you @lhoestq .\r\n\r\n```\r\nGPU available: True, used: True\r\nINFO:lightning:GPU available: True, used: True\r\nTPU available: False, using: 0 TPU cores\r\nINFO:lightning:TPU available: False, using: 0 TPU cores\r\nLOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\r\nINFO:lightning:LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\r\nUsing native 16bit precision.\r\nINFO:lightning:Using native 16bit precision.\r\nValidation sanity check: 0it [00:00, ?it/s]Traceback (most recent call last):\r\n File \"examples/rag/finetune.py\", line 499, in <module>\r\n main(args)\r\n File \"examples/rag/finetune.py\", line 471, in main\r\n logger=logger,\r\n File \"/home/ioannis/Desktop/transformers-lhoestq-2/transformers/examples/lightning_base.py\", line 384, in generic_train\r\n trainer.fit(model)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 440, in fit\r\n results = self.accelerator_backend.train()\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 54, in train\r\n results = self.train_or_test()\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 66, in train_or_test\r\n results = self.trainer.train()\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 462, in train\r\n self.run_sanity_check(self.get_model())\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 648, in run_sanity_check\r\n _, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)\r\n File 
\"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 568, in run_evaluation\r\n output = self.evaluation_loop.evaluation_step(test_mode, batch, batch_idx, dataloader_idx)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py\", line 171, in evaluation_step\r\n output = self.trainer.accelerator_backend.validation_step(args)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 76, in validation_step\r\n output = self.__validation_step(args)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py\", line 86, in __validation_step\r\n output = self.trainer.model.validation_step(*args)\r\n File \"examples/rag/finetune.py\", line 240, in validation_step\r\n return self._generative_step(batch)\r\n File \"examples/rag/finetune.py\", line 280, in _generative_step\r\n max_length=self.target_lens[\"val\"],\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 15, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/modeling_rag.py\", line 873, in generate\r\n return_tensors=\"pt\",\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py\", line 600, in __call__\r\n retrieved_doc_embeds, doc_ids, docs = self.retrieve(question_hidden_states, n_docs)\r\n File \"/home/ioannis/Desktop/transformers-lhoestq-2/transformers/examples/rag/distributed_retriever.py\", line 115, in retrieve\r\n doc_ids, retrieved_doc_embeds = self._main_retrieve(question_hidden_states, n_docs)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py\", line 521, in _main_retrieve\r\n ids, vectors = self.index.get_top_docs(question_hidden_states, n_docs)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/transformers/retrieval_rag.py\", line 226, in get_top_docs\r\n _, ids = self.dataset.search_batch(\"embeddings\", question_hidden_states, n_docs)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/datasets/search.py\", line 607, in search_batch\r\n self._check_index_is_initialized(index_name)\r\n File \"/home/ioannis/anaconda3/envs/transformers-custom-dataset-in-rag-retriever/lib/python3.7/site-packages/datasets/search.py\", line 358, in _check_index_is_initialized\r\n f\"Index with index_name '{index_name}' not initialized yet. Please make sure that you call `add_faiss_index` or `add_elasticsearch_index` first.\"\r\ndatasets.search.MissingIndex: Index with index_name 'embeddings' not initialized yet. 
Please make sure that you call `add_faiss_index` or `add_elasticsearch_index` first.\r\n```\r\n", "> I was able to get passed this error by using my own RagRetriever instead of RagPyTorchDistributedRetriever inside \"transformers/examples/rag/finetune.py\"\r\n> \r\n> I am also using my own custom dataset and index #7763\r\n> \r\n> The following changes got me past the missing index error. However, I have no idea if this is efficient or if I am doing something that I shouldn't be doing...\r\n> \r\n> ```\r\n> if self.is_rag_model:\r\n> if args.prefix is not None:\r\n> config.generator.prefix = args.prefix\r\n> config.label_smoothing = hparams.label_smoothing\r\n> hparams, config.generator = set_extra_model_params(extra_model_params, hparams, config.generator)\r\n> \r\n> # commented out this line\r\n> # retriever = RagPyTorchDistributedRetriever.from_pretrained(hparams.model_name_or_path, config=config)\r\n> \r\n> ############### new stuff ###############\r\n> dataset = load_from_disk(args.passages_path) # to reload the dataset\r\n> dataset.load_faiss_index(\"embeddings\", args.index_path) # to reload the index\r\n> retriever = RagRetriever.from_pretrained(\r\n> hparams.model_name_or_path, index_name=\"custom\", indexed_dataset=dataset\r\n> )\r\n> ######################################\r\n> \r\n> model = self.model_class.from_pretrained(hparams.model_name_or_path, config=config, retriever=retriever)\r\n> prefix = config.question_encoder.prefix\r\n> ```\r\n\r\nThe above code seems to work (runs out of GPU memory in my local machine, so I am in the process of testing it on a server - will keep you posted).\r\n\r\nI noticed that the retrieval step 4 in _/examples/rag/use_own_knowledge_dataset.py_ takes a few minutes for every question, so I tried passing in _device=0_ to faiss to move it from cpu to gpu. I got this:\r\n\r\n`Faiss assertion 'blasStatus == CUBLAS_STATUS_SUCCESS' failed in virtual void faiss::gpu::StandardGpuResources::initializeForDevice(int) at gpu/StandardGpuResources.cpp:248`\r\n\r\nThe idea was to speed it up because I don't see how the finetuning can take place with such a slow index, but I might have misunderstood.", "Seems like my attempt to replace RagPyTorchDistributedRetriever with RagRetriever (in an 8 GPU machine) fails too. Too good to be true :)\r\n\r\n```\r\nloading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/question_encoder_tokenizer/tokenizer_config.json from cache at /home/ubuntu/.cache/torch/transformers/8ade9cf561f8c0a47d1c3785e850c57414d776b3795e21bd01e58483399d2de4.11f57497ee659e26f830788489816dbcb678d91ae48c06c50c9dc0e4438ec05b\r\nModel name 'facebook/rag-sequence-base/generator_tokenizer' not found in model shortcut name list (facebook/bart-base, facebook/bart-large, facebook/bart-large-mnli, facebook/bart-large-cnn, facebook/bart-large-xsum, yjernite/bart_eli5). 
Assuming 'facebook/rag-sequence-base/generator_tokenizer' is a path, a model identifier, or url to a directory containing tokenizer files.\r\nloading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/vocab.json from cache at /home/ubuntu/.cache/torch/transformers/3b9637b6eab4a48cf2bc596e5992aebb74de6e32c9ee660a27366a63a8020557.6a4061e8fc00057d21d80413635a86fdcf55b6e7594ad9e25257d2f99a02f4be\r\nloading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/merges.txt from cache at /home/ubuntu/.cache/torch/transformers/b2a6adcb3b8a4c39e056d80a133951b99a56010158602cf85dee775936690c6a.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda\r\nloading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/added_tokens.json from cache at None\r\nloading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/special_tokens_map.json from cache at /home/ubuntu/.cache/torch/transformers/342599872fb2f45f954699d3c67790c33b574cc552a4b433fedddc97e6a3c58e.6e217123a3ada61145de1f20b1443a1ec9aac93492a4bd1ce6a695935f0fd97a\r\nloading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/tokenizer_config.json from cache at /home/ubuntu/.cache/torch/transformers/e5f72dc4c0b1ba585d7afb7fa5e3e52ff0e1f101e49572e2caaf38fab070d4d6.d596a549211eb890d3bb341f3a03307b199bc2d5ed81b3451618cbcb04d1f1bc\r\nloading file https://s3.amazonaws.com/models.huggingface.co/bert/facebook/rag-sequence-base/generator_tokenizer/tokenizer.json from cache at None\r\nUsing native 16bit precision.\r\nINFO:lightning:Using native 16bit precision.\r\nINFO:__main__:Custom init_ddp_connection.\r\ninitializing ddp: GLOBAL_RANK: 7, MEMBER: 8/8\r\nINFO:lightning:initializing ddp: GLOBAL_RANK: 7, MEMBER: 8/8\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/transformers/examples/rag/finetune.py\", line 519, in <module>\r\n main(args)\r\n File \"/home/ubuntu/transformers/examples/rag/finetune.py\", line 491, in main\r\n logger=logger,\r\n File \"/home/ubuntu/transformers/examples/lightning_base.py\", line 384, in generic_train\r\n trainer.fit(model)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py\", line 48, in wrapped_fn\r\n result = fn(self, *args, **kwargs)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1046, in fit\r\n self.accelerator_backend.train(model)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py\", line 57, in train\r\n self.ddp_train(process_idx=self.task_idx, mp_queue=None, model=model)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py\", line 164, in ddp_train\r\n self.trainer.is_slurm_managing_tasks\r\n File \"/home/ubuntu/transformers/examples/rag/finetune.py\", line 180, in init_ddp_connection\r\n self.model.retriever.init_retrieval(self.distributed_port)\r\nTypeError: init_retrieval() takes 1 positional argument but 2 were given\r\nTraceback (most recent call last):\r\n File \"examples/rag/finetune.py\", line 519, in <module>\r\n main(args)\r\n File \"examples/rag/finetune.py\", line 491, in main\r\n logger=logger,\r\n File \"/home/ubuntu/transformers/examples/lightning_base.py\", line 384, in 
generic_train\r\n trainer.fit(model)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py\", line 48, in wrapped_fn\r\n result = fn(self, *args, **kwargs)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py\", line 1058, in fit\r\n results = self.accelerator_backend.spawn_ddp_children(model)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py\", line 123, in spawn_ddp_children\r\n results = self.ddp_train(local_rank, mp_queue=None, model=model, is_master=True)\r\n File \"/home/ubuntu/anaconda3/envs/tran/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py\", line 164, in ddp_train\r\n self.trainer.is_slurm_managing_tasks\r\n File \"examples/rag/finetune.py\", line 180, in init_ddp_connection\r\n self.model.retriever.init_retrieval(self.distributed_port)\r\nTypeError: init_retrieval() takes 1 positional argument but 2 were given\r\n```\r\n", "There are differences between the regular retriever and the distributed retriever:\r\n- the distributed retriever's index is only initialized on rank 0\r\n- the different processes communicate with rank 0 to do retrieval\r\n- the init_retrieval method requires to specify a port on which retrieval communication happen\r\n\r\nLet me know if you figure out a way to make it work in your case", "Will do, though I guess it's easier to go back to trying to make it work with _RagPyTorchDistributedRetriever_.\r\n\r\nTried adding _dataset.load_faiss_index_ inside get_dataset in finetune.py, but... 'Seq2SeqDataset' object has no attribute 'load_faiss_index'\r\n", "The Seq2SeqDataset is the one the model is trained *on*. The knowledge dataset is stored inside the retriever.\r\nThe `MissingIndex` error must come from init_retrieval not being called on the retriever in the process 0, or that the index is not properly loaded.", "Hi @lhoestq @patrickvonplaten any update on this? I'm also running into this issue when running finetune.sh. Though I am able to get the legacy index to work.", "@amogkam I also get the same error when trying to run fine-tuning. I also got an error saying self.opt is not there, but I did solve it. \r\n\r\nWhat do you mean by legacy index?", "I'll investigate this error this week. I'll let you know how it goes", "@lhoestq \r\n\r\nI actually did change the initialization in [this line (retrieval_rag.py)](https://github.com/huggingface/transformers/blob/master/src/transformers/retrieval_rag.py#L276).\r\n\r\nself.dataset_name, with_index=True,index_name=exact, split=self.dataset_split, dummy=self.use_dummy_dataset", "That's good to know thanks !\r\nHowever for the RagPyTorchDistributedRetriever we need to load the index only on the process 0 and keep `with_index=False` for the other processes. 
Ideally we have `with_index=False` in the `__init__` and `with_index=True` in `init_index`", "Oh get it!\n\nOn Tue, Nov 10, 2020, 04:56 Quentin Lhoest <[email protected]> wrote:\n\n> That's good to know thanks !\n> However for the RagPyTorchDistributedRetriever we need to load the index\n> only on the process 0 and keep with_index=False for the other processes.\n> Ideally we have with_index=False in the __init__ and with_index=True in\n> init_index\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724102348>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGQAIXISSTMTQW6L6KDSPAGKLANCNFSM4SSCPP5Q>\n> .\n>\n", "Sorry for spamming. I find it hard to understand index_name and index_paths\nwhen loading the datasets with fairsis\n\nOn Tue, Nov 10, 2020, 04:56 Quentin Lhoest <[email protected]> wrote:\n\n> That's good to know thanks !\n> However for the RagPyTorchDistributedRetriever we need to load the index\n> only on the process 0 and keep with_index=False for the other processes.\n> Ideally we have with_index=False in the __init__ and with_index=True in\n> init_index\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724102348>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGQAIXISSTMTQW6L6KDSPAGKLANCNFSM4SSCPP5Q>\n> .\n>\n", "You can specify index_name if you want to use one the index that comes with the dataset (exact/compressed), OR you can use index_path to use your own local index file.", "So the index name is like a column right ? Which controls whether thah\ncolumn should get loaded in to memory or not ?\n\nOn Wed, Nov 11, 2020, 02:35 Quentin Lhoest <[email protected]> wrote:\n\n> You can specify index_name if you want to use one the index that comes\n> with the dataset (exact/compressed), OR you can use index_path to use your\n> own local index file.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724705036>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGUF5V5GO2KCEGIBH7TSPE6RHANCNFSM4SSCPP5Q>\n> .\n>\n", "In the RAG configuration you can specify index_name=\"exact\" or index_name=\"compressed\" for the \"wiki_dpr\" dataset. Wiki_dpr has indeed those two types of index. For more info you can check the docs of the [RagConfig](https://huggingface.co/transformers/model_doc/rag.html#ragconfig)\r\n\r\nOn the other hand in the datasets library and in particular in `Dataset.add_faiss_index` you can also see an \"index_name\" parameter. However this one is different from the one used in the RAG configuration on transformers side. In the datasets library, each dataset can have several indexes that are identified by their names, and by default their names correspond to the column that was used to build the index. See the docs of [the add_faiss_index method](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.add_faiss_index)\r\n\r\nThis is unfortunately the same variable name but not for the same purpose...\r\nDoes that make sense to you ?", "Thanks a lot. I got the idea. 
\r\n\r\n@lhoestq \r\n*Btw I tried to run the rag fine-tuning script\r\nwith a lower PyTorch lightning (0.9) version and it worked. I think the issue comes\r\nwith version miss-match.*\r\n\r\nOn Wed, Nov 11, 2020, 02:46 Quentin Lhoest <[email protected]> wrote:\r\n\r\n> In the RAG configuration you can specify index_name=\"exact\" or\r\n> index_name=\"compressed\" for the \"wiki_dpr\" dataset. Wiki_dpr has indeed\r\n> those two types of index. For more info you can check the docs of the\r\n> RagConfig\r\n> <https://huggingface.co/transformers/model_doc/rag.html#ragconfig>\r\n>\r\n> On the other hand in the datasets library and in particular in\r\n> Dataset.add_faiss_index you can also see an \"index_name\" parameter.\r\n> However this one is different from the one used in the RAG configuration on\r\n> transformers side. In the datasets library, each dataset can have several\r\n> indexes that are identified by their names, and by default their names\r\n> correspond to the column that was used to build the index. See the docs of the\r\n> add_faiss_index method\r\n> <https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.add_faiss_index>\r\n>\r\n> This is unfortunately the same variable name but not for the same\r\n> purpose...\r\n> Does that make sense to you ?\r\n>\r\n> —\r\n> You are receiving this because you commented.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/7816#issuecomment-724711118>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGUB6XY3LU3EBGEV2D3SPE7Z3ANCNFSM4SSCPP5Q>\r\n> .\r\n>\r\n", "I managed to reproduce the issue, I'm working on a fix", "Perfect.\n\nOn Fri, Nov 13, 2020, 00:12 Quentin Lhoest <[email protected]> wrote:\n\n> I managed to reproduce the issue, I'm working on a fix\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7816#issuecomment-726012277>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGUUQ6UV4KCAZGRLCWTSPO7LDANCNFSM4SSCPP5Q>\n> .\n>\n", "Update: it looks like it's because pytorch lightning removed the init_ddp_connection hook of their LightningModule.\r\nThe hook was used to initialize the index on process 0.\r\nI'll use something else to initialize the index.", "Ok, that's why the code still works with PL 0.9. \r\n\r\nSo now the problem is the initialization of the index in this [line](https://github.com/huggingface/transformers/blob/master/src/transformers/retrieval_rag.py#L274) ?\r\n\r\n\r\nThanks a lot.", "@lhoestq any update with this, please? \r\n\r\np.s sorry for spamming :) ", "Yes I'm working on a fix ! I'll make a PR tomorrow", "Thanks a lot. :) " ]
1,602
1,605
1,605
NONE
null
## Environment info transformers version: 3.3.1 Platform: Ubuntu Python version:3.6.12 PyTorch version (GPU: yes): 1.6.0 Using GPU in script?: 1 gpu Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten @sgugger ## Information model name: facebook/rag-sequence-base The problem arises when using the official example scripts: (give details below) The tasks I am working on is my own task or dataset: (give details below) ## To reproduce 1) Make directory at examples/rag/ioannis-data and add train eval and test files in the directory 2) Run transformers/examples/rag/finetune.sh with following changes: --data_dir examples/rag/ioannis-data \ --output_dir examples/rag/ioannis-output \ --model_name_or_path facebook/rag-sequence-base The script terminates with the following error: `datasets.search.MissingIndex: Index with index_name 'embeddings' not initialized yet. Please make sure that you call 'add_faiss_index' or 'add_elasticsearch_index' first.`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7816/timeline
completed
null
null
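The workaround that unblocked the reporter, consolidated from the first comment in the thread, is to build a plain `RagRetriever` over the locally saved dataset with its faiss index loaded explicitly, instead of relying on `RagPyTorchDistributedRetriever`, whose index is only initialized on rank 0. A sketch under the assumption that the passages and index were produced by `use_own_knowledge_dataset.py`; the paths and the checkpoint name below are placeholders.

```python
from datasets import load_from_disk
from transformers import RagRetriever, RagSequenceForGeneration

passages_path = "path/to/my_knowledge_dataset"            # placeholder
index_path = "path/to/my_knowledge_dataset_index.faiss"   # placeholder

dataset = load_from_disk(passages_path)              # reload the passages dataset
dataset.load_faiss_index("embeddings", index_path)   # reload the faiss index

retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-base", index_name="custom", indexed_dataset=dataset
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-base", retriever=retriever
)
```

As the maintainers point out above, this sidesteps the distributed retriever entirely (every process holds its own index), so it is a workaround rather than an equivalent of the intended rank-0 retrieval setup; the thread ends with a maintainer tracing the error to PyTorch Lightning removing the `init_ddp_connection` hook and preparing a proper fix.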
https://api.github.com/repos/huggingface/transformers/issues/7815
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7815/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7815/comments
https://api.github.com/repos/huggingface/transformers/issues/7815/events
https://github.com/huggingface/transformers/issues/7815
722,383,634
MDU6SXNzdWU3MjIzODM2MzQ=
7,815
"Cannot re-initialize CUDA in forked subprocess." error when running "transformers/notebooks/05-benchmark.ipynb" notebook
{ "login": "ClaartjeBarkhof", "id": 25668035, "node_id": "MDQ6VXNlcjI1NjY4MDM1", "avatar_url": "https://avatars.githubusercontent.com/u/25668035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ClaartjeBarkhof", "html_url": "https://github.com/ClaartjeBarkhof", "followers_url": "https://api.github.com/users/ClaartjeBarkhof/followers", "following_url": "https://api.github.com/users/ClaartjeBarkhof/following{/other_user}", "gists_url": "https://api.github.com/users/ClaartjeBarkhof/gists{/gist_id}", "starred_url": "https://api.github.com/users/ClaartjeBarkhof/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ClaartjeBarkhof/subscriptions", "organizations_url": "https://api.github.com/users/ClaartjeBarkhof/orgs", "repos_url": "https://api.github.com/users/ClaartjeBarkhof/repos", "events_url": "https://api.github.com/users/ClaartjeBarkhof/events{/privacy}", "received_events_url": "https://api.github.com/users/ClaartjeBarkhof/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Might be of interest to @patrickvonplaten ", "Facing same issue here.", "Try this.\r\nYou can check other parameters for PyTorchBenchmarkArguments at [here](https://github.com/huggingface/transformers/blob/acf56408d81fdc04a32af139a45ae7b76e0c5b0d/src/transformers/benchmark/benchmark_args_utils.py#L34-L123).\r\n\r\n```python\r\n# main.py\r\ndef main():\r\n from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments\r\n\r\n args = PyTorchBenchmarkArguments(models=[\"bert-base-uncased\"],\r\n batch_sizes=[8],\r\n sequence_lengths=[8, 32, 128, 512],\r\n multi_process=False)\r\n print(args.do_multi_processing)\r\n benchmark = PyTorchBenchmark(args)\r\n results = benchmark.run()\r\n print(results)\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n```\r\nCUDA_VISIBLE_DEVICES=0 python main.py\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,602
1,614
1,614
NONE
null
## Environment info I am getting this error on a server, but also on Collab, so giving the Collab specs: - `transformers` version: 3.3.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No? And on my server: - `transformers` version: 3.3.1 - Platform: Linux-4.19.0-11-amd64-x86_64-with-debian-10.6 - Python version: 3.6.12 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No? ### Who can help @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Any model. The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. [Run the colab on benchmarking provided on the transformers GitHub](https://github.com/huggingface/transformers/blob/master/notebooks/05-benchmark.ipynb). ``` 2020-10-15 14:20:38.078717: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 1 / 5 Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method Traceback (most recent call last): File "run_benchmark.py", line 47, in <module> main() File "run_benchmark.py", line 43, in main benchmark.run() File "/usr/local/lib/python3.6/dist-packages/transformers/benchmark/benchmark_utils.py", line 674, in run memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length) ValueError: too many values to unpack (expected 2) ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7815/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7814
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7814/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7814/comments
https://api.github.com/repos/huggingface/transformers/issues/7814/events
https://github.com/huggingface/transformers/issues/7814
722,383,263
MDU6SXNzdWU3MjIzODMyNjM=
7,814
BART/TFBart: allow decoder_input_ids.shape[-1] > 1 + use_cache = True
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,602
1,607
1,607
CONTRIBUTOR
null
There is an edge case fixed in https://github.com/huggingface/transformers/pull/4581. Try to apply the same logic to BART: move the input_ids slicing into prepare_inputs_for_generation.
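The request above is to move the `decoder_input_ids` slicing into `prepare_inputs_for_generation`, following the approach taken in the linked PR. A rough sketch of that pattern for a cached seq2seq decoder; the signature mirrors the usual `prepare_inputs_for_generation` method but this is not the exact Bart implementation:

```python
# Sketch only: when a cache (`past`) is available, only the last generated token needs
# to be fed to the decoder, since earlier positions are already stored in the cache.
def prepare_inputs_for_generation(self, decoder_input_ids, past=None, attention_mask=None,
                                  use_cache=None, encoder_outputs=None, **kwargs):
    if past is not None:
        # keep only the final token; this is the slicing being moved into this method
        decoder_input_ids = decoder_input_ids[:, -1:]
    return {
        "input_ids": None,  # the encoder outputs are already computed
        "encoder_outputs": encoder_outputs,
        "past_key_values": past,
        "decoder_input_ids": decoder_input_ids,
        "attention_mask": attention_mask,
        "use_cache": use_cache,
    }
```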
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7814/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7813
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7813/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7813/comments
https://api.github.com/repos/huggingface/transformers/issues/7813/events
https://github.com/huggingface/transformers/pull/7813
722,375,962
MDExOlB1bGxSZXF1ZXN0NTA0MTM3MTAw
7,813
Small fixes to NotebookProgressCallback
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
COLLABORATOR
null
# What does this PR do?
Fixes a few things in the `NotebookProgressCallback`, mainly:
- triggering forced updates after an evaluation (as an evaluation is longer than just a training step)
- fixing the behavior when an evaluation strategy is not epochs
- removing empty except in the `is_in_notebook` test
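The last item above refers to the `is_in_notebook` check; a minimal sketch of such a probe with explicit exception types instead of a bare `except` (illustrative only, not the library's exact code):

```python
def is_in_notebook():
    # Returns True when running inside a Jupyter notebook kernel, catching only the
    # exceptions that can actually occur instead of using a bare `except`.
    try:
        from IPython import get_ipython
        return get_ipython().__class__.__name__ == "ZMQInteractiveShell"
    except (ImportError, AttributeError):
        return False
```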
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7813/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7813/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7813", "html_url": "https://github.com/huggingface/transformers/pull/7813", "diff_url": "https://github.com/huggingface/transformers/pull/7813.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7813.patch", "merged_at": 1602772234000 }
https://api.github.com/repos/huggingface/transformers/issues/7812
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7812/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7812/comments
https://api.github.com/repos/huggingface/transformers/issues/7812/events
https://github.com/huggingface/transformers/issues/7812
722,339,815
MDU6SXNzdWU3MjIzMzk4MTU=
7,812
the results of run_squad.py is terrible
{ "login": "ppyu", "id": 32732750, "node_id": "MDQ6VXNlcjMyNzMyNzUw", "avatar_url": "https://avatars.githubusercontent.com/u/32732750?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ppyu", "html_url": "https://github.com/ppyu", "followers_url": "https://api.github.com/users/ppyu/followers", "following_url": "https://api.github.com/users/ppyu/following{/other_user}", "gists_url": "https://api.github.com/users/ppyu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ppyu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ppyu/subscriptions", "organizations_url": "https://api.github.com/users/ppyu/orgs", "repos_url": "https://api.github.com/users/ppyu/repos", "events_url": "https://api.github.com/users/ppyu/events{/privacy}", "received_events_url": "https://api.github.com/users/ppyu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Maybe I not use the argument `--version_2_with_negative` while training on Squad V2.0 ?", "Indeed, not using that clarg would result in such bad results as you're not taking into account impossible answers. Can you let us know if you obtain better results with `--version_2_with_negative`?", "> Indeed, not using that clarg would result in such bad results as you're not taking into account impossible answers. Can you let us know if you obtain better results with `--version_2_with_negative`?\r\n\r\nAfter using `--version_2_with_negative`, the result is also not satisfactory.\r\nBoth exact and f1 are not exceed **80%**.\r\n\r\nThe results are as follows:\r\n{\"exact_20000\": 73.88191695443443, \"f1_20000\": 76.9165287314905, \"total_20000\": 11873, \"HasAns_exact_20000\": 68.60661268556005, \"HasAns_f1_20000\": 74.68453873633385, \"HasAns_total_20000\": 5928, \"NoAns_exact_20000\": 79.1421362489487, \"NoAns_f1_20000\": 79.1421362489487, \"NoAns_total_20000\": 5945, \"best_exact_20000\": 73.88191695443443, \"best_exact_thresh_20000\": 0.0, \"best_f1_20000\": 76.91652873149043,}\r\n\r\nAfter the 25000th steps of training, the performance got worse.\r\n", "That's already a lot better! What is this clarg `--squad_model bert_qa_model_squad`? It's not in the `run_squad.py` script?", "\r\n\r\n\r\n> That's already a lot better! What is this clarg `--squad_model bert_qa_model_squad`? It's not in the `run_squad.py` script?\r\n\r\nOh ,I have adjusted some code from `run_squad.py` so that it could load and train my custom model from clarg.\r\nAnd `bert_qa_model_squad` is the model whose code is copied from `BertForQuestionAnswering` in `transformers`.\r\nSo that I can design my custom model and then transmit it to the `run_squad.py` script.", "Does your script differ in any other way to the `run_squad.py` script? Does your model differ in any way to the `BertForQuestionAnswering` model in `transformers`? Have you tried running the standard command on the `transformers` script and model?\r\n\r\n```\r\npython run_squad.py\r\n--model_name_or_path bert-base-uncased\r\n--do_train\r\n--do_eval\r\n--do_lower_case\r\n--train_file $SQUAD_DIR/train-v2.0.json\r\n--predict_file $SQUAD_DIR/dev-v2.0.json\r\n--version_2_with_negative\r\n--per_gpu_train_batch_size 12\r\n--learning_rate 3e-5\r\n--num_train_epochs 2.0\r\n--max_seq_length 384\r\n--doc_stride 128\r\n--output_dir ./models/debug_squad/\r\n```", "> Does your script differ in any other way to the `run_squad.py` script? Does your model differ in any way to the `BertForQuestionAnswering` model in `transformers`? Have you tried running the standard command on the `transformers` script and model?\r\n> \r\n> ```\r\n> python run_squad.py\r\n> --model_name_or_path bert-base-uncased\r\n> --do_train\r\n> --do_eval\r\n> --do_lower_case\r\n> --train_file $SQUAD_DIR/train-v2.0.json\r\n> --predict_file $SQUAD_DIR/dev-v2.0.json\r\n> --version_2_with_negative\r\n> --per_gpu_train_batch_size 12\r\n> --learning_rate 3e-5\r\n> --num_train_epochs 2.0\r\n> --max_seq_length 384\r\n> --doc_stride 128\r\n> --output_dir ./models/debug_squad/\r\n> ```\r\n\r\nsorry, I will try later and tell you the results.", "> Does your script differ in any other way to the `run_squad.py` script? Does your model differ in any way to the `BertForQuestionAnswering` model in `transformers`? 
Have you tried running the standard command on the `transformers` script and model?\r\n> \r\n> ```\r\n> python run_squad.py\r\n> --model_name_or_path bert-base-uncased\r\n> --do_train\r\n> --do_eval\r\n> --do_lower_case\r\n> --train_file $SQUAD_DIR/train-v2.0.json\r\n> --predict_file $SQUAD_DIR/dev-v2.0.json\r\n> --version_2_with_negative\r\n> --per_gpu_train_batch_size 12\r\n> --learning_rate 3e-5\r\n> --num_train_epochs 2.0\r\n> --max_seq_length 384\r\n> --doc_stride 128\r\n> --output_dir ./models/debug_squad/\r\n> ```\r\nI have run the official `run_squad.py` script,and the result is as follows:\r\n10/16/2020 19:34:00 - INFO - main - Results: {'exact': 72.60170133917292, 'f1': 75.79599520268259, 'total': 11873, 'HasAns_exact': 72.2165991902834, 'HasAns_f1': 78.61434734167507, 'HasAns_total': 5928, 'NoAns_exact': 72.9857022708158, 'NoAns_f1': 72.9857022708158, 'NoAns_total': 5945, 'best_exact': 72.60170133917292, 'best_exact_thresh': 0.0, 'best_f1': 75.79599520268256, 'best_f1_thresh': 0.0}\r\n", "i have ran the official run_squad.py script, but got a 'cached_train_bert-base-uncased_384' not a result. Did i something wrong?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,610
1,610
NONE
null
# ❓ The results of run_squad.py are terrible

## I ran run_squad.py and the results are as follows:
`Results: {'exact': 40.25098964036048, 'f1': 44.224848486777795, 'total': 11873, 'HasAns_exact': 80.61740890688259, 'HasAns_f1': 88.57652261867625, 'HasAns_total': 5928, 'NoAns_exact': 0.0, 'NoAns_f1': 0.0, 'NoAns_total': 5945, 'best_exact': 50.11370336056599, 'best_exact_thresh': 0.0, 'best_f1': 50.11370336056599, 'best_f1_thresh': 0.0}`

_**`exact`, `f1` and `HasAns_exact`, `HasAns_f1` are very different!**_

## The startup args are as follows:
export SQUAD_DIR=./datasets/squad
python run_squad.py \
  --squad_model bert_qa_model_squad \
  --model_type bert \
  --model_name_or_path bert-base-uncased \
  --do_train \
  --do_eval \
  --do_lower_case \
  --train_file $SQUAD_DIR/train-v2.0.json \
  --predict_file $SQUAD_DIR/dev-v2.0.json \
  --per_gpu_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir ./models/debug_squad/
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7812/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7812/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7811
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7811/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7811/comments
https://api.github.com/repos/huggingface/transformers/issues/7811/events
https://github.com/huggingface/transformers/pull/7811
722,300,755
MDExOlB1bGxSZXF1ZXN0NTA0MDc0NzIx
7,811
Fix issue #7781
{ "login": "jsilter", "id": 603941, "node_id": "MDQ6VXNlcjYwMzk0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/603941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jsilter", "html_url": "https://github.com/jsilter", "followers_url": "https://api.github.com/users/jsilter/followers", "following_url": "https://api.github.com/users/jsilter/following{/other_user}", "gists_url": "https://api.github.com/users/jsilter/gists{/gist_id}", "starred_url": "https://api.github.com/users/jsilter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsilter/subscriptions", "organizations_url": "https://api.github.com/users/jsilter/orgs", "repos_url": "https://api.github.com/users/jsilter/repos", "events_url": "https://api.github.com/users/jsilter/events{/privacy}", "received_events_url": "https://api.github.com/users/jsilter/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @jsilter - sorry this fix was already pushed yesterday in #7903. Thanks for spotting the bug!!!" ]
1,602
1,603
1,603
NONE
null
`decoder_config` may not be defined at this line; use `kwargs_decoder["config"]` instead. This is also consistent with the rest of the function.

Fixes #7781: https://github.com/huggingface/transformers/issues/7781

## Who can review?
Anybody can review, though @patrickvonplaten gave an informal approval in the issue discussion.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7811/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7811", "html_url": "https://github.com/huggingface/transformers/pull/7811", "diff_url": "https://github.com/huggingface/transformers/pull/7811.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7811.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7810
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7810/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7810/comments
https://api.github.com/repos/huggingface/transformers/issues/7810/events
https://github.com/huggingface/transformers/issues/7810
722,295,503
MDU6SXNzdWU3MjIyOTU1MDM=
7,810
Metrics calculate error: can only calculate the mean of floating types. Got Bool instead
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "repos_url": "https://api.github.com/users/lucadiliello/repos", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Great, this looks like the correct fix! Do you want to open a PR?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
## Environment info
- `transformers` version: 3.3.1
- Platform: Ubuntu 18.04
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 GPU
- Tensorflow version (GPU?): -
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes

### Who can help
Someone working on the examples/GLUE benchmarks

## Information
Model I am using (Bert, XLNet ...): BERT

The problem arises when using:
* [X] the official example scripts
* [ ] my own modified scripts

The task I am working on is:
* [X] an official GLUE/SQuAD task: GLUE
* [ ] my own task or dataset

## To reproduce
Steps to reproduce the behavior:
1. Run a QNLI experiment

Error:
```bash
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/transformers/data/metrics/__init__.py", line 73, in glue_compute_metrics
    return {"acc": simple_accuracy(preds, labels)}
File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/transformers/data/metrics/__init__.py", line 34, in simple_accuracy
    return (preds == labels).mean()
RuntimeError: Can only calculate the mean of floating types. Got Bool instead.
Exception ignored in: <function tqdm.__del__ at 0x7f8d1d09ff28>
Traceback (most recent call last):
  File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1086, in __del__
  File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1293, in close
  File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1471, in display
  File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1089, in __repr__
  File "/home/lucadiliello/anaconda3/envs/nlp/lib/python3.7/site-packages/tqdm/std.py", line 1433, in format_dict
TypeError: cannot unpack non-iterable NoneType object
```
A simple fix is to modify `simple_accuracy` in `transformers/data/metrics/__init__.py` (line 34) to:
```python
return (preds == labels).to(dtype=torch.float32).mean()
```

## Expected behavior
The metric should be computed without problems.
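The fix proposed in the body only covers torch tensors; a small sketch of an accuracy helper that avoids the Bool-mean error for both NumPy arrays and torch tensors (a workaround sketch, not necessarily the change that was merged upstream):

```python
import numpy as np
import torch

def simple_accuracy(preds, labels):
    # Cast the boolean comparison to float before taking the mean, so torch does not
    # raise "Can only calculate the mean of floating types. Got Bool instead."
    if isinstance(preds, torch.Tensor):
        return (preds == labels).float().mean().item()
    return float((np.asarray(preds) == np.asarray(labels)).mean())
```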
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7810/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7809
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7809/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7809/comments
https://api.github.com/repos/huggingface/transformers/issues/7809/events
https://github.com/huggingface/transformers/pull/7809
722,295,302
MDExOlB1bGxSZXF1ZXN0NTA0MDcwMjA3
7,809
[Seq2Seq] Allow EncoderDecoderModels to be trained with Seq2Seq
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "LGTM, thanks for aligning it! We just need some way to pass `eval_beams` and `max_gen_length`.\r\n\r\n>We can have a `Seq2SeqTrainingArguments` that subclasses `TrainingArguments` if that helps.\r\n\r\n@sgugger we do have `Seq2SeqTrainingArguments` class\r\nhttps://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/examples/seq2seq/finetune_trainer.py#L37", "> @sgugger we do have `Seq2SeqTrainingArguments` class\r\n\r\nAh, had forgotten about that :-)", "`eval_beams`/`eval_max_gen_length` reasoning:\r\n@patil-suraj said exactly this LOL, but in my words:\r\nusers are not good at modifying configs locally. We want to have a way to run `num_beams=2` during the generation step, but then end up with a trained model with the default # beams. In general, we try not to manipulate config attributes that would only be desired during training.\r\n", "Also would <3 an encoder decoder test in `examples/seq2seq/test_finetune_trainer.py`. \r\n", "After discussion @sshleifer - changed the `Seq2SeqTrainer` to be fully backwards compatible and to work with EncoderDecoder.\r\n@sshleifer - cannot add EncDec test yet because the complete command line setup is too constrained (requires `prepare_seq2seq_batch` to be defined for all tokenizers, etc...) => will see how to add this in the future. \r\n\r\n@sshleifer , @patil-suraj - could you do another review please? :-) ", "Should be good - I don't really see how -100 would slow down the TPU, but let's wait for @LysandreJik opinion here. ", "Can't seem to reply to the comment, but yes, the line @sshleifer is pointing at will slow down on TPU since it's probably using a `torch.where` behind the scene which does not have an XLA operation AFAIK.", "> Can't seem to reply to the comment, but yes, the line @sshleifer is pointing at will slow down on TPU since it's probably using a `torch.where` behind the scene which does not have an XLA operation AFAIK.\r\n\r\nOkey, I see -> let's move back in the old CE loss function then to keep backward compatibility! \r\n\r\n@sshleifer - one last review please :-) " ]
1,602
1,603
1,603
MEMBER
null
# What does this PR do?
This PR changes the Seq2Seq Trainer a bit to:
1) Make it work with `EncoderDecoder`
2) Align its API more with the general `Trainer`

@sshleifer @patil-suraj @sgugger - it would be great if you could take a look and give your general opinion on it :-) If this would be ok for you, I will fix the examples test.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7809/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7809", "html_url": "https://github.com/huggingface/transformers/pull/7809", "diff_url": "https://github.com/huggingface/transformers/pull/7809.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7809.patch", "merged_at": 1603487152000 }
https://api.github.com/repos/huggingface/transformers/issues/7808
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7808/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7808/comments
https://api.github.com/repos/huggingface/transformers/issues/7808/events
https://github.com/huggingface/transformers/issues/7808
722,290,052
MDU6SXNzdWU3MjIyOTAwNTI=
7,808
Pipeline(summarization): CUDA error: an illegal memory access was encountered
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[ { "id": 1771187924, "node_id": "MDU6TGFiZWwxNzcxMTg3OTI0", "url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline", "name": "Core: Pipeline", "color": "FF7066", "default": false, "description": "Internals of the library; Pipeline." } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "I would guess that that traceback is either masking OOM or IndexError. You will get a better traceback if you try on CPU." ]
1,602
1,602
1,602
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Ubuntu 18 - Python version: 3.7 - PyTorch version (GPU?): 1.5.0 (YES) - Tensorflow version (GPU?): 2.2.0 (YES) - Using GPU in script?: YES - Using distributed or parallel set-up in script?: NO ### Who can help @sshleifer ## Information Model I am using (Bert, XLNet ...): BART for Summarization (pipeline) The problem arises when using: ```python class Summarizer: def __init__(self, min_length: int = 10, max_length: int = 120, device=1): self.summarizer = pipeline("summarization", device=device) self.min_length=min_length self.max_length=max_length def summarize(self, articles_df): empty_abstract = articles_df[articles_df["abstract"] == ""] for idx, paper in tqdm(empty_abstract.iterrows(), desc="Iterating over papers with empty abstract"): articles_df.loc[idx, "abstract"] = self._summarize_one_paper(paper) return articles_df def _summarize_one_paper(self, paper): try: summaries = self.summarizer(paper["paragraphs"], min_length=self.min_length, max_length=self.max_length) except: summaries = [] for paragraph in paper["paragraphs"]: summaries.extend( self.summarizer(paragraph, min_length=self.min_length, max_length=self.max_length) ) return ["".join([summary["summary_text"] for summary in summaries])] summarizer = Summarizer() data_summarized = summarizer.summarize(data) ``` The tasks I am working on is: * Summarization The error arises after 848 iterations, which is what surprises me the most... It can access CUDA up until that point. ``` RuntimeError Traceback (most recent call last) <ipython-input-5-098ee40477ee> in <module> ----> 1 data_summarized = summarizer.summarize(data) <ipython-input-3-671b9c11b7bf> in summarize(self, articles_df) 8 empty_abstract = articles_df[articles_df["abstract"] == ""] 9 for idx, paper in tqdm(empty_abstract.iterrows(), desc="Iterating over papers with empty abstract"): ---> 10 articles_df.loc[idx, "abstract"] = self._summarize_one_paper(paper) 11 return articles_df 12 <ipython-input-3-671b9c11b7bf> in _summarize_one_paper(self, paper) 18 for paragraph in paper["paragraphs"]: 19 summaries.extend( ---> 20 self.summarizer(paragraph, min_length=self.min_length, max_length=self.max_length) 21 ) 22 return ["".join([summary["summary_text"] for summary in summaries])] ~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, return_tensors, return_text, clean_up_tokenization_spaces, *documents, **generate_kwargs) 1926 1927 if self.framework == "pt": -> 1928 inputs = self.ensure_tensor_on_device(**inputs) 1929 input_length = inputs["input_ids"].shape[-1] 1930 elif self.framework == "tf": ~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in ensure_tensor_on_device(self, **inputs) 601 :obj:`Dict[str, torch.Tensor]`: The same as :obj:`inputs` but on the proper device. 602 """ --> 603 return {name: tensor.to(self.device) for name, tensor in inputs.items()} 604 605 def check_model_type(self, supported_models: Union[List[str], dict]): ~/miniconda/envs/transformers_env/lib/python3.7/site-packages/transformers/pipelines.py in <dictcomp>(.0) 601 :obj:`Dict[str, torch.Tensor]`: The same as :obj:`inputs` but on the proper device. 
602 """ --> 603 return {name: tensor.to(self.device) for name, tensor in inputs.items()} 604 605 def check_model_type(self, supported_models: Union[List[str], dict]): RuntimeError: CUDA error: an illegal memory access was encountered ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7808/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7807
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7807/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7807/comments
https://api.github.com/repos/huggingface/transformers/issues/7807/events
https://github.com/huggingface/transformers/issues/7807
722,279,985
MDU6SXNzdWU3MjIyNzk5ODU=
7,807
ValueError: too many values to unpack (expected 4)
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I solved the issue by myself", "> I solved the issue by myself\r\n\r\nHow?", "> I solved the issue by myself\r\n\r\n@h56cho Would you mind sharing it? Thanks" ]
1,602
1,618
1,602
NONE
null
Hello, I am trying to feed in the hidden output of the embedding layer of the `LongformerForMultipleChoice` model directly into the m-th layer of the same model. Each of my multiple-choice question that has 4 options. I am trying to tweak the HuggingFace code for Longformer to carry out my task, but I am rather perplexed by the ValueError (see below) that I am getting. I know that this can be cumbersome to answer, but could you please help me? When I do: ```Python my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, attention_mask=my_attention_mask,output_attention=False) ``` , this ValueError is generated: ```Python File "<ipython-input-67-f93c8b17889e>", line 1, in <module> best_model_longformer.longformer.encoder.layer[0].forward(outputs,attention_mask=attention_mask,output_attentions=True) File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 718, in forward output_attentions=output_attentions, File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 693, in forward output_attentions, File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 192, in forward float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 384, in _sliding_chunks_query_key_matmul batch_size, seq_len, num_heads, head_dim = query.size() ValueError: too many values to unpack (expected 4) ``` In other words, the ValueError is generated here: https://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/src/transformers/modeling_longformer.py#L279 In my code above, `my_attention_mask` is the same attention mask that I would specify under the regular `LongformerForMultipleChoice` command. `my_attention_mask` was generated by: ```Python # I am using the LongformerForMultipleChoice model, where each multiple choice question has 4 options. 
encoded_dict = longformer_tokenizer(question_list, option_list, return_tensors = 'pt', padding ='max_length') my_attention_mask = {k: v.unsqueeze(0) for k,v in encoded_dict.items()}['attention_mask'] my_attention_mask >>> tensor([[[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]]]) # I can use this my_attention_mask in the regular command without an error, as below: longformer_output= my_Longformer_multiple_choice_model(input_ids=input_ids,....,attention_mask=my_attention_mask) ``` Also, the `hidden_output` in used in my code above was generated by the following: ```Python encoded_dict = longformer_tokenizer(question_list, option_list, return_tensors = 'pt', padding ='max_length') hidden_output = my_Longformer_multiple_choice_model(**{k: v.unsqueeze(0) for k,v in encoded_dict.items()}, labels = mc_labels)[2][0][:,:,:] hidden_output.size() >>> torch.Size([4, 4096, 768]) ``` Since the layers of Transformer models in general are designed to take in the hidden output of the previous layer as their input, I am sure that the HuggingFace code for `LongformerForMultipleChoice` also somehow allows each layer to take the hidden vectors as their input. This is why I think what I am trying to do (feeding hidden output of embedding layer as an input to the m-th layer) is completely achievable....I do not get why I am getting the ValueError. I am suspecting the form of `my_attention_mask` is causing the ValueError. Is there any way that I can avoid this ValueError without re-writing the whole function? What exactly is the `attention_mask` that is used as a parameter for the following function?: https://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/src/transformers/modeling_longformer.py#L225 What should I pass for the `attention_mask` parameter in the command `my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, attention_mask,output_attention=False)`? PS: The following seemed to suggest that I need to pass in the `extended_attention_mask` rather than the `attention_mask` itself: https://github.com/huggingface/transformers/blob/d99ed7ad618037ae878f0758157ed0764bd7f935/src/transformers/modeling_longformer.py#L1262 So I tried `my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, extended_attention_mask,output_attention=False)`, but I am still getting the same ValueError.... Please help. Thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7807/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7807/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7806
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7806/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7806/comments
https://api.github.com/repos/huggingface/transformers/issues/7806/events
https://github.com/huggingface/transformers/issues/7806
722,260,738
MDU6SXNzdWU3MjIyNjA3Mzg=
7,806
Import error when fine-tuning mbart from master branch
{ "login": "thevasudevgupta", "id": 53136577, "node_id": "MDQ6VXNlcjUzMTM2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thevasudevgupta", "html_url": "https://github.com/thevasudevgupta", "followers_url": "https://api.github.com/users/thevasudevgupta/followers", "following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}", "gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions", "organizations_url": "https://api.github.com/users/thevasudevgupta/orgs", "repos_url": "https://api.github.com/users/thevasudevgupta/repos", "events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}", "received_events_url": "https://api.github.com/users/thevasudevgupta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you fill-in the issue template for bugs so that we may help you?", "Please checkout now.", "you need to upgrade tokenizers.\r\nIt should happen for you if you run `pip install -e \".[dev]\"` from the root of the repo.", "Thanks, it worked.", "> Thanks, it worked.\r\n\r\ncan you tell me how did you solve it?thanks\r\n\r\n", "> > Thanks, it worked.\r\n> \r\n> can you tell me how did you solve it?thanks\r\n\r\nJust run the command which @sshleifer suggested from the directory having setup.py file. ", "Hello,\r\nWhich directory are you talking about ? The transformer site package directory, the project directory ?" ]
1,602
1,607
1,602
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): 2.3.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. tokenizers: @mfuntowicz Translation: @sshleifer Bart: @sshleifer --> @mfuntowicz @sshleifer ## Information Model I am using (mBART): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) My script involves fine-tuning mbart for multilingual-translation. Problem is arising when importing transformers from master branch. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) IITB Hindi-english dataset ## To reproduce Steps to reproduce the behavior: 1. `import transformers` from master branch ``` from transformers import ( File "/content/drive/My Drive/hin-eng/transformers/__init__.py", line 68, in <module> from .data import ( File "/content/drive/My Drive/hin-eng/transformers/data/__init__.py", line 6, in <module> from .processors import ( File "/content/drive/My Drive/hin-eng/transformers/data/processors/__init__.py", line 6, in <module> from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features File "/content/drive/My Drive/hin-eng/transformers/data/processors/squad.py", line 10, in <module> from ...tokenization_bart import BartTokenizer File "/content/drive/My Drive/hin-eng/transformers/tokenization_bart.py", line 18, in <module> from .tokenization_roberta import RobertaTokenizer, RobertaTokenizerFast File "/content/drive/My Drive/hin-eng/transformers/tokenization_roberta.py", line 20, in <module> from .tokenization_gpt2 import GPT2Tokenizer, GPT2TokenizerFast File "/content/drive/My Drive/hin-eng/transformers/tokenization_gpt2.py", line 27, in <module> from .tokenization_utils_fast import PreTrainedTokenizerFast File "/content/drive/My Drive/hin-eng/transformers/tokenization_utils_fast.py", line 29, in <module> from .convert_slow_tokenizer import convert_slow_tokenizer File "/content/drive/My Drive/hin-eng/transformers/convert_slow_tokenizer.py", line 25, in <module> from tokenizers.models import BPE, Unigram, WordPiece ImportError: cannot import name 'Unigram' ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Importing transformers normally.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7806/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7805
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7805/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7805/comments
https://api.github.com/repos/huggingface/transformers/issues/7805/events
https://github.com/huggingface/transformers/issues/7805
722,196,914
MDU6SXNzdWU3MjIxOTY5MTQ=
7,805
pip3 install issue
{ "login": "Subfly", "id": 33986248, "node_id": "MDQ6VXNlcjMzOTg2MjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/33986248?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Subfly", "html_url": "https://github.com/Subfly", "followers_url": "https://api.github.com/users/Subfly/followers", "following_url": "https://api.github.com/users/Subfly/following{/other_user}", "gists_url": "https://api.github.com/users/Subfly/gists{/gist_id}", "starred_url": "https://api.github.com/users/Subfly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Subfly/subscriptions", "organizations_url": "https://api.github.com/users/Subfly/orgs", "repos_url": "https://api.github.com/users/Subfly/repos", "events_url": "https://api.github.com/users/Subfly/events{/privacy}", "received_events_url": "https://api.github.com/users/Subfly/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This seems to be an error when installing `regex`, can you install it on its own? If you cannot, please open an issue on [their repo](https://bitbucket.org/mrabarnett/mrab-regex/issues?status=new&status=open).", "Thank you, I was not aware of that even I looked into trace twice. Closing the issue." ]
1,602
1,602
1,602
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: macOS - Python version: 3.8.2 - PyTorch version (GPU?):- - Tensorflow version (GPU?):- - Using GPU in script?:- - Using distributed or parallel set-up in script?:- ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Just do pip3 install transformers in a machine installed macOS Catalina 10.15.7 and XCode version 12.0.1 <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` Alis-MacBook-Pro:ali_taha_dincer_2020 alitahadincer$ sudo pip3 install transformers Password: WARNING: The directory '/Users/alitahadincer/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. 
Collecting transformers Downloading transformers-3.3.1-py3-none-any.whl (1.1 MB) |████████████████████████████████| 1.1 MB 571 kB/s Collecting tokenizers==0.8.1.rc2 Downloading tokenizers-0.8.1rc2-cp38-cp38-macosx_10_14_x86_64.whl (2.1 MB) |████████████████████████████████| 2.1 MB 938 kB/s Collecting regex!=2019.12.17 Downloading regex-2020.10.11.tar.gz (690 kB) |████████████████████████████████| 690 kB 1.0 MB/s Collecting sacremoses Downloading sacremoses-0.0.43.tar.gz (883 kB) |████████████████████████████████| 883 kB 1.2 MB/s Requirement already satisfied: numpy in /Library/Python/3.8/site-packages (from transformers) (1.18.5) Collecting sentencepiece!=0.1.92 Downloading sentencepiece-0.1.91-cp38-cp38-macosx_10_6_x86_64.whl (1.0 MB) |████████████████████████████████| 1.0 MB 1.5 MB/s Requirement already satisfied: requests in /Library/Python/3.8/site-packages (from transformers) (2.24.0) Requirement already satisfied: tqdm>=4.27 in /Library/Python/3.8/site-packages (from transformers) (4.50.2) Requirement already satisfied: filelock in /Library/Python/3.8/site-packages (from transformers) (3.0.12) Collecting packaging Downloading packaging-20.4-py2.py3-none-any.whl (37 kB) Requirement already satisfied: six in /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages (from sacremoses->transformers) (1.15.0) Requirement already satisfied: click in /Library/Python/3.8/site-packages (from sacremoses->transformers) (7.1.2) Collecting joblib Downloading joblib-0.17.0-py3-none-any.whl (301 kB) |████████████████████████████████| 301 kB 1.4 MB/s Requirement already satisfied: chardet<4,>=3.0.2 in /Library/Python/3.8/site-packages (from requests->transformers) (3.0.4) Requirement already satisfied: idna<3,>=2.5 in /Library/Python/3.8/site-packages (from requests->transformers) (2.10) Requirement already satisfied: certifi>=2017.4.17 in /Library/Python/3.8/site-packages (from requests->transformers) (2020.6.20) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /Library/Python/3.8/site-packages (from requests->transformers) (1.25.10) Collecting pyparsing>=2.0.2 Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB) |████████████████████████████████| 67 kB 880 kB/s Building wheels for collected packages: regex, sacremoses Building wheel for regex (setup.py) ... 
error ERROR: Command errored out with exit status 1: command: /Applications/Xcode.app/Contents/Developer/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/tmp/pip-wheel-461le_2e cwd: /private/tmp/pip-install-j20cf16z/regex/ Complete output (114 lines): running bdist_wheel running build running build_py creating build creating build/lib.macosx-10.14.6-x86_64-3.8 creating build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/_regex_core.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/test_regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex running build_ext building 'regex._regex' extension creating build/temp.macosx-10.14.6-x86_64-3.8 creating build/temp.macosx-10.14.6-x86_64-3.8/regex_3 xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c regex_3/_regex.c -o build/temp.macosx-10.14.6-x86_64-3.8/regex_3/_regex.o In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:63: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:64: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from 
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:33: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_dev_t; /* dev_t */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'? typedef __uint16_t __darwin_mode_t; /* [???] 
Some file attributes */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_wctype_t; ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:75: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types/_va_list.h:31: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/types.h:37:2: error: architecture not supported #error architecture not supported ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. 
error: command 'xcrun' failed with exit status 1 ---------------------------------------- ERROR: Failed building wheel for regex Running setup.py clean for regex Building wheel for sacremoses (setup.py) ... done Created wheel for sacremoses: filename=sacremoses-0.0.43-py3-none-any.whl size=893259 sha256=b122e1b7fed3e4255e25cd3028672a60cb443062e9136993553b8ed40d2fd193 Stored in directory: /private/tmp/pip-ephem-wheel-cache-pguj75nn/wheels/7b/78/f4/27d43a65043e1b75dbddaa421b573eddc67e712be4b1c80677 Successfully built sacremoses Failed to build regex Installing collected packages: tokenizers, regex, joblib, sacremoses, sentencepiece, pyparsing, packaging, transformers Running setup.py install for regex ... error ERROR: Command errored out with exit status 1: command: /Applications/Xcode.app/Contents/Developer/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-cgmi3mph/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Python/3.8/include/regex cwd: /private/tmp/pip-install-j20cf16z/regex/ Complete output (114 lines): running install running build running build_py creating build creating build/lib.macosx-10.14.6-x86_64-3.8 creating build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/__init__.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/_regex_core.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex copying regex_3/test_regex.py -> build/lib.macosx-10.14.6-x86_64-3.8/regex running build_ext building 'regex._regex' extension creating build/temp.macosx-10.14.6-x86_64-3.8 creating build/temp.macosx-10.14.6-x86_64-3.8/regex_3 xcrun -sdk macosx clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -iwithsysroot/System/Library/Frameworks/System.framework/PrivateHeaders -iwithsysroot/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/Headers -arch arm64 -arch x86_64 -I/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8 -c regex_3/_regex.c -o build/temp.macosx-10.14.6-x86_64-3.8/regex_3/_regex.o In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:63: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/cdefs.h:807:2: error: Unsupported architecture #error Unsupported architecture ^ In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:11: In file included from 
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/12.0.0/include/limits.h:21: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/limits.h:64: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/limits.h:8:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:33: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/_types.h:34:2: error: architecture not supported #error architecture not supported ^ In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:27: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:55:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_blkcnt_t; /* total blocks */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:56:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_blksize_t; /* preferred block size */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:57:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_dev_t; /* dev_t */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:60:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_gid_t; /* [???] process and group IDs */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:61:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? 
typedef __uint32_t __darwin_id_t; /* [XSI] pid_t, uid_t, or gid_t*/ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:62:9: error: unknown type name '__uint64_t' typedef __uint64_t __darwin_ino64_t; /* [???] Used for 64 bit inodes */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:68:9: error: unknown type name '__darwin_natural_t' typedef __darwin_natural_t __darwin_mach_port_name_t; /* Used by mach */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:70:9: error: unknown type name '__uint16_t'; did you mean '__uint128_t'? typedef __uint16_t __darwin_mode_t; /* [???] Some file attributes */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:71:9: error: unknown type name '__int64_t' typedef __int64_t __darwin_off_t; /* [???] Used for file sizes */ ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:72:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_pid_t; /* [???] process and group IDs */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:73:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_sigset_t; /* [???] signal set */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:74:9: error: unknown type name '__int32_t'; did you mean '__int128_t'? typedef __int32_t __darwin_suseconds_t; /* [???] microseconds */ ^ note: '__int128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:75:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_uid_t; /* [???] user IDs */ ^ note: '__uint128_t' declared here /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types.h:76:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? typedef __uint32_t __darwin_useconds_t; /* [???] microseconds */ ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:71: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_types.h:43:9: error: unknown type name '__uint32_t'; did you mean '__uint128_t'? 
typedef __uint32_t __darwin_wctype_t; ^ note: '__uint128_t' declared here In file included from regex_3/_regex.c:48: In file included from /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/include/python3.8/Python.h:25: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/stdio.h:64: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/_stdio.h:75: In file included from /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/sys/_types/_va_list.h:31: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk/usr/include/machine/types.h:37:2: error: architecture not supported #error architecture not supported ^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. error: command 'xcrun' failed with exit status 1 ---------------------------------------- ERROR: Command errored out with exit status 1: /Applications/Xcode.app/Contents/Developer/usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"'; __file__='"'"'/private/tmp/pip-install-j20cf16z/regex/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/tmp/pip-record-cgmi3mph/install-record.txt --single-version-externally-managed --compile --install-headers /Library/Python/3.8/include/regex Check the logs for full command output. ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7805/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7804
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7804/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7804/comments
https://api.github.com/repos/huggingface/transformers/issues/7804/events
https://github.com/huggingface/transformers/issues/7804
722,147,025
MDU6SXNzdWU3MjIxNDcwMjU=
7,804
a
{ "login": "JohnPFL", "id": 56122239, "node_id": "MDQ6VXNlcjU2MTIyMjM5", "avatar_url": "https://avatars.githubusercontent.com/u/56122239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnPFL", "html_url": "https://github.com/JohnPFL", "followers_url": "https://api.github.com/users/JohnPFL/followers", "following_url": "https://api.github.com/users/JohnPFL/following{/other_user}", "gists_url": "https://api.github.com/users/JohnPFL/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnPFL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnPFL/subscriptions", "organizations_url": "https://api.github.com/users/JohnPFL/orgs", "repos_url": "https://api.github.com/users/JohnPFL/repos", "events_url": "https://api.github.com/users/JohnPFL/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnPFL/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,603
1,603
NONE
null
a
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7804/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7804/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7803
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7803/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7803/comments
https://api.github.com/repos/huggingface/transformers/issues/7803/events
https://github.com/huggingface/transformers/issues/7803
722,132,138
MDU6SXNzdWU3MjIxMzIxMzg=
7,803
TPU pod training with BERT
{ "login": "AmitChaulwar", "id": 69140048, "node_id": "MDQ6VXNlcjY5MTQwMDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/69140048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitChaulwar", "html_url": "https://github.com/AmitChaulwar", "followers_url": "https://api.github.com/users/AmitChaulwar/followers", "following_url": "https://api.github.com/users/AmitChaulwar/following{/other_user}", "gists_url": "https://api.github.com/users/AmitChaulwar/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitChaulwar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitChaulwar/subscriptions", "organizations_url": "https://api.github.com/users/AmitChaulwar/orgs", "repos_url": "https://api.github.com/users/AmitChaulwar/repos", "events_url": "https://api.github.com/users/AmitChaulwar/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitChaulwar/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi, I believe there is an error in your pytorch/xla install. I would follow the docs shared on [pytorch/xla](https://github.com/pytorch/xla) for best results.", "Actually, I am using Google cloud for running the training job. And it comes already with torch-xla-1.6 environment and the imagenet example works file. \r\n\r\nAs fas as I understand the problem is here https://github.com/pytorch/xla/blob/53eb3fc5e701cb8cc79a45dd0f79c6e585a27a41/torch_xla/core/xla_model.py#L18. \r\nI do not understand what is the problem though. ", "I don't really understand how you're launching your training. Our `xla_spawn.py` already takes care of parallelizing. I would do the following:\r\n\r\n```\r\nexport TPU_IP_ADDRESS=xxx.xxx.xxx.xxx\r\nexport XRT_TPU_CONFIG=\"tpu_worker;0;$TPU_IP_ADDRESS:8470\"\r\n\r\npython xla_spawn.py --num_cores 8 /home/developer/XainBERT/train.py\r\n```", "This works when I am using 8 core TPU. However, when I try to use TPUv3-128, then I get the error about cluster resolver. So I follow the instructions from here https://cloud.google.com/tpu/docs/tutorials/pytorch-pod#configure-gcloud. Also, the PyTorch lightning website also refers to the same page https://pytorch-lightning.readthedocs.io/en/latest/tpu.html. ", "Ah, I see. Unfortunately, I don't have sufficient experience with TPU pods to help you debug this. Can you try to open an issue on pytorch/xla's github?", "That's sad. Am I right, that Huggingface does not have TF implementation of Language models? I could try to switch to TF and try. Anyways, I have opened an issue on PyTorch/xla.", "No, we do have TF implementations of language models. Most of our models are available in both PyTorch and TensorFlow.", "However, this link says that there is not TFtrainer support for language-modelling using raw data.\r\nhttps://huggingface.co/transformers/examples.html \r\n\r\nIs there any example available for TFtrainer?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:3.3.1 - Platform: TPU - Python version: - PyTorch version (GPU?): 1.6 TPU v3-32 - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: Yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert): The problem arises when using: * [x] the official example scripts: (give details below) * [] my own modified scripts: (give details below) https://huggingface.co/blog/how-to-train The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) After starting training on TPU Pod using Pytorch XLA, I get following long message ```## ❓ Questions and Help 2020-10-14 14:14:33 10.164.0.60 [2] Traceback (most recent call last): 2020-10-14 14:14:33 10.164.0.60 [2] File "/home/developer/XainBERT/train.py", line 111, in <module> 2020-10-14 14:14:33 10.164.0.60 [2] prediction_loss_only=True, 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/trainer.py", line 248, in __init__ 2020-10-14 14:14:33 10.164.0.60 [2] self.model = model.to(args.device) if model is not None else None 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/file_utils.py", line 944, in wrapper 2020-10-14 14:14:33 10.164.0.60 [2] return func(*args, **kwargs) 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/training_args.py", line 408, in device 2020-10-14 14:14:33 10.164.0.60 [2] return self._setup_devices[0] 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/file_utils.py", line 934, in __get__ 2020-10-14 14:14:33 10.164.0.60 [2] cached = self.fget(obj) 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/file_utils.py", line 944, in wrapper 2020-10-14 14:14:33 10.164.0.60 [2] return func(*args, **kwargs) 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/training_args.py", line 379, in _setup_devices 2020-10-14 14:14:33 10.164.0.60 [2] device = xm.xla_device() 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/core/xla_model.py", 
line 167, in xla_device 2020-10-14 14:14:33 10.164.0.60 [2] devkind=[devkind] if devkind is not None else None) 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 72, in get_xla_supported_devices 2020-10-14 14:14:33 10.164.0.60 [2] xla_devices = _DEVICES.value 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/utils/utils.py", line 30, in value 2020-10-14 14:14:33 10.164.0.60 [2] self._value = self._gen_fn() 2020-10-14 14:14:33 10.164.0.60 [2] File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 18, in <lambda> 2020-10-14 14:14:33 10.164.0.60 [2] _DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices()) 2020-10-14 14:14:33 10.164.0.60 [2] RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:244 : Check failed: default_device_target != options_.global_device_map.end() 2020-10-14 14:14:33 10.164.0.60 [2] *** Begin stack trace *** 2020-10-14 14:14:33 10.164.0.60 [2] tensorflow::CurrentStackTrace() 2020-10-14 14:14:33 10.164.0.60 [2] xla::XrtComputationClient::XrtComputationClient(xla::XrtComputationClient::Options, std::unique_ptr<tensorflow::tpu::TopologyProto, std::default_delete<tensorflow::tpu::TopologyProto> >) 2020-10-14 14:14:33 10.164.0.60 [2] xla::ComputationClient::Create() 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] xla::ComputationClient::Get() 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyCFunction_FastCallDict 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_GenericGetAttrWithDict 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyFunction_FastCallDict 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallDict 2020-10-14 14:14:33 10.164.0.60 [2] PyObject_CallFunctionObjArgs 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_GenericGetAttrWithDict 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx 2020-10-14 14:14:33 10.164.0.60 [2] 
2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_GenericGetAttrWithDict 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyFunction_FastCallDict 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallDict 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_Call_Prepend 2020-10-14 14:14:33 10.164.0.60 [2] PyObject_Call 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallDict 2020-10-14 14:14:33 10.164.0.60 [2] _PyObject_FastCallKeywords 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] _PyEval_EvalFrameDefault 2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCodeEx 2020-10-14 14:14:33 10.164.0.60 [2] PyEval_EvalCode 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] PyRun_FileExFlags 2020-10-14 14:14:33 10.164.0.60 [2] PyRun_SimpleFileExFlags 2020-10-14 14:14:33 10.164.0.60 [2] Py_Main 2020-10-14 14:14:33 10.164.0.60 [2] main 2020-10-14 14:14:33 10.164.0.60 [2] __libc_start_main 2020-10-14 14:14:33 10.164.0.60 [2] 2020-10-14 14:14:33 10.164.0.60 [2] *** End stack trace *** 020-10-14 14:27:11 10.164.0.61 [0] /anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/transformers/trainer.py:267: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead. 2020-10-14 14:27:11 10.164.0.61 [0] FutureWarning, 2020-10-14 14:27:11 10.164.0.61 [0] Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred. 2020-10-14 14:27:11 10.164.0.61 [0] Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred. 2020-10-14 14:27:11 10.164.0.61 [0] Using deprecated `--per_gpu_train_batch_size` argument which will be removed in a future version. Using `--per_device_train_batch_size` is preferred. Epoch: 0%| | 0/1 [00:00<?, ?it/s] 2020-10-14 14:28:09 10.164.0.61 [0] 2020-10-14 14:28:20 10.164.0.61 [0] 2020-10-14 14:28:32 10.164.0.61 [0] 2020-10-14 14:28:33 10.164.0.61 [0] 2020-10-14 14:28:38 10.164.0.61 [0] 2020-10-14 14:28:38 10.164.0.61 [0] 2020-10-14 14:28:38 10.164.0.61 [0] 2020-10-14 14:28:40 10.164.0.61 [0] 2020-10-14 14:28:42 10.164.0.61 [0] 2020-10-14 14:28:54 10.164.0.61 [0] 2020-10-14 14:28:54 10.164.0.61 [0] 2020-10-14 14:28:56 10.164.0.61 [0] 2020-10-14 14:28:57 10.164.0.61 [0] 2020-10-14 14:28:58 10.164.0.61 [0] 2020-10-14 14:28:58 10.164.0.61 [0] 2020-10-14 14:28:59 10.164.0.61 [0] 2020-10-14 14:29:01 10.164.0.61 [0] 2020-10-14 14:29:01 10.164.0.61 [0] 2020-10-14 14:29:01 10.164.0.61 [0] 2020-10-14 14:29:02 10.164.0.61 [0] 2020-10-14 14:29:02 10.164.0.61 [0] ``` I think the problem lies here ```2020-10-14 14:14:33 10.164.0.60 [2] RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:244 : Check failed: default_device_target != options_.global_device_map.end() ``` But I do not understand what is wrong. I basically use the script from here https://huggingface.co/blog/how-to-train with some modifications, own data but Bert vocabulary. I have used the same script for training on TPUv3-8 as well which works fine.The transformer library version is 3.3.1 and I am using BERT tokeniser and BERT config. 
The distributed training on TPU pod worked with ImageNet example mentioned in https://cloud.google.com/tpu/docs/tutorials/pytorch-pod#configure-gcloud. ## To reproduce I can't really provide the whole script. But the training command is as follow: python -m torch_xla.distributed.xla_dist --tpu=$TPU_NAME --conda-env=torch-xla-1.6 -- python /home/developer/XainBERT/xla_spawn.py --num_cores 8 /home/developer/XainBERT/train.py Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7803/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7803/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7802
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7802/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7802/comments
https://api.github.com/repos/huggingface/transformers/issues/7802/events
https://github.com/huggingface/transformers/pull/7802
721,993,866
MDExOlB1bGxSZXF1ZXN0NTAzODIwMDky
7,802
simple fix for spurious PyTorch->TF BERT weight conversion warning
{ "login": "dslim23", "id": 3118412, "node_id": "MDQ6VXNlcjMxMTg0MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3118412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dslim23", "html_url": "https://github.com/dslim23", "followers_url": "https://api.github.com/users/dslim23/followers", "following_url": "https://api.github.com/users/dslim23/following{/other_user}", "gists_url": "https://api.github.com/users/dslim23/gists{/gist_id}", "starred_url": "https://api.github.com/users/dslim23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dslim23/subscriptions", "organizations_url": "https://api.github.com/users/dslim23/orgs", "repos_url": "https://api.github.com/users/dslim23/repos", "events_url": "https://api.github.com/users/dslim23/events{/privacy}", "received_events_url": "https://api.github.com/users/dslim23/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@LysandreJik yeah I think you're right, thanks! made the change", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,602
1,614
1,614
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/7797 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7802/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7802", "html_url": "https://github.com/huggingface/transformers/pull/7802", "diff_url": "https://github.com/huggingface/transformers/pull/7802.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7802.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7801
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7801/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7801/comments
https://api.github.com/repos/huggingface/transformers/issues/7801/events
https://github.com/huggingface/transformers/issues/7801
721,924,384
MDU6SXNzdWU3MjE5MjQzODQ=
7,801
Cannot convert the custom-trained BERT model to a PyTorch model for further use, which should give me a .bin file
{ "login": "anidiatm41", "id": 52837723, "node_id": "MDQ6VXNlcjUyODM3NzIz", "avatar_url": "https://avatars.githubusercontent.com/u/52837723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anidiatm41", "html_url": "https://github.com/anidiatm41", "followers_url": "https://api.github.com/users/anidiatm41/followers", "following_url": "https://api.github.com/users/anidiatm41/following{/other_user}", "gists_url": "https://api.github.com/users/anidiatm41/gists{/gist_id}", "starred_url": "https://api.github.com/users/anidiatm41/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anidiatm41/subscriptions", "organizations_url": "https://api.github.com/users/anidiatm41/orgs", "repos_url": "https://api.github.com/users/anidiatm41/repos", "events_url": "https://api.github.com/users/anidiatm41/events{/privacy}", "received_events_url": "https://api.github.com/users/anidiatm41/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1897896961, "node_id": "MDU6TGFiZWwxODk3ODk2OTYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Migration", "name": "Migration", "color": "e99695", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hello! It seems you're running the BART conversion script for a BERT model?", "Yeah, but I got to fix this.\n-Anirban\n\nOn Thu, 15 Oct, 2020, 2:52 PM Lysandre Debut, <[email protected]>\nwrote:\n\n> Hello! It seems you're running the BART conversion script for a BERT model?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/7801#issuecomment-709032258>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AMTD2W4GWGTXTL5DFR3DHPDSK25O3ANCNFSM4SRMD3AQ>\n> .\n>\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
# 📚 Migration ## Information I pre-trained a BERT model from scratch with a custom corpus and now want to use that model for further purposes such as Q&A, masked word prediction, etc. After pre-training I got the files below: bert_config.json bert_model.ckpt.data-00000-of-00001 bert_model.ckpt.index vocab.txt bert_model.ckpt.meta Now, I need to convert the model to a PyTorch model for further use, which will give me a .bin file. I am running the commands below: %cd /content/drive/My Drive/Anirban_test_pytorch/ !python convert_bart_original_pytorch_checkpoint_to_pytorch.py "/content/drive/My Drive/Anirban_test_pytorch/model.ckpt.index" "/content/sample_data"\ But I get the error below: File "convert_bart_original_pytorch_checkpoint_to_pytorch.py", line 75, in load_xsum_checkpoint sd = torch.load(checkpoint_path, map_location="cpu") File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 585, in load return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/usr/local/lib/python3.6/dist-packages/torch/serialization.py", line 755, in _legacy_load magic_number = pickle_module.load(f, **pickle_load_args) _pickle.UnpicklingError: invalid load key, '\x00'. Please help me fix this.
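For reference, the comments above point out that this command runs the BART conversion script on a BERT TF 1.x checkpoint, which is why `torch.load` fails on the `.index` file. Below is a minimal sketch of the BERT-specific loading path instead, assuming TensorFlow is installed and using the file names the reporter lists; the output directory name is hypothetical:

```python
from transformers import BertConfig, BertForPreTraining

# Config file produced by the original TF pre-training run (bert_config.json above)
config = BertConfig.from_json_file("bert_config.json")

# Passing a path ending in ".index" together with from_tf=True loads a TF 1.x
# checkpoint through the BERT weight-loading code (requires TensorFlow installed)
model = BertForPreTraining.from_pretrained(
    "bert_model.ckpt.index",
    from_tf=True,
    config=config,
)

# Writes pytorch_model.bin and config.json; "pytorch_dump" is a hypothetical output dir
model.save_pretrained("pytorch_dump")
```

Alternatively, the dedicated `convert_bert_original_tf_checkpoint_to_pytorch.py` script shipped with transformers (rather than the BART one used above) performs the same conversion from the command line.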
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7801/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7800
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7800/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7800/comments
https://api.github.com/repos/huggingface/transformers/issues/7800/events
https://github.com/huggingface/transformers/issues/7800
721,915,851
MDU6SXNzdWU3MjE5MTU4NTE=
7,800
Empty Conversation Responses
{ "login": "QuantumEntangledAndy", "id": 13386481, "node_id": "MDQ6VXNlcjEzMzg2NDgx", "avatar_url": "https://avatars.githubusercontent.com/u/13386481?v=4", "gravatar_id": "", "url": "https://api.github.com/users/QuantumEntangledAndy", "html_url": "https://github.com/QuantumEntangledAndy", "followers_url": "https://api.github.com/users/QuantumEntangledAndy/followers", "following_url": "https://api.github.com/users/QuantumEntangledAndy/following{/other_user}", "gists_url": "https://api.github.com/users/QuantumEntangledAndy/gists{/gist_id}", "starred_url": "https://api.github.com/users/QuantumEntangledAndy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/QuantumEntangledAndy/subscriptions", "organizations_url": "https://api.github.com/users/QuantumEntangledAndy/orgs", "repos_url": "https://api.github.com/users/QuantumEntangledAndy/repos", "events_url": "https://api.github.com/users/QuantumEntangledAndy/events{/privacy}", "received_events_url": "https://api.github.com/users/QuantumEntangledAndy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Also pinging @patrickvonplaten for info", "Hey @QuantumEntangledAndy, I updated the config parameters of all DialoGPT models to `max_length=1000`, see here: https://github.com/huggingface/transformers/issues/7764 -> this problem should now be solved for the DialoGPT models.\r\n\r\nI think this is the correct way to tackle this problem - we don't want to change the `generation` code logic here just for Conversational models.", "I do not believe this will resolve the issue with min_length. Surly the expected behavior of min_length in a conversation model would be the min length of the new utterence not the length of context + new utterence.\r\n\r\nLets say I set min length to 2, because I want my bot to always say something and I say, \"What shall we watch?\" This has a length of 4 (five with with eos) this is added into the context and counts towards the length for min_length. The bot therefore has every chance of saying nothing.\r\n\r\nWhat would be the ideal way to deal with this situation?", "Sorry I misread your issue a bit here. \r\n> Surly the expected behavior of min_length in a conversation model would be the min length of the new utterence not the length of context + new utterence\r\n\r\nI understand your reasoning here. Are you using the pipelines or the \"normal\" generate() function?\r\n\r\nI think we could tweak the `ConversationPipeline` a bit to better handle the `min_length` parameter. \r\n\r\n Note that because the `input_ids` get longer with every conversation you have with the bot, `min_length` only works for the very first conversation in pipelines.\r\n\r\nIf you directly use `generate()` you could just set `min_length=input_ids.shape[-1] + 2` to solve your problem.\r\n", "I use the ConversationPipeline. I think perhaps the ConversationPipeline should update the min_length on call like this:\r\n\r\n```python\r\nmin_length=input_ids.shape[-1] + self.min_length_for_response\r\n```\r\n\r\nSince we do have such a member in the class although it is set to 32 which seems a little high\r\n\r\nhttps://github.com/huggingface/transformers/blob/0911b6bd86b39d55ddeae42fbecef75a1244ea85/src/transformers/pipelines.py#L2375-L2382\r\n\r\nHowever I think it is high as this is the member that decides how many old responses need to be cleared to make room for new input.", "If you want to test this out for yourself and see the blank responses this should work\r\n\r\n```python\r\n#! 
/usr/bin/env python3\r\n\"\"\"Small example of conversational pipeline in python.\"\"\"\r\n\r\nfrom transformers.pipelines import (\r\n Conversation,\r\n ConversationalPipeline,\r\n)\r\nfrom transformers import (\r\n AutoConfig,\r\n AutoModelForCausalLM,\r\n AutoTokenizer,\r\n)\r\n\r\ncache_dir = \"cached\"\r\nmodel_name_or_path = \"microsoft/DialoGPT-medium\"\r\nconfig_name = \"microsoft/DialoGPT-medium\"\r\ntokenizer_name = \"microsoft/DialoGPT-medium\"\r\n\r\nconfig = AutoConfig.from_pretrained(\r\n config_name, cache_dir=cache_dir,\r\n)\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n tokenizer_name, cache_dir=cache_dir,\r\n)\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_name_or_path,\r\n from_tf=False,\r\n config=config,\r\n cache_dir=cache_dir,\r\n)\r\n\r\nconfig.min_length = 2\r\nconfig.max_length = 1000\r\n\r\nprint(f\"min_length: {config.min_length}\")\r\nprint(f\"max_length: {config.max_length}\")\r\n\r\nconversation = Conversation()\r\nconversation_manager = ConversationalPipeline(model=model,\r\n tokenizer=tokenizer)\r\n\r\nconversation.add_user_input(\"Is it an action movie?\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"Is it a love movie?\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"What is it about?\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"Would you recommend it?\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"If not what would you recommend?\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"I think you need to think about it more.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"After all action is the best.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"But maybe not.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"What really matters is quality.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"Quality over all other things.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"But not at the expense of tradition.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"For advancement for advancments sake must\"\r\n \" be curtailed.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"Unethical practises must be trimmed.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"In truth nothing is of any good.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: 
{conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"Unless it is traditional.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n\r\nconversation.add_user_input(\"And sometimes not even then.\")\r\nconversation_manager([conversation])\r\nprint(f\"Response: {conversation.generated_responses[-1]}\")\r\n```", "> I use the ConversationPipeline. I think perhaps the ConversationPipeline should update the min_length on call like this:\r\n> \r\n> ```python\r\n> min_length=input_ids.shape[-1] + self.min_length_for_response\r\n> ```\r\n> \r\n> Since we do have such a member in the class although it is set to 32 which seems a little high\r\n> \r\n> https://github.com/huggingface/transformers/blob/0911b6bd86b39d55ddeae42fbecef75a1244ea85/src/transformers/pipelines.py#L2375-L2382\r\n> \r\n> However I think it is high as this is the member that decides how many old responses need to be cleared to make room for new input.\r\n\r\nI think I would be fine to add this to the Conversation Pipeline. Do you want to open a PR and we see how to integrate it? ", "Before I do any PR Id like some input on design choices.\n\n- Should I also set max_length?\n \n - If I do set it too I believe it will no longer be necessary to remove old conversations to have room for new content.\n \n - However for very long chats perhaps a chat bot that saves and reloads it's memory this may become computational expensive\n\n - In light of this would a convenience function that trims memory down to n last inputs be acceptable?\n\n- I am thinking to make min_length an optional parameter to init. It defaults to None, when None is given as input it sets min_length to that of the model at init time. With similar behaviour for max_length", "Thank you @QuantumEntangledAndy for sharing the issue here, as I believe it affects both implementation.\r\n\r\nIf I may, I'd like to add my view on the issue, which I believe is not tied to the `ConversationPipeline`, but rather on how `min_length` and `max_length` and handled for non-\"encoder-decoder\" architectures.\r\n\r\nI would like to question the validity of setting the `cur_len` to the input sequence for pure decoder architectures:\r\nhttps://github.com/huggingface/transformers/blob/15a189049e01fbd3bef902848a09f58a5f006c37/src/transformers/generation_utils.py#L451.\r\n\r\nI would argue for setting the `cur_len` to 1 (or 0?) for pure-decoder architectures as well for a few reasons:\r\n- I would believe that user looking to generate a sequence would be generally looking to set how many tokens they would like to generate, and not the length of the context + generated inputs. It would be great if you could share some use-cases where that is typically not the case.\r\n- This definition of `cur_len` leads to somewhat \"hacky\" workarounds when the context needs to be extended by a prefix (for example, XLNet). Setting `min_length` and `max_length` to refer to generated content would make it independent of any context pre-processing\r\n- The current behaviour differs between \"encoder-decoder\" and \"decoders\", and I am not entirely sure why. A definition based on the length generated would bring the behaviour of both together.\r\n- Lastly, and in my opinion more importantly, the current solution does not work for batched generation. Let's say the input is made of 2 sentences, with initial length of 3 and 8. 
For batched generation, the input prompt will be padded as follows (`0` indicates a padded token, `x` an input token):\r\n```\r\n[ x x x x x x x x]\r\n[ 0 0 0 0 0 x x x] \r\n```\r\nthe `cur_len` will be set to 8 (`input_ids.shape[-1]`). Let's assume the `min_len` is 12. The model could generate the following sequence (`g` indicates a generated token):\r\n```\r\n[ x x x x x x x x g g g g]\r\n[ 0 0 0 0 0 x x x g g g g] \r\n```\r\nThis shows that while the first sequence respects the `min_len` of 12, the effective length of the second sequence is below the minimum value. Using `min_length` and `max_length` to refer to generated content would lead to valid constraints on all of the sequences in the batch, regardless of padding. For example, with the previous example if `min_length` is 6, both sequence would have at a minimum 6 generated token. In summary, I believe the current handling of `min_length` and `max_length` make them misleading as soon as inputs are passed as batches - but working on generated sequences would prevent that.\r\n\r\nI would be more in favor of initializing`cur_len` to 1 (0) for decoders as well - I would be interested in your thoughts on that.", "Hmmm I think I see what you mean with batch generation.\n\nI am wondering what the use case of treating non encoder-decoders differently is. If for some reason we cannot reasonably change the definition perhaps we should consider adding different types of new lengths.\n\n- minmax document length\n - This is current definition of min/max\n- minmax generated length\n - This is the effective length the model generates.\n - This of course would mean new config variables. But we could default them to none and then ignore that condition in this case.", "Hey @guillaume-be,\r\n\r\nI fully understand your point of view and I also tend to agree that the handling of `min_length` and `max_length` as described by you and I tend to agree with you (actually @sshleifer suggested this change a while back as well).\r\n\r\nI guess the disadvantages of changing `max_length's` logic (making it \"max added tokens\" vs. \"max total tokens\") is the following:\r\n\r\n- For models like GPT2, users might want to generate multiple articles < 100 words including the prefix. The output of decoder models is always prefix + generated text so it makes sense to me that `max_length` is the maximum length of the output. It's also safer regarding the `max_embedding_positions` provided by the model - by setting `max_length=512`, there will never be an error independent of the prefix. I guess all arguments are based on the advantages one might have from knowing that `max_length` is independent of the input.\r\n\r\nThat's the main argument.\r\n- Changing the logic now would break backward compatibility quite heavily. Beam search is highly dependent on max_length *e.g.* - not sure whether we want that.\r\n\r\nLet's put @LysandreJik and @yjernite and @sshleifer in cc as well to see their opinions on changing the `max_length` logic.", "We could make it an option like `use_relative` lengths. It defaults to `None` meaning auto (the current behavour) but can be set to `True` of `False` in order to explicitly override the current auto logic. 
Ultimately I think the auto logic is not perfect and there will always be situations where it chooses incorrectly, making it configurable will at least allow the library user to choose the appropriate choice.", "My opinion is that I have never seen \"users might want to generate multiple articles < 100 words including the prefix\", and I have seen, \"Why does prefix count?\" a number of times and had the same misunderstanding the first few times I used it. So I think we should change the behavior to not count `decoder_start_token_id` or tokens in `input_ids`. \r\nAnother argument that might land: you would expect a function called `generate` to `generate` N tokens. Not generate N tokens where N varies based on your inputs.\r\n\r\nThe `use_relative` compromise is my second favorite option.\r\nThe status quo is my least favorite.", "Was there any progress on making these changes?", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,602
1,614
1,614
NONE
null
## Environment info Mac OS 10.14 My work is in rust, and I have an issue open in guillaume-be/rust-bert#87, however the author of the repo asked that I also open it here to get HuggingFace's opinion, as it pertains to code that matches the intention of that in this repo. - `transformers` version: rust-bert 0.10.0 - Platform: Mac OS - PyTorch version (GPU?): No - Tensorflow version (GPU?): No - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes ### Who can help TextGeneration: @TevenLeScao (This is in how min_ and max_length are treated during text generation of the conversation model) ## Information Model I am using: DialoGPT with the Conversation Model The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The problem occurs in the official example of the Rust code, and the owner of the Rust code assures me that there is the exact same behaviour in this code too The tasks I am working on is: * [x] my own task or dataset: (give details below) I am making a small chatbot; I have my own fine-tuned model, but the same behaviour is observed with the stock DialoGPT model ## To reproduce Steps to reproduce the behavior: 1. Create a conversation model with `min_length` set 2. Talk with the bot for about 10-12 responses 3. Responses will be zero length despite min_length being set ## Expected behavior Min length to be upheld during conversation ## Details The root cause of this is how cur_len and min/max_len are handled in the code https://github.com/huggingface/transformers/blob/15a189049e01fbd3bef902848a09f58a5f006c37/src/transformers/generation_utils.py#L86-L88 https://github.com/huggingface/transformers/blob/15a189049e01fbd3bef902848a09f58a5f006c37/src/transformers/generation_utils.py#L533 The cur_len is initialised with the length of the current input, which contains all previous dialogue with the bot as context https://github.com/huggingface/transformers/blob/15a189049e01fbd3bef902848a09f58a5f006c37/src/transformers/generation_utils.py#L451 This means that min_length of the new utterance from the bot is already satisfied. It also means that max_length can be exceeded if a long conversation is held. cur_len should perhaps be initialised differently in the ConversationalPipeline
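A minimal sketch of the workaround implied by the report above: offset `min_length`/`max_length` by the prompt length before calling `generate`, so both limits constrain the newly generated reply rather than history + reply. The checkpoint name and the `min_new`/`max_new` values are illustrative assumptions, not taken from the report.

```python
# Sketch only: make min_length/max_length relative to the prompt length,
# since generate() counts the conversation history towards both limits.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")  # assumed checkpoint
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history = "Is it an action movie?" + tokenizer.eos_token
input_ids = tokenizer(history, return_tensors="pt").input_ids

min_new, max_new = 2, 50  # desired bounds on the *reply* only; illustrative values
output = model.generate(
    input_ids,
    min_length=input_ids.shape[-1] + min_new,  # offset by the prompt length
    max_length=input_ids.shape[-1] + max_new,
    pad_token_id=tokenizer.eos_token_id,
)
reply = tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```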
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7800/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7799
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7799/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7799/comments
https://api.github.com/repos/huggingface/transformers/issues/7799/events
https://github.com/huggingface/transformers/pull/7799
721,903,725
MDExOlB1bGxSZXF1ZXN0NTAzNzQ1ODEw
7,799
model card for bert-base-NER
{ "login": "dslim23", "id": 3118412, "node_id": "MDQ6VXNlcjMxMTg0MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3118412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dslim23", "html_url": "https://github.com/dslim23", "followers_url": "https://api.github.com/users/dslim23/followers", "following_url": "https://api.github.com/users/dslim23/following{/other_user}", "gists_url": "https://api.github.com/users/dslim23/gists{/gist_id}", "starred_url": "https://api.github.com/users/dslim23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dslim23/subscriptions", "organizations_url": "https://api.github.com/users/dslim23/orgs", "repos_url": "https://api.github.com/users/dslim23/repos", "events_url": "https://api.github.com/users/dslim23/events{/privacy}", "received_events_url": "https://api.github.com/users/dslim23/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
@julien-c Model card with some details on training, eval, dataset for my bert-base-NER model
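A hedged usage sketch for the checkpoint this model card describes; the hub id `dslim/bert-base-NER` is inferred from the author's namespace and is an assumption, as is the example sentence.

```python
# Sketch: load the NER checkpoint the model card describes and tag a sentence.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")  # assumed hub id
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")

ner = pipeline("ner", model=model, tokenizer=tokenizer)
print(ner("My name is Wolfgang and I live in Berlin."))
```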
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7799/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7799", "html_url": "https://github.com/huggingface/transformers/pull/7799", "diff_url": "https://github.com/huggingface/transformers/pull/7799.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7799.patch", "merged_at": 1602791701000 }
https://api.github.com/repos/huggingface/transformers/issues/7798
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7798/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7798/comments
https://api.github.com/repos/huggingface/transformers/issues/7798/events
https://github.com/huggingface/transformers/pull/7798
721,826,486
MDExOlB1bGxSZXF1ZXN0NTAzNjgxMjUx
7,798
HerBERT Polish model
{ "login": "rmroczkowski", "id": 64909124, "node_id": "MDQ6VXNlcjY0OTA5MTI0", "avatar_url": "https://avatars.githubusercontent.com/u/64909124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rmroczkowski", "html_url": "https://github.com/rmroczkowski", "followers_url": "https://api.github.com/users/rmroczkowski/followers", "following_url": "https://api.github.com/users/rmroczkowski/following{/other_user}", "gists_url": "https://api.github.com/users/rmroczkowski/gists{/gist_id}", "starred_url": "https://api.github.com/users/rmroczkowski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rmroczkowski/subscriptions", "organizations_url": "https://api.github.com/users/rmroczkowski/orgs", "repos_url": "https://api.github.com/users/rmroczkowski/repos", "events_url": "https://api.github.com/users/rmroczkowski/events{/privacy}", "received_events_url": "https://api.github.com/users/rmroczkowski/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Great point! I've already simplified the code and left only the tokenizer. Ideally, `XLMTokenizer` should have a converter to the appropriate `Fast` class. On second thought, I can see the problem would be to program the Moses pretokenization in the `tokenizers` library.", "Hi @rmroczkowski \r\n\r\nAs far as I know, you employed fastBPE to train a tokenizer: https://github.com/huggingface/transformers/tree/b23d3a5ad4aa08decd10671f85be5950767dd052/model_cards/allegro/herbert-klej-cased-tokenizer-v1\r\n\r\nI also employed fastBPE for the Vietnamese BERT-based tokenizer (i.e. PhoBERTTokenizer https://github.com/huggingface/transformers/pull/6129 ), but I am still struggling to implement a fast tokenizer based on fastBPE, e.g. handling the suffix \"@@\" of subword tokens. In particular, given https://huggingface.co/vinai/phobert-base/tree/main I can convert \"bpe.codes\" into a \"merge.txt\"-style file, but I am not sure about how to convert our \"vocab.txt\" into your \"vocab.json\"-style file.\r\n\r\nHow can you convert your fastBPE's code and vocab outputs into HuggingFace's tokenizers? So that you can call the tokenizer with the use_fast=True option. \r\n\r\ncc: @LysandreJik is there any idea for implementing a fast version of a fastBPE-based slow one?\r\n\r\nThank you both.\r\n\r\n\r\n\r\n" ]
1,602
1,651
1,602
CONTRIBUTOR
null
The HerBERT model is a transformer model pretrained using masked language modeling (MLM) and Sentence Structural (SSO) objectives for the Polish language. It was added to the library in PyTorch with the following checkpoints: - `allegro/herbert-base-cased` - `allegro/herbert-large-cased` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @patrickvonplaten @julien-c @LysandreJik
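A hedged loading sketch for the checkpoints listed in this PR; it assumes `AutoTokenizer`/`AutoModel` dispatch to the new HerBERT classes once the PR is in a release, and the Polish example sentence is illustrative.

```python
# Sketch: load one of the checkpoints added by this PR and run a forward pass.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-base-cased")
model = AutoModel.from_pretrained("allegro/herbert-base-cased")

inputs = tokenizer("Kraków jest pięknym miastem.", return_tensors="pt")
outputs = model(**inputs)
print(outputs[0].shape)  # last hidden states
```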
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7798/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7798", "html_url": "https://github.com/huggingface/transformers/pull/7798", "diff_url": "https://github.com/huggingface/transformers/pull/7798.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7798.patch", "merged_at": 1602832012000 }
https://api.github.com/repos/huggingface/transformers/issues/7797
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7797/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7797/comments
https://api.github.com/repos/huggingface/transformers/issues/7797/events
https://github.com/huggingface/transformers/issues/7797
721,792,036
MDU6SXNzdWU3MjE3OTIwMzY=
7,797
BertForSequenceClassification -> TFBertForSequenceClassification causes 'bert.embeddings.position_ids' not used error
{ "login": "dslim23", "id": 3118412, "node_id": "MDQ6VXNlcjMxMTg0MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3118412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dslim23", "html_url": "https://github.com/dslim23", "followers_url": "https://api.github.com/users/dslim23/followers", "following_url": "https://api.github.com/users/dslim23/following{/other_user}", "gists_url": "https://api.github.com/users/dslim23/gists{/gist_id}", "starred_url": "https://api.github.com/users/dslim23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dslim23/subscriptions", "organizations_url": "https://api.github.com/users/dslim23/orgs", "repos_url": "https://api.github.com/users/dslim23/repos", "events_url": "https://api.github.com/users/dslim23/events{/privacy}", "received_events_url": "https://api.github.com/users/dslim23/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi, I'm still getting this warning on version 4.3.2." ]
1,602
1,614
1,608
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.9.0-13-amd64-x86_64-with-debian-9.13 - Python version: 3.7.8 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik --> ## Information Model I am using (Bert, XLNet ...): ``` from transformers import BertTokenizer, TFBertModel, TFBertForSequenceClassification,BertForSequenceClassification import tensorflow as tf tokenizer = BertTokenizer.from_pretrained('bert-base-cased') model = TFBertForSequenceClassification.from_pretrained('output', from_pt=True) ``` I'm loading TFBertForSequenceClassification from a BertForSequenceClassification pytorch SavedModel and get this error: `Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForSequenceClassification: ['bert.embeddings.position_ids']` I've done this before with this exact model and not had these issues.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7797/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7796
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7796/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7796/comments
https://api.github.com/repos/huggingface/transformers/issues/7796/events
https://github.com/huggingface/transformers/issues/7796
721,791,374
MDU6SXNzdWU3MjE3OTEzNzQ=
7,796
T5 finetune outputting gibberish
{ "login": "jsrozner", "id": 1113285, "node_id": "MDQ6VXNlcjExMTMyODU=", "avatar_url": "https://avatars.githubusercontent.com/u/1113285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jsrozner", "html_url": "https://github.com/jsrozner", "followers_url": "https://api.github.com/users/jsrozner/followers", "following_url": "https://api.github.com/users/jsrozner/following{/other_user}", "gists_url": "https://api.github.com/users/jsrozner/gists{/gist_id}", "starred_url": "https://api.github.com/users/jsrozner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsrozner/subscriptions", "organizations_url": "https://api.github.com/users/jsrozner/orgs", "repos_url": "https://api.github.com/users/jsrozner/repos", "events_url": "https://api.github.com/users/jsrozner/events{/privacy}", "received_events_url": "https://api.github.com/users/jsrozner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`some other parameter to change?`: BINGO\r\n\r\n\r\nthere is a `min_length`/`max_length` parameter you can pass to beam search (in many ways) that is affecting your generations.\r\nIf you eval offline with min_length=0, max_length=3 it should work.\r\n", "Cool! Sorry for the n00biness. \r\n1. Is there somewhere I can read about when / why this happens? (or in brief, why does it happen?) \r\n2. min_length and max_length will just limit how long the output sequence can be? Where's the best place to input them? Just directly from finetune.py?\r\n3. Is there a different way to have the model learn when to stop outputting? (i.e to learn by itself that it should only be outputting one \"word\" since that's what all the train examples show)", "1) you can read the [docstring for `generate`](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L135)\r\n2) I would edit `finetune.py ` around [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L215)\r\n3) It should learn good lengths within the hardcoded range. It's simply not allowed to go out of the hardcoded range.\r\nIf you set `min_length=0`, `max_length=10` I would guess it will learn to always generate word followed by `</s>` (This \"eos\" symbol is automatically added to input sequences by the `T5Tokenizer`.)\r\n\r\n\r\n\r\n\r\n", "Thanks! I am rerunning with the max length (I didn't see a spot for min length). \r\n\r\nI'm still a little confused as to why this happens though. For example, \r\n* why doesn't it get penalized for the gibberish? (is padding somehow affecting what it gets penalized for?)\r\n* why isn't the gibberish at all linguistic, even? I would expect it at least to add mostly english-like tokens? These strings seem entirely non-lingustic.\r\n\r\nRelated: is there an easy flag to change so that I could view part of the validation outputs at each epoch to keep track of when it learns to truncate? Right now I'm just waiting until end of training to look at the test generations.\r\n", "+ You need the min_length, just pass min_length=0 to `model.generate`\r\n+ re padding, yes. There is no loss for pad tokens.\r\n+ no flag to see intermediate generations, but https://github.com/huggingface/transformers/blob/master/examples/seq2seq/callbacks.py#L83 should maybe work.", "Okay thanks, I will work on these.\r\n\r\nI realize these are unrelated T5 issues, but before I file other feature requests /bugs I just wanted to run them by you:\r\n* auto_lr_find and auto_scale_batch_size (pytorch lightning flags) when used from the finetune.sh script throw errors. Should these be usable? (I can debug and figure out why they're not working; but I want to know if they should be working)\r\n* I am unable to get the finetune.sh script to resume from a checkpoint (I played around with this for ~2 hours last night) and was unable to make it resume. Should this be supported?", "auto*: Would be nice if they worked!\r\nit should work with `--resume_from_checkpoint`, but that part of lightning has been very flaky.\r\n\r\n\r\nI probably won't fix either of these but would definitely accept a PR that allow clargs that currently don't work. If you can't fix, you could also make separate issues for clargs that don't work, label them \"Help Wanted\" and see what happens.\r\nIf you make issues, make sure to include your PL version.\r\n\r\n", "@jsrozner did you `finetune.py` work for fine-tuning T5? 
\r\n\r\nWe're also having [some difficulties](https://discuss.huggingface.co/t/issue-with-finetuning-a-seq-to-seq-model/1680/2). Wanted to make sure if it has worked for someone else, at least. ", "@danyaljj will be fixed by #8435", "Thanks, @jsrozner for the update! \r\nDoes this address the issue [here](https://discuss.huggingface.co/t/issue-with-finetuning-a-seq-to-seq-model/1680/27?u=danyaljj)? Mainly your observation that: \r\n\r\n> But even after setting eval_beams=1, eval_max_gen_length=40, it still continues to generate many more tokens than it should ", "Did you pass `min_length=0` to generate?", "See issue #5142 for resolution" ]
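A minimal sketch of the offline check suggested in the comments above: decode with `min_length=0` and a small `max_length` so the model is allowed to stop right after the first word. The checkpoint path and the length values are placeholders, not artifacts from the thread.

```python
# Sketch of the offline evaluation discussed above; the path is a placeholder.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("path/to/finetuned-checkpoint")
model = T5ForConditionalGeneration.from_pretrained("path/to/finetuned-checkpoint")

input_ids = tokenizer("We raised a bloom, a monster", return_tensors="pt").input_ids
# min_length=0 lets the decoder emit </s> immediately after the target word
output = model.generate(input_ids, min_length=0, max_length=10, num_beams=4)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```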
1,602
1,605
1,602
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.4.0-116-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: (tried with both 1 and 2 gpus) ### Who can help Summarization: @sshleifer T5: @patrickvonplaten examples/seq2seq: @sshleifer ## Information I am trying to finetune on a custom dataset. I posted about my specific use case here in the forums: https://discuss.huggingface.co/t/t5-tips-for-finetuning-on-crossword-clues-clue-answer/1514 The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X ] my own task or dataset: (give details below) ## To reproduce * clone transformers from master * pip install -e . ; pip install -r requirements.txt * cd exampls/seq2seq * modify finetune_t5.sh script to run with a local data set (data_set/[val|test|train].[source|target]) (Note that I have changed nothing else) `python finetune.py \ --model_name_or_path=t5-small \ --tokenizer_name=t5-small \ --data_dir=${HOME}/data_set \ --learning_rate=3e-4 \ --output_dir=$OUTPUT_DIR \ --max_source_length=100 \ --max_target_length=100 \ --num_train_epochs=300 \ --train_batch_size=64 \ --eval_batch_size=64 \ --gpus=1 \ --auto_select_gpus=True \ --save_top_k=3 \ --output_dir=$OUTPUT_DIR \ --do_train \ --do_predict \ "$@" ` As a baseline "does the T5 work", my input outputs are of the form (one per line) (this is one line in train.source): This is a sentence (this is corresponding line in train.target): This The lines are exactly as above, with a new line after each example, but with no other punctuation. I have not modified tokens or the model. ## Expected behavior Expect T5 to learn to output the first word. ## Observed T5 outputs first word followed by gibberish: After 300 epochs, here is what we see for the first 5 lines of source vs test_generation (test.target is just the first word of each line in test.source) Test.source: We raised a bloom, a monster I let Satan corrupt and torment Chapter in play is an old piece Old skin disease liable to drain confidence Keep a riot going inside a musical academy test_generations: We vsahmoastuosastostassymbossa Issahrastahmoormentostormentastoshomment Chapter vshygie'ny-futtahraffahtaftast Old hygienohmahrastassahuasairtia Keep'astifiahuassaivrasastoshygiesana I wonder if any of the following could be affecting this: * choice of loss function * a corrupted character somewhere in one of the input/output * choice of task (I think it defaults to summarization) * need more epochs? * some other parameter to change?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7796/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7795
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7795/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7795/comments
https://api.github.com/repos/huggingface/transformers/issues/7795/events
https://github.com/huggingface/transformers/pull/7795
721,742,284
MDExOlB1bGxSZXF1ZXN0NTAzNjA4NDM0
7,795
Fix TF SavedModel in Roberta
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,686
1,602
CONTRIBUTOR
null
# What does this PR do? This PR fixes an issue in the TensorFlow version of Roberta. The issue prevented saving any Roberta model in the SavedModel format. Fixes #7783
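A hedged sketch of the export path this fix is meant to unblock; the exact serving signature and the recommended export API may differ across transformers/TensorFlow versions.

```python
# Sketch: export a TF Roberta model to the SavedModel format and reload it.
import tensorflow as tf
from transformers import TFRobertaModel

model = TFRobertaModel.from_pretrained("roberta-base")
model(model.dummy_inputs)  # build the network once so all weights exist
tf.saved_model.save(model, "roberta_saved_model")

reloaded = tf.saved_model.load("roberta_saved_model")
```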
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7795/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7795/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7795", "html_url": "https://github.com/huggingface/transformers/pull/7795", "diff_url": "https://github.com/huggingface/transformers/pull/7795.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7795.patch", "merged_at": 1602712131000 }
https://api.github.com/repos/huggingface/transformers/issues/7794
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7794/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7794/comments
https://api.github.com/repos/huggingface/transformers/issues/7794/events
https://github.com/huggingface/transformers/pull/7794
721,713,252
MDExOlB1bGxSZXF1ZXN0NTAzNTg0MjAx
7,794
Updated Tokenizer to 0.9.1 from prerelease version
{ "login": "dciborow", "id": 9027725, "node_id": "MDQ6VXNlcjkwMjc3MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/9027725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dciborow", "html_url": "https://github.com/dciborow", "followers_url": "https://api.github.com/users/dciborow/followers", "following_url": "https://api.github.com/users/dciborow/following{/other_user}", "gists_url": "https://api.github.com/users/dciborow/gists{/gist_id}", "starred_url": "https://api.github.com/users/dciborow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dciborow/subscriptions", "organizations_url": "https://api.github.com/users/dciborow/orgs", "repos_url": "https://api.github.com/users/dciborow/repos", "events_url": "https://api.github.com/users/dciborow/events{/privacy}", "received_events_url": "https://api.github.com/users/dciborow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Why not `>= 0.9.1` or `>=0.8.1.rc2`?", "Hi, we have a strict requirement on `tokenizers==0.8.1rc2`. We're updating it in https://github.com/huggingface/transformers/pull/7659 but the current `transformers` `master` branch will stay pinned until that PR is merged.\r\n\r\nBoth libraries evolve quickly and generally evolve together, so having a strict `==` dependency is necessary until tokenizers version 1.0.0 is released." ]
1,602
1,602
1,602
NONE
null
Use latest stable version instead of RC prerelease. # What does this PR do? Upgrades Tokenizer to update release version, from pre-release version. <!-- Remove if not applicable --> Fixes # (issue) #7794 ## Before submitting - [ x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7794/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7794", "html_url": "https://github.com/huggingface/transformers/pull/7794", "diff_url": "https://github.com/huggingface/transformers/pull/7794.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7794.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7793
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7793/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7793/comments
https://api.github.com/repos/huggingface/transformers/issues/7793/events
https://github.com/huggingface/transformers/pull/7793
721,693,363
MDExOlB1bGxSZXF1ZXN0NTAzNTY3OTM5
7,793
Add specific notebook ProgressCallback
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
COLLABORATOR
null
# What does this PR do? This PR introduces a new `NotebookProgressCallback` that is more suitable for trainings in notebook. There are two problems with the current use of tqdm: - tqdm uses widgets, which disappear when you close and reopen your notebook, or download it from github. Instead of seeing the full progress bar, a message "A Jupyter widget could not be displayed because the widget state could not be found. This could happen if the kernel storing the widget is no longer available, or if the widget state was not saved in the notebook. You may be able to create the widget by running the appropriate cells." appears - tqdm creates a new widget each time you open a new progress bar, which, when closed, leaves a blank line there is absolutely no way to remove (and I have tried!) This means we have one such blank line for every evaluation. What's more, notebooks can properly render html code, so we can structure the output displayed during and at the end of training a bit better, using a table for instance. This PR aims at tackling the issues above by: - writing its own progress bar in pure HTML and using the `IPython.display` module to display and update it - adding a table of results also in pure HTML to that progress bar. It adds no dependency and just add a test (taken from tqdm.auto) to determine if the user is executing code in a notebook environment or not, and picking the best `ProgressCallback` accordingly. It goes from the previous results: ![](https://i.ibb.co/W5zqv6n/old-progress.png) To those new ones: ![](https://i.ibb.co/m8zpmmk/new-progress.png) With a bit more work, it's also possible to add a graph of the losses/metrics that gets updated as the training progresses.
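A rough sketch of the two building blocks the description mentions: a tqdm.auto-style notebook check and an HTML progress bar updated in place via `IPython.display`. The function name and markup below are illustrative, not the ones added by this PR.

```python
# Sketch only: detect a notebook kernel and re-render one HTML progress bar
# in place instead of stacking widgets.
def in_notebook():
    # tqdm.auto-style check: a Jupyter kernel runs a ZMQ-based interactive shell
    try:
        from IPython import get_ipython

        shell = get_ipython()
        return shell is not None and shell.__class__.__name__ == "ZMQInteractiveShell"
    except ImportError:
        return False


if in_notebook():
    from IPython.display import HTML, display

    handle = display(HTML("<progress value='0' max='100'></progress> 0/100"), display_id=True)
    for step in range(1, 101):
        # update the same output area; no widget state is stored in the notebook
        handle.update(HTML(f"<progress value='{step}' max='100'></progress> {step}/100"))
```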
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7793/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7793", "html_url": "https://github.com/huggingface/transformers/pull/7793", "diff_url": "https://github.com/huggingface/transformers/pull/7793.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7793.patch", "merged_at": 1602752709000 }
https://api.github.com/repos/huggingface/transformers/issues/7792
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7792/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7792/comments
https://api.github.com/repos/huggingface/transformers/issues/7792/events
https://github.com/huggingface/transformers/issues/7792
721,679,819
MDU6SXNzdWU3MjE2Nzk4MTk=
7,792
[stas/sam] Newsroom dataset weirdness
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Here is part of the problem\r\n\r\n![image](https://user-images.githubusercontent.com/6045025/96030008-6e2fb380-0e29-11eb-96c6-994955fa588d.png)\r\n\r\nall one line (221 in vim)\r\n", "Oh, I know - it's \\cM characters. Let me take care of it.\r\n\r\n```\r\nGoogle is still the best company to work for, according to Fortune\r\n.^M<n>^M<n>The Mountain View-based tech giant earned the top \r\n^^^^^^^^^^^^^\r\n```", "Easiest way to clarify will be to say how I fixed (in vim)\r\n\r\n(1) `%s/^M//g` # This is not ctrl-m it must be typed by following these [instructions](https://stackoverflow.com/questions/5843495/what-does-m-character-mean-in-vim#:~:text=Windows%20uses%20a%20combination%20of,letter%20in%20the%20English%20alphabet).&text=Where%20%5EM%20is%20entered%20by,m%20%2C%20and%20then%20releasing%20Ctrl%20.)\r\n \r\n(2)`%s/<n><n>/<n>/g` (probably not necesarry, but did it anyway).\r\n", "```\r\ndos2unix filename\r\n```", "I will fix that in the build script", "```\r\n src = re.sub(r'[\\r\\n]+', '<n>', src)\r\n tgt = re.sub(r'[\\r\\n]+', '<n>', tgt)\r\n```" ]
1,602
1,602
1,602
CONTRIBUTOR
null
#### get data ```bash cd examples/seq2seq/ curl -L -o stas_data.tgz https://www.dropbox.com/sh/ctpx2pflb9nmt0n/AABRTDak-W06RD8KxuCOUdXla\?dl\=0 && unzip stas_data.tgz tar -xzvf newsroom-test.tgz ``` ```python from utils import Seq2SeqDataset tok = PegasusTokenizer.from_pretrained('google/pegasus-newsroom') ds = Seq2SeqDataset(tok, 'newsroom/data', tok.model_max_length, tok.model_max_length, type_path='test') ds[659]['tgt_texts'] # "Insomniac's Pasquale Rotella has gone from throwing illegal raves in warehouses to throwing the nation's most iconic dance music festival in Las Vegas' Electric Daisy Carnival. " ds[660] --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-17-7fbeab38f815> in <module> ----> 1 ds[660] ~/transformers_fork/examples/seq2seq/utils.py in __getitem__(self, index) 248 tgt_line = linecache.getline(str(self.tgt_file), index).rstrip("\n") 249 assert source_line, f"empty source line for index {index}" --> 250 assert tgt_line, f"empty tgt line for index {index}" 251 return {"tgt_texts": tgt_line, "src_texts": source_line, "id": index - 1} 252 AssertionError: empty tgt line for index 661 ``` Clue: In vim, the "Pasquale Rotella" line is 654 (off by 7/possible other bug), but it is 659/660 in the ds. similarly, `linecache` disagrees with `wc -l` about file lengths. ```python import linecache src_lns = linecache.getlines(str(ds.src_file)) tgt_lns = linecache.getlines(str(ds.tgt_file)) assert len(src_lns) == len(tgt_lns),f'{ len(src_lns)} != {len(tgt_lns)}' AssertionError: 108717 != 110412 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7792/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7792/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7791
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7791/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7791/comments
https://api.github.com/repos/huggingface/transformers/issues/7791/events
https://github.com/huggingface/transformers/issues/7791
721,677,219
MDU6SXNzdWU3MjE2NzcyMTk=
7,791
T5 Conversion from Original TensorFlow Produces Rubbish Text
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hey @agemagician - did you train your model using the \"newer\" T5 model (see here https://github.com/huggingface/transformers/issues/6285) for reference or is it the \"original\" T5 model?", "No, this is the original T5 model.\r\n\r\nI just doubled checked the training script as well as the operative_config :\r\nhttps://storage.googleapis.com/t5_convert_tranformers/model/operative_config.gin ", "Ok! From a first check of your google colab it looks like the model was correctly converted to PT (the `\"Weights not copied to PyTorch model:` message is empty meaning that all PT weights are initialiazed). \r\n\r\nDo you think you could check if it might be the tokenizer that does not work correctly? Could you maybe run an integration test for some `input_ids` to check if original t5 implementation yields same output as the PT version?", "I have loaded the original T5 tokenizer then encoded the data and performed generation using Pytorch to make sure the input is the same for both original T5 script and Pytorch script, and the results is still rubbish.\r\n\r\nI have checked the original T5 tokenizer and Pytorch tokenizer and they produce the same encoding/decoding. The only difference is that Pytorch tokenizer doesn't append Eos.\r\n\r\nI have added a new section on the Colab \"Part IIII: Check tokenizers\" which perform these tests.", "Since the input is the same to both original T5 script and Pytorch script, I think the issue should be in one of the following:\r\n1. The conversion process.\r\n2. The generation process.\r\n3. The loading process.", "Thanks, I hope to be able to take a look at this soon!", "@patrickvonplaten Any update for fixing this issue ?\r\n\r\nWe started to release our models for the following tasks:\r\n\r\n1. api generation\r\n2. code comment generation\r\n3. commit generation\r\n4. function documentation generation\r\n5. program synthesis\r\n6. source code summarization\r\n7. Code generation\r\n\r\nfor the following languages:\r\n\r\n1. go\r\n2. java\r\n3. javascript\r\n4. php\r\n5. python\r\n6. ruby\r\n7. c#\r\n8. SQL\r\n9. LISP\r\n\r\nhttps://github.com/agemagician/CodeTrans\r\n\r\nHowever, we are using T5 original library for now, as huggingface transformers is still producing rubbish text after conversion.\r\n\r\nIt will be really useful if we can integrate and use huggingface transformers for this project too.\r\n", "Will take a look today!", "@agemagician - I looked into it. It's quite nightmarish to debug in mesh tensorflow ... :-/ I couldn't find the bug sadly and it's getting very time-consuming. I'll gonna spend some time now to integrate mt5 and T5v1.1, so I'll still be working with the mesh tensorflow library. I hope to be able to come back to this problem! A couple of things I found out:\r\n\r\n1) The `input_ids` passed to the Encoder for \r\n```\r\n\"Code: function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }\r\nDocumentation: Returns true if the browser is a native element .\"\r\n```\r\nis actually not the same for Hugging Face T5 and Mesh TF T5. 
=> I suspect the tokenizers to behave differently here or mesh tf to do something under the hood with the input text\r\n\r\n2) Sadly even if I pass the exact same `input_ids` to the encoder of both models, the encoder outputs are still different => this means that there is a different in the architecture. I suspect that mesh TensorFlow handles the `relative_attention_bias` different for the `EncoderDecoderSelfAttention`. In the mesh tensorflow's gin it's set no `None`, but in our code its definitely used. Did not manage to check it here in more detail. \r\n\r\n=> Overall the problem is that `mesh_tensorflow` is constantly adding new features that are configurable with the gin config, but some of these new features are not implemented in HF and are therefore not used. So what is probably happening is that a mesh tensorflow trained model has the exact same weights as the HF implementation but has a slightly different architecture that cannot be configured with the HF T5 model...it's very hard for us to make sure that mesh tensorflow is kept constantly compatible with HF and we probably won't have the time to make sure it is. The only real solution is to use a HF pre-trained and train it within our environment or make sure that before mesh tensorflow training that the model is compatible with HF (checking the output of the pretrained models).\r\n\r\nIn case you want to take a deeper look here are my simplified scripts I used for debugging:\r\n\r\nfor mesh tf model:\r\n\r\n```python\r\nimport t5\r\nfrom t5.data.sentencepiece_vocabulary import SentencePieceVocabulary\r\n\r\nt5_model = t5.models.MtfModel(\r\n model_dir=\"./checkpoint\",\r\n batch_size=16,\r\n sequence_length={\"inputs\": 128, \"targets\": 32},\r\n learning_rate_schedule=0.003,\r\n save_checkpoints_steps=5000,\r\n keep_checkpoint_max=None,\r\n iterations_per_loop=100,\r\n tpu=None\r\n)\r\n\r\nvocab_model_path = 'gs://t5_convert_tranformers/spm/code_spm_unigram_40M.model'\r\nvocab = SentencePieceVocabulary(vocab_model_path, extra_ids=100)\r\n\r\nt5_model.predict(\r\n input_file=\"input.txt\",\r\n output_file=\"output.txt\",\r\n vocabulary=vocab,\r\n temperature=0\r\n)\r\n```\r\n\r\nand HF:\r\n\r\n```python\r\nfrom transformers import T5ForConditionalGeneration, T5Tokenizer\r\nimport torch\r\n\r\ninput_text = \"javascript documentation generation: function isStandardBrowserEnv ( ) { if ( typeof navigator !== 'undefined' && ( navigator . product === 'ReactNative' || navigator . product === 'NativeScript' || navigator . product === 'NS' ) ) { return false ; } return ( typeof window !== 'undefined' && typeof document !== 'undefined' ) ; }\"\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"./pytorch_model\").to(\"cuda\")\r\ntok = T5Tokenizer.from_pretrained(\"./pytorch_model\")\r\n\r\n#input_ids = tok(input_text, return_tensors=\"pt\").input_ids.to(\"cuda\")\r\ninput_ids = torch.tensor([[69, 8316, 3952, 12059, 171, 69, 34, 11451, 7798,\r\n 6614, 5, 6, 12, 29, 5, 644, 16747, 494,\r\n 20, 3910, 36, 129, 5, 16747, 4, 1668, 232,\r\n 20, 23435, 6462, 36, 194, 16747, 4, 1668, 232,\r\n 20, 6462, 2769, 36, 194, 16747, 4, 1668, 232,\r\n 20, 4759, 36, 6, 6, 12, 30, 181, 9,\r\n 16, 30, 5, 644, 1066, 494, 20, 3910, 36,\r\n 129, 644, 722, 494, 20, 3910, 36, 6, 9,\r\n 16, 1]], dtype=torch.long, device=\"cuda\")\r\n\r\n\r\noutput = model.generate(input_ids, num_beams=4)\r\n\r\nprint(tok.batch_decode(output))\r\n```\r\n\r\nThen my folders had the following files (same as in your notebook). 
\r\n```bash\r\nls checkpoint\r\ncheckpoint code_spm_unigram_40M.model graph.pbtxt model.ckpt-16000.data-00000-of-00002 model.ckpt-16000.data-00001-of-00002 model.ckpt-16000.index model.ckpt-16000.meta operative_config.gin\r\n```\r\nand\r\n```bash\r\nls pytorch_model\r\nconfig.json pytorch_model.bin special_tokens_map.json spiece.model tokenizer_config.json\r\n```\r\n\r\nwith all the pytorch models converted from the mesh tf spm and mesh tf checkpoint (as you've done in the colab).\r\n\r\n\r\nAnd then one has to put a lot of `mtf.print(x, [x], \"output: \", summarize=-1)` statements in the mesh tensorflow code - here e.g.: https://github.com/tensorflow/mesh/blob/165d3dc7b4186ee5b6d31c9b17b3df4f7571cf42/mesh_tensorflow/transformer/transformer_layers.py#L729, but that's very painful ;-) \r\n\r\nAlso, see here for debugging advice: https://github.com/tensorflow/mesh/issues/235\r\n\r\nMaybe by some miracle I find the problem over the next two weeks while further looking into mesh tensorflow.\r\n\r\nSorry, to be not too much of help here.\r\n\r\n", "Hi @patrickvonplaten ,\r\n\r\nThanks a lot for looking into this issue.\r\nWe highly appreciate your effort and sorry if it wasted your time.\r\n\r\nI have also tested our protein model \"prot_t5_xl_bfd\" for protein sequence generation and it has the same issue. Also our next 11B model for protein sequences \"prot_t5_xxl_bfd\" will have the same issue.\r\nThis means the current results that we have from all our T5 models are not correct.\r\n\r\nDo you know if this issue exist in only the decoder or both the encoder and the decoder ?\r\nbecause currently we are only using the encoder on \"prot_t5_xl_bfd\" for feature extraction.\r\n\r\nI have also checked MT5 and T5v1.1 and they seem to have the same issue as our current models, so if you will work on T5v1.1, you will highly likely find the issue and the solution for path ProtTrans models and ProtCode models.\r\n\r\nThanks again for your time, and I will leave this issue open, until you finish T5v1.1 implementation. ", "It's both encoder and decoder. Even the same encoder input yielded a different encoder output", "This is really bad for the ProtTrans project.\r\nThanks a lot Patrick for your clear reply.\r\nI will try to debug it from my side, and I will update you if I found the issue.", " I got T5v1.1 working now I think: https://github.com/huggingface/transformers/pull/8488. But this code will certainly not work with your example since the Feed-Forward layer has different weights...\r\n\r\nLet me take a look again at this issue in a bit. 
Could you maybe provide me with a code example where I just need to download 1) of your pretrained checkpoints\r\n2) run a code snippet of the following format:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport os\r\n\r\nos.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"3\" # or any {'0', '1', '2'}\r\n\r\nimport t5 # noqa: E402\r\nfrom t5.data.sentencepiece_vocabulary import SentencePieceVocabulary # noqa: E402\r\nfrom transformers import T5Tokenizer # noqa: E402\r\nfrom transformers.convert_t5_v1_1_original_tf_checkpoint_to_pytorch import ( # noqa: E402\r\n convert_tf_checkpoint_to_pytorch,\r\n)\r\nfrom transformers.modeling_t5v2 import T5Config, T5v2ForConditionalGeneration # noqa: E402\r\n\r\n\r\npath_to_tf_checkpoint = \"/home/patrick/hugging_face/t5v1.1/t5_mesh_checkpoints\"\r\n\r\n\r\ntok = T5Tokenizer.from_pretrained(\"t5-small\")\r\ntok.save_pretrained(path_to_tf_checkpoint)\r\nconfig = T5Config.from_pretrained(\"t5-small\")\r\nconfig.d_ff = 1024\r\nconfig.num_decoder_layers = 8\r\nconfig.num_layers = 8\r\nconfig.num_heads = 6\r\n\r\nconfig.save_pretrained(path_to_tf_checkpoint)\r\n\r\nconvert_tf_checkpoint_to_pytorch(path_to_tf_checkpoint, path_to_tf_checkpoint + \"/config.json\", path_to_tf_checkpoint)\r\n\r\nt5_model = t5.models.MtfModel(\r\n model_dir=path_to_tf_checkpoint,\r\n batch_size=1,\r\n tpu=None,\r\n sequence_length={\"inputs\": 4, \"targets\": 4},\r\n)\r\n\r\nvocab_model_path = path_to_tf_checkpoint + \"/sentencepiece.model\"\r\nvocab = SentencePieceVocabulary(vocab_model_path, extra_ids=100)\r\n\r\nscore = t5_model.score(\r\n inputs=[\"Hello there\"],\r\n targets=[\"Hi I am\"],\r\n vocabulary=vocab,\r\n)\r\n\r\nmodel = T5v2ForConditionalGeneration.from_pretrained(path_to_tf_checkpoint, return_dict=True)\r\n\r\ninput_ids = tok(\"Hello there\", return_tensors=\"pt\").input_ids\r\nlabels = tok(\"Hi I am\", return_tensors=\"pt\").input_ids\r\n\r\n# input_ids and labels are ok!\r\nloss = model(input_ids, labels=labels).loss\r\n\r\nassert -(labels.shape[-1] * loss.item()) - score[0][0] < 1e-4\r\n```\r\n\r\nIf all the code would be in one file -> this would really help me save time in debugging. Otherwise, maybe we can have a quick call early next week (Monday maybe?) to discuss how to best tackle the error. I got a bit lost in all the colab notebook. I'm sure it's not that hard to fix actually.", "Great @patrickvonplaten , \"du bist der Beste\" :\r\n\r\nI have created a Colab that runs your code and download one of the CodeTrans models:\r\nhttps://colab.research.google.com/drive/149F64wSOjm5O-HdLWpdWJE4dAMUA-Waa?usp=sharing\r\n\r\nImportant notes:\r\n1. This model is using the original T5 model not v1.1. ie (word embedding is tied, uses dropout, uses RELU)\r\n2. It is the base model.\r\n\r\nLet me know if anything else is required.", "should be fixed now. Everything is explained in the PR.", "Woohoo, thanks a lot @patrickvonplaten, you are the best 😄 " ]
1,602
1,605
1,605
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Text Generation: @TevenLeScao T5: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): T5 The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: https://colab.research.google.com/drive/112Jt7VFwHHT-QmMxFPJ764GNJBn0d5eX?usp=sharing ## Expected behavior We have started a big project for source code tasks (generation, summarisation, documentation, etc.) using language models. Using T5 text to text library, the model can predict the input correctly, However, after we converted the Tensorflow checkpoint to huggingface the output text is rubbish. I am not sure if we are doing something wrong during conversion or there is a problem in loading and converting the weights from the original Tensorflow checkpoint to Pytorch. The above Colab re-produce the issue. Important Note: We are using a copy of "adapt_t5_for_covid_19_3b" branch which should fix the conversion problem with only one small modification, setting is_tied to false. Your help is highly appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7791/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7790
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7790/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7790/comments
https://api.github.com/repos/huggingface/transformers/issues/7790/events
https://github.com/huggingface/transformers/pull/7790
721,604,419
MDExOlB1bGxSZXF1ZXN0NTAzNDk0NzY4
7,790
updated bangla-bert-base model card with evaluation results
{ "login": "sagorbrur", "id": 10723655, "node_id": "MDQ6VXNlcjEwNzIzNjU1", "avatar_url": "https://avatars.githubusercontent.com/u/10723655?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sagorbrur", "html_url": "https://github.com/sagorbrur", "followers_url": "https://api.github.com/users/sagorbrur/followers", "following_url": "https://api.github.com/users/sagorbrur/following{/other_user}", "gists_url": "https://api.github.com/users/sagorbrur/gists{/gist_id}", "starred_url": "https://api.github.com/users/sagorbrur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sagorbrur/subscriptions", "organizations_url": "https://api.github.com/users/sagorbrur/orgs", "repos_url": "https://api.github.com/users/sagorbrur/repos", "events_url": "https://api.github.com/users/sagorbrur/events{/privacy}", "received_events_url": "https://api.github.com/users/sagorbrur/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
Hi, I just updated the bangla-bert-base model card with evaluation results. Also fixed some minor typos. Please check, and if possible please merge. Thanks and regards, Sagor
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7790/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7790", "html_url": "https://github.com/huggingface/transformers/pull/7790", "diff_url": "https://github.com/huggingface/transformers/pull/7790.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7790.patch", "merged_at": 1602694243000 }
https://api.github.com/repos/huggingface/transformers/issues/7789
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7789/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7789/comments
https://api.github.com/repos/huggingface/transformers/issues/7789/events
https://github.com/huggingface/transformers/issues/7789
721,593,102
MDU6SXNzdWU3MjE1OTMxMDI=
7,789
Recommended Adafactor settings for T5 cause error
{ "login": "OyvindTafjord", "id": 6453366, "node_id": "MDQ6VXNlcjY0NTMzNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/6453366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OyvindTafjord", "html_url": "https://github.com/OyvindTafjord", "followers_url": "https://api.github.com/users/OyvindTafjord/followers", "following_url": "https://api.github.com/users/OyvindTafjord/following{/other_user}", "gists_url": "https://api.github.com/users/OyvindTafjord/gists{/gist_id}", "starred_url": "https://api.github.com/users/OyvindTafjord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OyvindTafjord/subscriptions", "organizations_url": "https://api.github.com/users/OyvindTafjord/orgs", "repos_url": "https://api.github.com/users/OyvindTafjord/repos", "events_url": "https://api.github.com/users/OyvindTafjord/events{/privacy}", "received_events_url": "https://api.github.com/users/OyvindTafjord/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "\r\nI think the doc should recommend\r\n```python\r\nAdafactor(model.parameters(), relative_step=True, warmup_init=True, lr=None)\r\n```\r\nwant to fix it?", "I think what corresponds to the original T5 training code is `Adafactor(model.parameters(), lr=1e-3, relative_step=False, warmup_init=False)`, however that didn't work great for me so far (much slower than Adam, and giving me NaN's even in FP32).", "Hello @OyvindTafjord, have you been able to fine-tune T5 with Adafactor? Thanks, Sonali", "No, I haven't investigated further regarding the slowness and NaN's I was getting.", "This issue persists (i.e. the suggested defaults still produce the error).\r\n\r\nI can confirm that `Adafactor(lr=1e-3, relative_step=False, warmup_init=False)` seems to break training (i.e. I observe no learning over 4 epochs, whereas `Adafactor(model.parameters(), relative_step=True, warmup_init=True, lr=None)` works well (much better than adam)" ]
1,602
1,617
1,617
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.3.1 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sshleifer (from activity on Adafactor PRs) ## Information Model I am using (Bert, XLNet ...): T5 The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce The Adafactor docs recommend the following for T5 : `Adafactor(model.parameters(), lr=1e-3, relative_step=False, warmup_init=True)` However, the init code then has: ``` if lr is not None and relative_step: raise ValueError("Cannot combine manual lr and relative_step options") if warmup_init and not relative_step: raise ValueError("warmup_init requires relative_step=True") ``` which makes this setting impossible (as well as just changing to `relative_step=True`). So something seems to be missing either in the recommendations or in the implementation. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7789/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7788
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7788/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7788/comments
https://api.github.com/repos/huggingface/transformers/issues/7788/events
https://github.com/huggingface/transformers/issues/7788
721,579,969
MDU6SXNzdWU3MjE1Nzk5Njk=
7,788
error when using the forward() function of the LongformerLayer class from the LongformerForMultipleChoice model
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
NONE
null
Hello, Sorry if my question sounds a bit silly, but I just have a question: I am trying to feed in the hidden output of the embedding layer of the `LongformerForMultipleChoice` model directly into the m-th layer of the same model. Each of my multiple-choice question that has 4 options. When I do: ```Python my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, attention_mask=my_attention_mask,output_attention=False) ``` , an this error is generated: ```Python File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 384, in _sliding_chunks_query_key_matmul batch_size, seq_len, num_heads, head_dim = query.size() ValueError: too many values to unpack (expected 4) ``` Here, `my_attention_mask` is the same attention mask that I would specify under the regular `LongformerForMultipleChoice` command. `my_attention_mask` was generated by: ```Python # I am using the LongformerForMultipleChoice model, where each multiple choice question has 4 options. encoded_dict = longformer_tokenizer(question_list, option_list, return_tensors = 'pt', padding ='max_length') my_attention_mask = {k: v.unsqueeze(0) for k,v in encoded_dict.items()}['attention_mask'] my_attention_mask >>> tensor([[[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]]]) # I can use this my_attention_mask in the regular command without an error, as below: longformer_output= my_Longformer_multiple_choice_model(input_ids=input_ids,....,attention_mask=my_attention_mask) ``` Also, the `hidden_output` in my command was generated by the following: ```Python encoded_dict = longformer_tokenizer(question_list, option_list, return_tensors = 'pt', padding ='max_length') hidden_output = my_Longformer_multiple_choice_model(**{k: v.unsqueeze(0) for k,v in encoded_dict.items()}, labels = mc_labels)[2][0][:,:,:] hidden_output.size() >>> torch.Size([4, 4096, 768]) ``` I am suspecting the value error is generated because the form of `my_attention_mask` is wrong. What should I pass for the `attention_mask` parameter in the command `my_Longformer_multiple_choice_model.encoder.layer[layer_index].forward(hidden_output, attention_mask,output_attention=False)`? Thank you, @LysandreJik @NielsRogge @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7788/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7787
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7787/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7787/comments
https://api.github.com/repos/huggingface/transformers/issues/7787/events
https://github.com/huggingface/transformers/pull/7787
721,578,897
MDExOlB1bGxSZXF1ZXN0NTAzNDczNTE3
7,787
Fixing beam search output shapes
{ "login": "nicola-decao", "id": 9703100, "node_id": "MDQ6VXNlcjk3MDMxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/9703100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nicola-decao", "html_url": "https://github.com/nicola-decao", "followers_url": "https://api.github.com/users/nicola-decao/followers", "following_url": "https://api.github.com/users/nicola-decao/following{/other_user}", "gists_url": "https://api.github.com/users/nicola-decao/gists{/gist_id}", "starred_url": "https://api.github.com/users/nicola-decao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nicola-decao/subscriptions", "organizations_url": "https://api.github.com/users/nicola-decao/orgs", "repos_url": "https://api.github.com/users/nicola-decao/repos", "events_url": "https://api.github.com/users/nicola-decao/events{/privacy}", "received_events_url": "https://api.github.com/users/nicola-decao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? `generate` in `generation_utils.py` returns a list of size `batch_size * num_beams` where it is much more practical if it returns a list of lists. The first list of size `batch_size` and all internal lists of size `num_beams`. In this way, one can generate using mini-batches of variable size and no do reshapes each time. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @patrickvonplaten , @TevenLeScao
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7787/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7787", "html_url": "https://github.com/huggingface/transformers/pull/7787", "diff_url": "https://github.com/huggingface/transformers/pull/7787.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7787.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7786
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7786/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7786/comments
https://api.github.com/repos/huggingface/transformers/issues/7786/events
https://github.com/huggingface/transformers/pull/7786
721,572,348
MDExOlB1bGxSZXF1ZXN0NTAzNDY4MDYw
7,786
Don't use `store_xxx` on optional bools
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,648
1,602
COLLABORATOR
null
# What does this PR do? Optional bool fields in `TrainingArguments` are given the `store_true` attribute by `HFArgumentParser`, which can lead to bugs (as highlighted in #7755). This PR fixes this and, to avoid breaking existing scripts, removes the optional in the `evaluate_during_training` argument. It also fixes a few instances to call the right argument (since the old one is deprecated). Fixes #7755
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7786/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7786", "html_url": "https://github.com/huggingface/transformers/pull/7786", "diff_url": "https://github.com/huggingface/transformers/pull/7786.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7786.patch", "merged_at": 1602691503000 }
https://api.github.com/repos/huggingface/transformers/issues/7785
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7785/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7785/comments
https://api.github.com/repos/huggingface/transformers/issues/7785/events
https://github.com/huggingface/transformers/issues/7785
721,565,107
MDU6SXNzdWU3MjE1NjUxMDc=
7,785
[RAG] RagTokenizer failing in decoding RAG Generator output
{ "login": "lalitpagaria", "id": 19303690, "node_id": "MDQ6VXNlcjE5MzAzNjkw", "avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lalitpagaria", "html_url": "https://github.com/lalitpagaria", "followers_url": "https://api.github.com/users/lalitpagaria/followers", "following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}", "gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}", "starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions", "organizations_url": "https://api.github.com/users/lalitpagaria/orgs", "repos_url": "https://api.github.com/users/lalitpagaria/repos", "events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}", "received_events_url": "https://api.github.com/users/lalitpagaria/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry it was my mistake, I need to get generated ids calling `model.generate` instead of `model`.\r\n\r\nAdding fix here if anyone search for the same issue -\r\n\r\n```\r\ngenerated_ids = model.generate(input_ids=input_ids, labels=input_dict[\"labels\"])\r\n\r\ngenerated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\nprint(generated_string)\r\n``` \r\n" ]
1,602
1,602
1,602
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @patrickvonplaten @LysandreJik ## Information Model I am using (Bert, XLNet ...): RAG The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: `dummy_dataset` * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run example code snippet from https://huggingface.co/transformers/master/model_doc/rag.html on `dummy_dataset` 2. Generate string from model output (Tried both `rag-sequence-nq` and `rag-token-nq` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` !pip install git+https://github.com/huggingface/transformers.git !pip install datasets !pip install faiss-cpu from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration import torch import faiss tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", use_dummy_dataset=True) retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt") input_ids = input_dict["input_ids"] model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) outputs = model(input_ids=input_ids, labels=input_dict["labels"]) generated_string = tokenizer.batch_decode(outputs, skip_special_tokens=True) print(generated_string) ``` Error returned is on executing `tokenizer.batch_decode(outputs, skip_special_tokens=True)` - ``` /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in convert_ids_to_tokens(self, ids, skip_special_tokens) 721 tokens = [] 722 for index in ids: --> 723 index = int(index) 724 if skip_special_tokens and index in self.all_special_ids: 725 continue ValueError: invalid literal for int() with base 10: 'l' ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Tokenizer should decode to string. Not sure but closely related to https://github.com/huggingface/transformers/pull/4836
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7785/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7784
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7784/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7784/comments
https://api.github.com/repos/huggingface/transformers/issues/7784/events
https://github.com/huggingface/transformers/pull/7784
721,561,637
MDExOlB1bGxSZXF1ZXN0NTAzNDU5MTQ0
7,784
Adding prefix constrained beam search
{ "login": "nicola-decao", "id": 9703100, "node_id": "MDQ6VXNlcjk3MDMxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/9703100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nicola-decao", "html_url": "https://github.com/nicola-decao", "followers_url": "https://api.github.com/users/nicola-decao/followers", "following_url": "https://api.github.com/users/nicola-decao/following{/other_user}", "gists_url": "https://api.github.com/users/nicola-decao/gists{/gist_id}", "starred_url": "https://api.github.com/users/nicola-decao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nicola-decao/subscriptions", "organizations_url": "https://api.github.com/users/nicola-decao/orgs", "repos_url": "https://api.github.com/users/nicola-decao/repos", "events_url": "https://api.github.com/users/nicola-decao/events{/privacy}", "received_events_url": "https://api.github.com/users/nicola-decao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The failed tests are not from this pull request but from RAG. Can someone review this, please?\r\n@patrickvonplaten , @TevenLeScao", "Hey @nicola-decao - thanks a lot for your PR here! The failing tests are actually due to this pull request. See a part of the error message here:\r\n\r\n```\r\n )\r\nE TypeError: _generate_beam_search() missing 1 required positional argument: 'prefix_allowed_tokens_fn'\r\n\r\nsrc/transformers/modeling_rag.py:1400: TypeError\r\n_______________________ RagDPRT5Test.test_model_generate _______________________\r\n[gw6] linux -- Python 3.7.9 /usr/local/bin/python\r\n\r\nself = <tests.test_modeling_rag.RagDPRT5Test testMethod=test_model_generate>\r\n\r\n def test_model_generate(self):\r\n inputs_dict = self.config_and_inputs\r\n> self.check_model_generate(**inputs_dict)\r\n```\r\n\r\nCould you fix the errors by slightly adapting the `generate` method in RAG to make it pass with your example? \r\n\r\nIn general I'm fine with this PR :-) ", "@patrickvonplaten now It should be ready for the merge :)", "Awesome! There is probably going to be a merge conflict with the big `generate()` refactor PR that will be merged today: https://github.com/huggingface/transformers/pull/6949 . \r\n\r\nWe changed the design for these kinds of \"logits processing\" methods so that we'll probalby have to change the PR here a bit (and should also add a test). But let's put the PR on hold for a day and then I can help you merge it! ", "@nicola-decao, also what would be a typical use case of this function? *E.g.* could you give a quick example of a function that one would use as this function? \r\n\r\nAlso @sshleifer could you check if this is useful for Fairseq's Blenderbot? And @yjernite this might be interesting to you as well :-) ", "@patrickvonplaten we do have a few use cases in mind internally already :) \r\n- prefix-triggered multi task *a la* T5: in many cases having the prefix in the output sequence makes more sense than in the input\r\n- seq2seq model evaluation: some metrics (e.g. ROUGE-20 in the [ELI5 paper](https://arxiv.org/abs/1907.09190)) measure the model's ability to \"continue\" a generation, which can correlate better with human judgments of quality that full generation ROUGE\r\n- seq2seq diagnostics: being able to measure the the effect of the input vs local context\r\n\r\n@nicola-decao did you have something along those lines in mind?", "> @nicola-decao, also what would be a typical use case of this function? _E.g._ could you give a quick example of a function that one would use as this function?\r\n> \r\n> Also @sshleifer could you check if this is useful for Fairseq's Blenderbot? And @yjernite this might be interesting to you as well :-)\r\n\r\n@patrickvonplaten an example would be using a seq2seq model to predict a wikipedia title as in **Autoregressive Entity Retrieval** (https://arxiv.org/abs/2010.00904) (I am the fist author and I want to release my models 😊 - that is why I did this PR). In this case the possible outputs are all the 6M wikipedia titles so one can create a prefix tree and constrain the generation only on these 6M strings. 
Here an example of what I have:\r\n```python\r\n# custom class that creates a prefix tree where only \"London\" and \"Rome\" are possible outputs\r\ntrie = PrefixTree([\"London\", \"Rome\"])\r\n\r\n# from a batch of tokens (torch.Tensor) returns a list of lists of allowed tokens (batches/ beams)\r\n# `trie.get` returns the possible next tokens (leaves if any) given a prefix\r\ndef prefix_allowed_tokens_fn(batch_tokens):\r\n return [\r\n [\r\n trie.get(tokens.tolist())\r\n for tokens in beam_tokens\r\n ]\r\n for beam_tokens in batch_tokens\r\n ]\r\n\r\n# encoding inputs\r\ninput_args = {\r\n k: v.to(model.device) for k, v in tokenizer.batch_encode_plus(\r\n [\"[START_ENT] London [END_ENT] is the capital of the UK.\"],\r\n return_tensors=\"pt\"\r\n ).items()\r\n}\r\n\r\n# generating and decoding\r\ntokenizer.batch_decode(\r\n model.generate(\r\n **input_args,\r\n min_length=0,\r\n num_beams=2,\r\n num_return_sequences=2,\r\n prefix_allowed_tokens_fn=prefix_allowed_tokens_fn,\r\n ),\r\n skip_special_tokens=True\r\n)\r\n>> ['London', 'Rome']\r\n```\r\n\r\n", "> Awesome! There is probably going to be a merge conflict with the big `generate()` refactor PR that will be merged today: #6949 .\r\n> \r\n> We changed the design for these kinds of \"logits processing\" methods so that we'll probalby have to change the PR here a bit (and should also add a test). But let's put the PR on hold for a day and then I can help you merge it!\r\n\r\n@patrickvonplaten I guess one way to do it is to implement a `logits_processor`. But what I want to touch the log-probabilities directly instead of the logits (I observed that this works better in practice). Maybe we can also add this logic too to the general method? ", "@nicola-decao - I think you should be able to directly touch the log-probs with a `LogitsProcessor` since there are applied after the `log_softmax` here: https://github.com/huggingface/transformers/blob/7abc1d96d114873d9c3c2f1bc81343fb1407cec4/src/transformers/generation_utils.py#L967 and from looking at this version of the PR it seems to work with a `LogitsProcessor`\r\n \r\nOr do you need to apply it to `log_probs + beam_score` ? so after this line: https://github.com/huggingface/transformers/blob/7abc1d96d114873d9c3c2f1bc81343fb1407cec4/src/transformers/generation_utils.py#L968? his would be more difficult then and we would have to see how to deal with it ... maybe introduce `logits_warper as well for `beam_search`... not sure yet!\r\n\r\nIt would be great if you could add a `LogitProcessor` - I kinda did the whole refactor to keep the functions clean :sweat_smile: . \r\n\r\nI'm sorry that the big generate refactor means that we have to change this PR now. Do you want to give it a shot with the new `generate()` design? Otherwise I'm happy to help :-) ", "@patrickvonplaten I can do it :) I'll make another PR today.\r\n", "@patrickvonplaten here the new PR: https://github.com/huggingface/transformers/pull/8529" ]
1,602
1,605
1,605
CONTRIBUTOR
null
# What does this PR do? This pull request adds a new decoding strategy that constrains the next token to generate based on a callable function. It mirrors https://github.com/pytorch/fairseq/pull/2646 for fairseq. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @patrickvonplaten , @TevenLeScao
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7784/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7784/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7784", "html_url": "https://github.com/huggingface/transformers/pull/7784", "diff_url": "https://github.com/huggingface/transformers/pull/7784.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7784.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7783
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7783/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7783/comments
https://api.github.com/repos/huggingface/transformers/issues/7783/events
https://github.com/huggingface/transformers/issues/7783
721,556,200
MDU6SXNzdWU3MjE1NTYyMDA=
7,783
Unable to serialize/save TF2.3.1 RobertaSequenceClassification model to saved model format
{ "login": "felipeTypeform", "id": 22076205, "node_id": "MDQ6VXNlcjIyMDc2MjA1", "avatar_url": "https://avatars.githubusercontent.com/u/22076205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felipeTypeform", "html_url": "https://github.com/felipeTypeform", "followers_url": "https://api.github.com/users/felipeTypeform/followers", "following_url": "https://api.github.com/users/felipeTypeform/following{/other_user}", "gists_url": "https://api.github.com/users/felipeTypeform/gists{/gist_id}", "starred_url": "https://api.github.com/users/felipeTypeform/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felipeTypeform/subscriptions", "organizations_url": "https://api.github.com/users/felipeTypeform/orgs", "repos_url": "https://api.github.com/users/felipeTypeform/repos", "events_url": "https://api.github.com/users/felipeTypeform/events{/privacy}", "received_events_url": "https://api.github.com/users/felipeTypeform/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-3.13.0-158-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @jplu ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ### Steps to reproduce the behavior: from transformers import RobertaTokenizer, TFRobertaForSequenceClassification import tensorflow as tf import wget import os, sys local_path = os.path.abspath(os.path.join(__file__, "..", "resources/")) tokenizer = RobertaTokenizer.from_pretrained("roberta-large-mnli") model = TFRobertaForSequenceClassification.from_pretrained("roberta-large-mnli") tf.keras.models.save_model(model, local_path, overwrite=True, include_optimizer=False, save_format='tf') ### Error `WARNING:tensorflow:From /opt/conda/lib/python3.8/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically. WARNING:tensorflow:From /opt/conda/lib/python3.8/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version. Instructions for updating: This property should not be used in TensorFlow 2.0, as updates are applied automatically. TypeError Traceback (most recent call last) <ipython-input-4-1a9d4ccbf378> in <module> 8 model = TFRobertaForSequenceClassification.from_pretrained("roberta-large-mnli") 9 ---> 10 tf.keras.models.save_model(model, local_path, overwrite=True, include_optimizer=False, save_format='tf') /opt/conda/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options) 131 model, filepath, overwrite, include_optimizer) 132 else: --> 133 saved_model_save.save(model, filepath, overwrite, include_optimizer, 134 signatures, options) 135 /opt/conda/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options) 78 # we use the default replica context here. 
79 with distribution_strategy_context._get_default_replica_context(): # pylint: disable=protected-access ---> 80 save_lib.save(model, filepath, signatures, options) 81 82 if not include_optimizer: /opt/conda/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options) 973 meta_graph_def = saved_model.meta_graphs.add() 974 --> 975 _, exported_graph, object_saver, asset_info = _build_meta_graph( 976 obj, export_dir, signatures, options, meta_graph_def) 977 saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION /opt/conda/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, export_dir, signatures, options, meta_graph_def) 1073 function_aliases[fdef.name] = alias 1074 -> 1075 object_graph_proto = _serialize_object_graph(saveable_view, 1076 asset_info.asset_index) 1077 meta_graph_def.object_graph_def.CopyFrom(object_graph_proto) /opt/conda/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _serialize_object_graph(saveable_view, asset_file_def_index) 718 719 for obj, obj_proto in zip(saveable_view.nodes, proto.nodes): --> 720 _write_object_proto(obj, obj_proto, asset_file_def_index, 721 saveable_view.function_name_map) 722 return proto /opt/conda/lib/python3.8/site-packages/tensorflow/python/saved_model/save.py in _write_object_proto(obj, proto, asset_file_def_index, function_name_map) 759 version=versions_pb2.VersionDef( 760 producer=1, min_consumer=1, bad_consumers=[]), --> 761 metadata=obj._tracking_metadata) 762 # pylint:enable=protected-access 763 proto.user_object.CopyFrom(registered_type_proto) /opt/conda/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in _tracking_metadata(self) 3009 @property 3010 def _tracking_metadata(self): -> 3011 return self._trackable_saved_model_saver.tracking_metadata 3012 3013 def _list_extra_dependencies_for_serialization(self, serialization_cache): /opt/conda/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py in tracking_metadata(self) 52 # TODO(kathywu): check that serialized JSON can be loaded (e.g., if an 53 # object is in the python property) ---> 54 return json_utils.Encoder().encode(self.python_properties) 55 56 def list_extra_dependencies_for_serialization(self, serialization_cache): /opt/conda/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in encode(self, obj) 42 43 def encode(self, obj): ---> 44 return super(Encoder, self).encode(_encode_tuple(obj)) 45 46 /opt/conda/lib/python3.8/json/encoder.py in encode(self, o) 197 # exceptions aren't as detailed. The list call should be roughly 198 # equivalent to the PySequence_Fast that ''.join() would do. 
--> 199 chunks = self.iterencode(o, _one_shot=True) 200 if not isinstance(chunks, (list, tuple)): 201 chunks = list(chunks) /opt/conda/lib/python3.8/json/encoder.py in iterencode(self, o, _one_shot) 255 self.key_separator, self.item_separator, self.sort_keys, 256 self.skipkeys, _one_shot) --> 257 return _iterencode(o, 0) 258 259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr, /opt/conda/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/json_utils.py in default(self, obj) 39 items = obj.as_list() if obj.rank is not None else None 40 return {'class_name': 'TensorShape', 'items': items} ---> 41 return serialization.get_json_type(obj) 42 43 def encode(self, obj): /opt/conda/lib/python3.8/site-packages/tensorflow/python/util/serialization.py in get_json_type(obj) 70 return obj.__wrapped__ 71 ---> 72 raise TypeError('Not JSON Serializable:', obj) TypeError: ('Not JSON Serializable:', RobertaConfig { "_num_labels": 3, "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "initializer_range": 0.02, "intermediate_size": 4096, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 16, "num_hidden_layers": 24, "pad_token_id": 1, "type_vocab_size": 1, "vocab_size": 50265 } ) ` ## Expected behavior Save the model correctly as a tf.keras model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7783/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7782
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7782/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7782/comments
https://api.github.com/repos/huggingface/transformers/issues/7782/events
https://github.com/huggingface/transformers/issues/7782
721,555,588
MDU6SXNzdWU3MjE1NTU1ODg=
7,782
RAG finetuning - unexpected keyword argument 'early_stop_callback'
{ "login": "ioannist", "id": 6544125, "node_id": "MDQ6VXNlcjY1NDQxMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6544125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ioannist", "html_url": "https://github.com/ioannist", "followers_url": "https://api.github.com/users/ioannist/followers", "following_url": "https://api.github.com/users/ioannist/following{/other_user}", "gists_url": "https://api.github.com/users/ioannist/gists{/gist_id}", "starred_url": "https://api.github.com/users/ioannist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioannist/subscriptions", "organizations_url": "https://api.github.com/users/ioannist/orgs", "repos_url": "https://api.github.com/users/ioannist/repos", "events_url": "https://api.github.com/users/ioannist/events{/privacy}", "received_events_url": "https://api.github.com/users/ioannist/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "If you don't need early stopping just comment out `early_stop_callback=early_stopping_callback` on line 379 of `/home/ioannis/Desktop/transformers/examples/lightning_base.py`. You should be able to run your script.\r\n\r\nI think lightning may have changed their api\r\nYou can also just uninstall your pytorch lightning and do `pip install pytorch_lightning==0.9.0` and script should work", "Awesome! Installing 0.9.0 worked.\r\n\r\nI manually installed pytorch_lightning and gitpython as they were not included in the transformers installation and the rag requirements file." ]
1,602
1,602
1,602
NONE
null
## Environment info transformers version: 3.3.1 Platform: Ubuntu Python version:3.6.12 PyTorch version (GPU: yes): 1.6.0 Using GPU in script?: 1 gpu Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten @sgugger ## Information model name: facebook/rag-token-base The problem arises when using: * [x ] the official example scripts: (give details below) The tasks I am working on is: * [x ] my own task or dataset: (give details below) ## To reproduce Call finetune ona rag model `python examples/rag/finetune.py --data_dir=$(pwd)/examples/rag/ioannis-data --output_dir $(pwd)/examples/rag/ioannis-output --model_name_or_path=facebook/rag-token-base --model_type rag_sequence --fp16 --gpus 1` ``` Traceback (most recent call last): File "examples/rag/finetune.py", line 469, in <module> main(args) File "examples/rag/finetune.py", line 442, in main logger=logger, File "/home/ioannis/Desktop/transformers/examples/lightning_base.py", line 379, in generic_train **train_params, File "/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/pytorch_lightning/trainer/properties.py", line 122, in from_argparse_args return argparse_utils.from_argparse_args(cls, args, **kwargs) File "/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/pytorch_lightning/utilities/argparse_utils.py", line 50, in from_argparse_args return cls(**trainer_kwargs) File "/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/env_vars_connector.py", line 41, in overwrite_by_env_vars return fn(self, **kwargs) TypeError: __init__() got an unexpected keyword argument 'early_stop_callback' ``` I noticed a variable named early_stopping_callback in finetune.py. A typo perhaps?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7782/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7781
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7781/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7781/comments
https://api.github.com/repos/huggingface/transformers/issues/7781/events
https://github.com/huggingface/transformers/issues/7781
721,532,315
MDU6SXNzdWU3MjE1MzIzMTU=
7,781
`decoder_config` variable not defined in EncoderDecoderModel.from_encoder_decoder_pretrained
{ "login": "jsilter", "id": 603941, "node_id": "MDQ6VXNlcjYwMzk0MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/603941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jsilter", "html_url": "https://github.com/jsilter", "followers_url": "https://api.github.com/users/jsilter/followers", "following_url": "https://api.github.com/users/jsilter/following{/other_user}", "gists_url": "https://api.github.com/users/jsilter/gists{/gist_id}", "starred_url": "https://api.github.com/users/jsilter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsilter/subscriptions", "organizations_url": "https://api.github.com/users/jsilter/orgs", "repos_url": "https://api.github.com/users/jsilter/repos", "events_url": "https://api.github.com/users/jsilter/events{/privacy}", "received_events_url": "https://api.github.com/users/jsilter/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@jsilter - great catch! I agree 100% with your suggestion! Do you want to open a PR to fix it? :-) ", "https://github.com/huggingface/transformers/pull/7903 this is already fixed yesterday @jsilter " ]
1,602
1,603
1,603
NONE
null
https://github.com/huggingface/transformers/blob/890e790e16084e58a1ecb9329c98ec3e76c45994/src/transformers/modeling_encoder_decoder.py#L330 Using this function results in an error: `UnboundLocalError: local variable 'decoder_config' referenced before assignment` Suggest changing `decoder_config.add_cross_attention` to `kwargs_decoder["config"].add_cross_attention`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7781/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7780
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7780/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7780/comments
https://api.github.com/repos/huggingface/transformers/issues/7780/events
https://github.com/huggingface/transformers/issues/7780
721,490,197
MDU6SXNzdWU3MjE0OTAxOTc=
7,780
How can I tweak the `Longformer` code to control the input of a `Longformer`'s layer?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The answer you got on the forum is pretty much the only one we have: copy paste the model code and customize it to your needs. If you're not a programmer, then you will need to learn a bit of Python/PyTorch to do this, but apart from making sure each model file contains the full code of each model, there is little more we can do to help you. There is an API to use the models in the common cases, users are expected to make the customizations they want if that does not suit their needs.", "Hello, thank you for your reply.\r\nSorry if my question sounds a bit silly, but I just have a question:\r\n\r\nWhen I do `my_Longformer_model.encoder.layer[layer_index].forward(hidden_output,my_attention_mask,output_attention=False)`, this error is generated:\r\n```python\r\n\r\n File \"/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py\", line 384, in _sliding_chunks_query_key_matmul\r\n batch_size, seq_len, num_heads, head_dim = query.size()\r\n\r\nValueError: too many values to unpack (expected 4)\r\n```\r\n\r\nHere, `my_attention_mask` is the same attention mask that I would specify under the regular\r\n```python\r\nlongformer_output= my_longformer_model(input_ids=input_ids,....,attention_mask=my_attention_mask)\r\n```\r\n\r\nwhy exactly is the above error generated, and how can I remedy it?\r\n\r\n\r\nThank you," ]
1,602
1,602
1,602
NONE
null
Hello, I have asked a similar question on the HuggingFace forum, but I didn't get the clear answer I was hoping for. I tried the following to control the input of a `Longformer` layer: `best_model_longformer.longformer.encoder.layer[layer_index](my_input_hidden_vector)` which, of course, does not work. How can I tweak the `Longformer` (or `BERT`) code to control the input of a `Longformer`'s layer? Would this do the trick?: ```Python self_attention_outputs = Longformer_model.LongformerLayer.forward(hidden_states, attention_mask=None, output_attentions=False) Longformer_model.LongformerLayer.ff_chunk(self, self_attention_outputs) ``` But in this case, I don't know how to properly access the `LongformerLayer` class. I don't know if `Longformer_model.LongformerLayer` is the proper way. I am not a programmer and I really do need more help on this. I know this can be a bit cumbersome to answer, but could you please help me on this? Your help is much appreciated. Thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7780/timeline
completed
null
null
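One low-friction way to feed a custom tensor into a single Longformer layer, without re-deriving the internal attention-mask format that trips up the direct call discussed above, is a plain PyTorch forward pre-hook. The sketch below is illustrative rather than a documented transformers API; `model`, `layer_index`, `my_hidden_states`, `input_ids` and `my_attention_mask` stand in for the objects named in the question.

```python
# Minimal sketch: swap the hidden states given to one encoder layer via a
# forward pre-hook, and let LongformerModel handle the attention-mask
# bookkeeping around it.
def replace_hidden_states(custom_hidden):
    def hook(module, inputs):
        # `inputs` is the tuple of positional arguments the layer receives;
        # only the first entry (hidden_states) is replaced
        return (custom_hidden,) + tuple(inputs[1:])
    return hook

# the custom tensor must match the usual (batch, seq_len, hidden_size) shape
layer = model.longformer.encoder.layer[layer_index]
handle = layer.register_forward_pre_hook(replace_hidden_states(my_hidden_states))
outputs = model(input_ids=input_ids, attention_mask=my_attention_mask)
handle.remove()  # detach the hook after the customized forward pass
```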
https://api.github.com/repos/huggingface/transformers/issues/7779
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7779/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7779/comments
https://api.github.com/repos/huggingface/transformers/issues/7779/events
https://github.com/huggingface/transformers/issues/7779
721,479,074
MDU6SXNzdWU3MjE0NzkwNzQ=
7,779
I'm getting "nan" value for loss, while following a tutorial from the documentation
{ "login": "sunnyville01", "id": 33743210, "node_id": "MDQ6VXNlcjMzNzQzMjEw", "avatar_url": "https://avatars.githubusercontent.com/u/33743210?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sunnyville01", "html_url": "https://github.com/sunnyville01", "followers_url": "https://api.github.com/users/sunnyville01/followers", "following_url": "https://api.github.com/users/sunnyville01/following{/other_user}", "gists_url": "https://api.github.com/users/sunnyville01/gists{/gist_id}", "starred_url": "https://api.github.com/users/sunnyville01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sunnyville01/subscriptions", "organizations_url": "https://api.github.com/users/sunnyville01/orgs", "repos_url": "https://api.github.com/users/sunnyville01/repos", "events_url": "https://api.github.com/users/sunnyville01/events{/privacy}", "received_events_url": "https://api.github.com/users/sunnyville01/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@sunnyville01 \r\nhi, facing the same issue, did you manage to solve this?" ]
1,602
1,608
1,608
CONTRIBUTOR
null
# ❓ Questions & Help ## Details Hi, I’m following the “Fine-tuning with Custom Dataset” tutorial for Question Answering on the SQuaD dataset tutorial: [available here](https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0). I’ve copy-pasted all the code shown in the tutorial step by step. However, when my model starts training, I don’t get the expected metric values for loss as I normally would, instead I get “nan”. Here is the code for training `model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)` Here is the output with the "nan" values for the losses. ``` Epoch 1/3 5427/5427 [==============================] - 4604s 848ms/step - loss: nan - output_1_loss: nan - output_2_loss: nan Epoch 2/3 365/5427 [=>…] - ETA: 1:11:28 - loss: nan - output_1_loss: nan - output_2_loss: nan ``` I don’t know what is wrong, and I don’t think this output is what is supposed to be. Would appreciate any help with this regard. Thank you. **A link to original question on the forum/Stack Overflow**: https://discuss.huggingface.co/t/im-getting-nan-value-for-loss-while-following-a-tutorial-from-the-documentatin/1530
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7779/timeline
completed
null
null
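The thread above went stale without a fix, so the following is only a hedged guess at the most commonly reported cause for this particular tutorial: answers that get truncated away have their start/end positions set to `tokenizer.model_max_length`, which falls outside the range of the start/end logits and turns the sparse cross-entropy loss into `nan`. The sketch assumes `start_positions` and `end_positions` are the Python lists built during preprocessing; if that is not the culprit, a lower learning rate is the other usual suspect.

```python
# Hedged sketch: clamp label positions that point past the model's maximum
# sequence length so the loss never receives an out-of-range class index.
max_len = tokenizer.model_max_length  # 512 for the tutorial's DistilBERT

for i in range(len(start_positions)):
    if start_positions[i] is None or start_positions[i] >= max_len:
        start_positions[i] = max_len - 1
    if end_positions[i] is None or end_positions[i] >= max_len:
        end_positions[i] = max_len - 1
```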
https://api.github.com/repos/huggingface/transformers/issues/7778
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7778/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7778/comments
https://api.github.com/repos/huggingface/transformers/issues/7778/events
https://github.com/huggingface/transformers/pull/7778
721,396,322
MDExOlB1bGxSZXF1ZXN0NTAzMzIxNjk4
7,778
multi task roberta
{ "login": "oriyor", "id": 39461788, "node_id": "MDQ6VXNlcjM5NDYxNzg4", "avatar_url": "https://avatars.githubusercontent.com/u/39461788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oriyor", "html_url": "https://github.com/oriyor", "followers_url": "https://api.github.com/users/oriyor/followers", "following_url": "https://api.github.com/users/oriyor/following{/other_user}", "gists_url": "https://api.github.com/users/oriyor/gists{/gist_id}", "starred_url": "https://api.github.com/users/oriyor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oriyor/subscriptions", "organizations_url": "https://api.github.com/users/oriyor/orgs", "repos_url": "https://api.github.com/users/oriyor/repos", "events_url": "https://api.github.com/users/oriyor/events{/privacy}", "received_events_url": "https://api.github.com/users/oriyor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
NONE
null
add a RobertaForMultiTask model. the model has both RobertaLMHead and RobertaClassificationHead. when doing the forward pass, we provide an extra 'task' field. when doing the 'mlm' task, we use RobertaLMHead similarly to RobertaForMaskedLM. when doing the 'classification' task we use the RobertaClassificationHead, similarly to RobertaForSequenceClassification. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7778/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7778", "html_url": "https://github.com/huggingface/transformers/pull/7778", "diff_url": "https://github.com/huggingface/transformers/pull/7778.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7778.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7777
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7777/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7777/comments
https://api.github.com/repos/huggingface/transformers/issues/7777/events
https://github.com/huggingface/transformers/issues/7777
721,299,317
MDU6SXNzdWU3MjEyOTkzMTc=
7,777
Adding RAG to text-generation pipeline
{ "login": "lalitpagaria", "id": 19303690, "node_id": "MDQ6VXNlcjE5MzAzNjkw", "avatar_url": "https://avatars.githubusercontent.com/u/19303690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lalitpagaria", "html_url": "https://github.com/lalitpagaria", "followers_url": "https://api.github.com/users/lalitpagaria/followers", "following_url": "https://api.github.com/users/lalitpagaria/following{/other_user}", "gists_url": "https://api.github.com/users/lalitpagaria/gists{/gist_id}", "starred_url": "https://api.github.com/users/lalitpagaria/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lalitpagaria/subscriptions", "organizations_url": "https://api.github.com/users/lalitpagaria/orgs", "repos_url": "https://api.github.com/users/lalitpagaria/repos", "events_url": "https://api.github.com/users/lalitpagaria/events{/privacy}", "received_events_url": "https://api.github.com/users/lalitpagaria/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Hey @lalitpagaria - RAG is quite different to other generation models so we don't have it on the short-term roadmap to add it to pipelines. We are still thinking about how to integrate retrieval augmented models to the pipelines.", "Thanks @patrickvonplaten \r\nYeah I totally agree with you.\r\nPlease let me know whether I close this issue or keep it open for future reference.", "Leave it open - I'll put it under projects so that I don't forget it :-) \r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@patrickvonplaten Great work! I noticed that `transformers` included the implementation for `DPR`. But for `RAG`, I only find a [demo](https://huggingface.co/rag/). Is there a source code for `RAG`? Or do you know where is Facebook's source code for `RAG`? ", "transformers does include RAG.\r\nYou can even find the documentation here: https://huggingface.co/transformers/model_doc/rag.html" ]
1,602
1,614
1,608
CONTRIBUTOR
null
# 🚀 Feature request Thank you for the awesome work. I am working on https://github.com/deepset-ai/haystack/issues/443 and just wanted to check whether there is any plan to add RAG to the `text-generation` pipeline. ## Motivation `text-generation` already supports other models, so it would be great to have RAG in there as well. This would also help keep our code clean by not adding classes for each type of generator. ``` model = pipeline('text-generation', model="facebook/rag-token-nq", tokenizer=None, device=-1) # ValueError: Unrecognized configuration class <class 'transformers.configuration_rag.RagConfig'> for this kind of AutoModel: AutoModelForCausalLM. Model type should be one of CamembertConfig, XLMRobertaConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, XLMConfig, CTRLConfig, ReformerConfig, BertGenerationConfig. ``` ## Your contribution If you guide me, I am happy to help.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7777/timeline
completed
null
null
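Until retrieval-augmented models are wired into `pipeline`, RAG can be driven directly, as the last comment above points out. The snippet follows the RAG model documentation from that release; the dummy index keeps the download small, and `prepare_seq2seq_batch` was the documented entry point at the time (newer releases let you call the tokenizer directly).

```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained(
    "facebook/rag-sequence-nq", retriever=retriever
)

# encode the question, retrieve supporting passages, and generate an answer
inputs = tokenizer.prepare_seq2seq_batch(
    "who holds the record in 100m freestyle", return_tensors="pt"
)
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```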
https://api.github.com/repos/huggingface/transformers/issues/7776
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7776/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7776/comments
https://api.github.com/repos/huggingface/transformers/issues/7776/events
https://github.com/huggingface/transformers/pull/7776
721,292,991
MDExOlB1bGxSZXF1ZXN0NTAzMjM1MDQ5
7,776
Fix bert position ids in DPR convert script
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
MEMBER
null
https://github.com/huggingface/transformers/commit/614fef1691edb806de976756d4948ecbcd0c0ca3 introduced buffers for position ids in BERT, which breaks the DPR convert script since the DPR weights don't have those. To fix that, I followed @LysandreJik's suggestion to manually add the position ids to the state dict before loading it into the model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7776/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7776", "html_url": "https://github.com/huggingface/transformers/pull/7776", "diff_url": "https://github.com/huggingface/transformers/pull/7776.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7776.patch", "merged_at": 1602667803000 }
https://api.github.com/repos/huggingface/transformers/issues/7775
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7775/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7775/comments
https://api.github.com/repos/huggingface/transformers/issues/7775/events
https://github.com/huggingface/transformers/pull/7775
721,243,018
MDExOlB1bGxSZXF1ZXN0NTAzMTk0NTUx
7,775
Create README.md
{ "login": "XiaoqiJiao", "id": 24711193, "node_id": "MDQ6VXNlcjI0NzExMTkz", "avatar_url": "https://avatars.githubusercontent.com/u/24711193?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XiaoqiJiao", "html_url": "https://github.com/XiaoqiJiao", "followers_url": "https://api.github.com/users/XiaoqiJiao/followers", "following_url": "https://api.github.com/users/XiaoqiJiao/following{/other_user}", "gists_url": "https://api.github.com/users/XiaoqiJiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/XiaoqiJiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XiaoqiJiao/subscriptions", "organizations_url": "https://api.github.com/users/XiaoqiJiao/orgs", "repos_url": "https://api.github.com/users/XiaoqiJiao/repos", "events_url": "https://api.github.com/users/XiaoqiJiao/events{/privacy}", "received_events_url": "https://api.github.com/users/XiaoqiJiao/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "This is great, thanks for uploading and sharing.\r\n\r\nWill merge this now, but had two questions:\r\n- should we add a link back to your repo at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT\r\n- Is this checkpoint version 1 or version 2 from https://github.com/huawei-noah/Pretrained-Language-Model/blame/master/TinyBERT/README.md#L62-L72\r\n\r\nThank you!" ]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7775/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7775", "html_url": "https://github.com/huggingface/transformers/pull/7775", "diff_url": "https://github.com/huggingface/transformers/pull/7775.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7775.patch", "merged_at": 1602682262000 }
https://api.github.com/repos/huggingface/transformers/issues/7774
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7774/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7774/comments
https://api.github.com/repos/huggingface/transformers/issues/7774/events
https://github.com/huggingface/transformers/issues/7774
721,206,335
MDU6SXNzdWU3MjEyMDYzMzU=
7,774
XLM-RoBERTa model for QA seems not properly work
{ "login": "antoniolanza1996", "id": 40452030, "node_id": "MDQ6VXNlcjQwNDUyMDMw", "avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antoniolanza1996", "html_url": "https://github.com/antoniolanza1996", "followers_url": "https://api.github.com/users/antoniolanza1996/followers", "following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}", "gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions", "organizations_url": "https://api.github.com/users/antoniolanza1996/orgs", "repos_url": "https://api.github.com/users/antoniolanza1996/repos", "events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}", "received_events_url": "https://api.github.com/users/antoniolanza1996/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Thanks for reporting, will investigate.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "hello, this command \" --model_type xlm-roberta \\\" not work for me,\r\ncan you help me? please" ]
1,602
1,643
1,608
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (and I've also tried installing Transformers from `master`, see details below) - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help albert, bert, GPT2, XLM: @LysandreJik ## Information Model I am using (Bert, XLNet ...): [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) The problem arises when using: * [x] the official example scripts: [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: **SQuAD 2.0 dev set evaluation** * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` ! wget https://raw.githubusercontent.com/rajpurkar/SQuAD-explorer/master/dataset/dev-v2.0.json ! python transformers/examples/question-answering/run_squad.py \ --model_type xlm-roberta \ --model_name_or_path 'deepset/xlm-roberta-large-squad2' \ --do_eval \ --do_lower_case \ --predict_file 'dev-v2.0.json' \ --output_dir 'output' \ --overwrite_output_dir \ --version_2_with_negative ``` ## Expected behavior There are some values mismatch between: 1. values reported in the model card [here](https://huggingface.co/deepset/xlm-roberta-large-squad2#performance) 2. values obtained when Transformers is installed using `pip install transformers` 3. values obtained when Transformers is installed from master In particular: - Reported metrics in the model card: `"exact": 79.45759285774446, "f1": 83.79259828925511` - Transformers installed from pip: `'exact': 64.67615598416576, 'f1': 77.27580544355429` - Transformers installed from master: `'exact': 60.11959090373114, 'f1': 76.13129575803934`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7774/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7774/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7773
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7773/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7773/comments
https://api.github.com/repos/huggingface/transformers/issues/7773/events
https://github.com/huggingface/transformers/issues/7773
721,200,204
MDU6SXNzdWU3MjEyMDAyMDQ=
7,773
Error in run_ner.py - ModuleNotFoundError: No module named 'tasks'
{ "login": "danaludwig", "id": 6911685, "node_id": "MDQ6VXNlcjY5MTE2ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/6911685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danaludwig", "html_url": "https://github.com/danaludwig", "followers_url": "https://api.github.com/users/danaludwig/followers", "following_url": "https://api.github.com/users/danaludwig/following{/other_user}", "gists_url": "https://api.github.com/users/danaludwig/gists{/gist_id}", "starred_url": "https://api.github.com/users/danaludwig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danaludwig/subscriptions", "organizations_url": "https://api.github.com/users/danaludwig/orgs", "repos_url": "https://api.github.com/users/danaludwig/repos", "events_url": "https://api.github.com/users/danaludwig/events{/privacy}", "received_events_url": "https://api.github.com/users/danaludwig/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Update - this is not a bug in run_ner.py, but sort-of a documentation bug. The page that describes how to do NER does not document that you first need to copy \"tasks.py\" and other scripts, into your local current directory.\r\n\r\nhttps://github.com/huggingface/transformers/tree/master/examples/token-classification/README.md\r\n\r\nFor instance, you could provide a list of \"wget\" commands. This may seem obvious to experienced developers, but it really helps to spell it out for the occasional clueless people like me :-)", "Hello! In order to run the script, you should generally clone the repository and run the scripts from there. Running a single script will very rarely work. You'd be way safer doing the following (showing you the full setup + venv setup):\r\n\r\n```py\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\n\r\npython -m venv .env\r\nsource .env/bin/activate\r\npip install -e .\r\n\r\ncd examples\r\npip install -r requirements.txt\r\n\r\n# You can run your scripts now :)\r\n```", "Thank you; for future users, that would be super helpful to add that to the README.\r\n\r\nAlso, I think you need to cd to https://github.com/huggingface/transformers/tree/master/examples because the scripts assume that tasks.py etc are in the current directory. But the “activate” may take care of that.\r\n\r\nThanks for everything – great software!\r\n\r\nDana\r\n\r\nFrom: Lysandre Debut <[email protected]>\r\nSent: Thursday, October 15, 2020 2:00 AM\r\nTo: huggingface/transformers <[email protected]>\r\nCc: Ludwig, Dana <[email protected]>; Author <[email protected]>\r\nSubject: Re: [huggingface/transformers] Error in run_ner.py - ModuleNotFoundError: No module named 'tasks' (#7773)\r\n\r\n\r\nHello! In order to run the script, you should generally clone the repository and run the scripts from there. Running a single script will very rarely work. You'd be way safer doing the following (showing you the full setup + venv setup):\r\n\r\ngit clone https://github.com/huggingface/transformers\r\n\r\ncd transformers\r\n\r\n\r\n\r\npython -m venv .env\r\n\r\nsource .env/bin/activate\r\n\r\npip install -e .\r\n\r\n\r\n\r\ncd examples\r\n\r\npip install -r requirements.txt\r\n\r\n\r\n\r\n# You can run your scripts now :)\r\n\r\n—\r\nYou are receiving this because you authored the thread.\r\nReply to this email directly, view it on GitHub<https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_huggingface_transformers_issues_7773-23issuecomment-2D709013098&d=DwMCaQ&c=iORugZls2LlYyCAZRB3XLg&r=A2YbHreGE4p0vzAywzM_Uctk-D3fPuXcmLPnjKJ7Gqc&m=QMIf0fCBI0DJve9mdTNWpHYcZNw6G6lbgNMtwt27-LI&s=_PAC-teAXIZpD6bvXaxJJooVSpQXCW-g1E0P6Xx0nzE&e=>, or unsubscribe<https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_notifications_unsubscribe-2Dauth_ABUXNRIKGTLERRFCOBWSGRTSK22XTANCNFSM4SQGNKXQ&d=DwMCaQ&c=iORugZls2LlYyCAZRB3XLg&r=A2YbHreGE4p0vzAywzM_Uctk-D3fPuXcmLPnjKJ7Gqc&m=QMIf0fCBI0DJve9mdTNWpHYcZNw6G6lbgNMtwt27-LI&s=JROY2gIxImDPq_udIaCyPb85pHW64zCpy7-aa4ggSQY&e=>.\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
## Environment info - Google colab notebook. - `transformers` version: 3.3.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @stefan-it ## Information Model I am using is NER model Script: run_ner.py: https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py The problem arises when using: * [X ] the official example scripts: (give details below) The tasks I am working on is: * building the NER model from model "bert-base-multilingual-cased" using GermEval data with instructions here: https://huggingface.co/transformers/v2.4.0/examples.html#named-entity-recognition ## To reproduce Steps to reproduce the behavior: ``` # Eliminate --model_type; it creates an error and said it isn't used. # sys.path.append(os.getcwd()) ! python3 run_ner.py --data_dir ./ \ --labels ./labels.txt \ --model_name_or_path $BERT_MODEL \ --output_dir $OUTPUT_DIR \ --max_seq_length $MAX_LENGTH \ --num_train_epochs $NUM_EPOCHS \ --per_gpu_train_batch_size $BATCH_SIZE \ --save_steps $SAVE_STEPS \ --seed $SEED \ --do_train \ --do_eval \ --do_predict Error output: /usr/local/lib/python3.6/dist-packages/transformers/training_args.py:332: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, Traceback (most recent call last): File "run_ner.py", line 308, in <module> main() File "run_ner.py", line 118, in main module = import_module("tasks") File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked ModuleNotFoundError: No module named 'tasks' ``` ## Expected behavior According to the example page, the model should get fine-tuned and tested on the GermEval test dataset.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7773/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7772
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7772/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7772/comments
https://api.github.com/repos/huggingface/transformers/issues/7772/events
https://github.com/huggingface/transformers/pull/7772
721,186,391
MDExOlB1bGxSZXF1ZXN0NTAzMTQ4NDc5
7,772
Added gpt2 model parallelism
{ "login": "alexorona", "id": 11825654, "node_id": "MDQ6VXNlcjExODI1NjU0", "avatar_url": "https://avatars.githubusercontent.com/u/11825654?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexorona", "html_url": "https://github.com/alexorona", "followers_url": "https://api.github.com/users/alexorona/followers", "following_url": "https://api.github.com/users/alexorona/following{/other_user}", "gists_url": "https://api.github.com/users/alexorona/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexorona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexorona/subscriptions", "organizations_url": "https://api.github.com/users/alexorona/orgs", "repos_url": "https://api.github.com/users/alexorona/repos", "events_url": "https://api.github.com/users/alexorona/events{/privacy}", "received_events_url": "https://api.github.com/users/alexorona/received_events", "type": "User", "site_admin": false }
[ { "id": 2627272588, "node_id": "MDU6TGFiZWwyNjI3MjcyNTg4", "url": "https://api.github.com/repos/huggingface/transformers/labels/Model%20Parallel", "name": "Model Parallel", "color": "8B66A5", "default": false, "description": "Model Parallelilsm Implementations" } ]
closed
false
null
[]
[ "Very cool implementation that would interest a few of our team, pinging them here :)", "Indeed, adding a test would be nice! Will trigger the multi-gpu slow tests once the test is added.", "@LysandreJik @patrickvonplaten @sgugger Glad you like it! Yes, the basic form of the implementation works on T5. I've been working on that too. Testing solutions to the items you brought up locally and should make the improvements to the PR in the next day or two.", "Hi @alexorona! Before we merge this, having a test would really be necessary. Have you had any luck in implementing such a test?\r\n\r\nIf you're lacking time, we can also take over from here.", "Hi, I've successfully been able to start training gpt2-xl on multiple gpu's using the model parallelism code from this pull request, but I'm running into an issue when restarting the training process from a checkpoint model. It seems that in this case when reloading the model, only the memory of the first GPU keeps increasing until it reaches an Out Of Memory error, instead of spreading over all GPUs like it did when training from scratch. Is it possible that reloading from checkpoint triggers a different code path that has been overlooked until now?\r\n\r\n(edit)\r\nI seem to have found an issue and potential fix. Line 609 in Trainer.train() loads the optimizer saved in the checkpoint. This function has a _map_location_ parameter which seems to force the optimizer to load fully onto _self.args.device_ which I'm guessing would be the first GPU. \r\n`self.optimizer.load_state_dict(torch.load(os.path.join(model_path, \"optimizer.pt\"), map_location=self.args.device))`\r\nRemoving the _map_location_ parameter makes the function properly put back all the loaded parameters onto the correct devices. Maybe there is a better way of handling this but at least it is an indication.", "@LysandreJik Just haven't had the time to do that and it might be awhile before I can get around to writing tests. However, I have been able to implement the same thing on T5 and it's working. Fine-tuned a 3B model this weekend. Maybe I can add that to this branch tonight and the team can handle the tests? \r\n\r\n@MichielRuelens Did you try loading to CPU and then calling model.parallelize() ? ", "@LysandreJik I worked on t5 last weekend. Added that. Happy to explain things like why `get_device_map `should probably take a list of devices rather than a number. Can you help get this across the finish line?", "Very cool, thanks! Will take a look at it and open a PR on your branch.", "Great, with the tests PR merged this seems close to merge! Could you rebase & run the code quality tools (`make fixup` or `make style && make quality`) so that the test suite passes?\r\n\r\nAlso @patrickvonplaten and could you check the tests?", "Awesome, LGTM!", "oof that's a tough rebase. I don't think we'll be able to merge that. Could you close the PR and open a new one, so that we can see the diff? I don't think you need to do anything on the branch, just close the PR and open a new one.", "I'm researching doing the same for BART (https://github.com/huggingface/transformers/issues/8344) and stumbled upon the open source library DeepSpeed/ZeRO:\r\nhttps://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/\r\n\r\nI'm yet to experiment with deepspeed and fairscale, but I thought I'd ask whether you have already done this and decided that implementing from scratch is better. 
\r\n\r\nWhat are the pros/cons for using an in-house implementation as compared to using an external library, other than the obvious potential bugs and issues of using external libs and not having control over those?\r\n\r\nIf this has been discussed already please kindly send me to that issue/page? Thank you!\r\n\r\nWhatever the outcome, we can also qualititatively compare the results of this PR once it's done with doing the same via deepspeed and/or fairscale.\r\n", "Rebased with transformers 4.0.0 and moved PR to [here](https://github.com/huggingface/transformers/pull/8696).\r\n\r\n> I'm researching doing the same for BART (#8344) and stumbled upon the open source library DeepSpeed/ZeRO:\r\n> https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/\r\n\r\nGreat point, @stas00 ! From the description, DeepSpeed is an optimization on top of data parallelism and model parallelism. I think you've identified the next step! My reading of DeepSpeed this that one still has to implement data parallelism or model parallelism, but DeepSpeed will reduce the GPU memory footprint. Practically speaking, this could open the door to even larger models. During the development of the [final model parallel PR](https://github.com/huggingface/transformers/pull/8696), I ran into hard limits on AWS GPU instance with t5-11b and t5-11b in terms of the number of tokens you can train. You'll need the largest AWS instances to train 512 tokens on t5-3b. For t5-11b, you're restricted even more. Note that I haven't tried this with apex, so it might be possible to squeeze out a little more from the current implementation.", "Thanks for the follow up, @alexorona. From what I read deepspeed implements parallelism too, but I'm still in the process of studying - didn't get to play with it yet. I've started with fairscale first.\r\n\r\nWhat about [`fairscale`](https://github.com/facebookresearch/fairscale) then? It implements the model parallelism w/o needing to make any changes to the model. All that code is abstracted into the trainer-level calls. \r\n\r\nNow that you added T5+gpt2 model parallelism `transformers` needs to add a separate code for each model architecture. Why not follow `fairscale`-style and do it outside the model?", "Similarly to what we've done with the `tie_weights` methods which ties weights thanks to the `get_input_embeddings()` and `get_output_embeddings()` methods, we could probably have a model-agnostic way of enabling parallelization using a `get_layers()` method, like [this](https://github.com/huggingface/transformers/blob/gradient-checkpointing-v2/src/transformers/modeling_utils.py#L669-L682) for example. \r\n\r\nThis would allow to have the same code for each model architecture, provided the `get_layers()` utility is correctly implemented.", "Would need to look into how `deepspeed` is implemented. My reading is that it supports parallelism, but still requires you to go through the process of defining how parallelism will work in the model definition (which modules on which devices). \r\n\r\nNot sure on `fairscale`. It may or may not be a simplification. 
There were quirks in implementing model parallelism on gpt2 and t5 that have to do with the pytorch graph:\r\n- The LM head has to be loaded on the same device as the embedding layer\r\n- Tensors (not just layers) have to be shifted to the appropriate device during training\r\n\r\nInitially, I went to `eisen` to see if it could handle model parallelism, but it couldn't deal with transformer models. If `fairscale` can abstract that process, that would be great.", "I hear you, @alexorona, that you're saying that each model may have its special needs. So we need to see if perhaps this can be done via some sort of callbacks and add the callbacks points as we discover new needs.\r\n\r\nAs I was planning to do the same for BART, perhaps I should replicate your implementation for Bart and then based on 3 different models we can then see how we can make this model-agnostic. What do you think?\r\n\r\nPerhaps we should continue this discussion in a dedicated issue where we discuss model_parallelism for all of `transformers`, including considering using or forking existing implemenations.", "I think those are both great ideas, @stas00. Should give us a better understanding of how to move forward with this. Created [this issue](https://github.com/huggingface/transformers/issues/8771) to continue conversation. " ]
1,602
1,609
1,606
CONTRIBUTOR
null
# Model Parallelism for GPT2LMHead Addresses [issue 7526](https://github.com/huggingface/transformers/issues/7526) Adds two new methods to `GPT2LMHead` and the `GPT2Model` classes to enable you to generate and fine-tune models using model parallelism. This feature is most applicable for `gpt2-large` and `gpt2-xl`. Minor modifications are made to the `TrainingArguments` and `Trainer` classes to avoid conflicting data parallelism behavior and related batch_size increases which would negate model parallelism. Note that nearly 64GB of GPU (4 Tesla v100s) are needed to fine-tune `gpt2-xl` @ 1024 tokens. It is critically important to provide users the ability to specify where to put the blocks of a model because the GPU sizes and numbers are likely to be very diverse. This is done with a dictionary called `device_map`. I am planning on providing some examples and guidelines for the p3, p2 and g3 AWS instances. Model parallelism has to be baked into the model class itself. Currently working on the T5 model. From my calculations the 11B model cannot fit on the largest p3 instance that I have access to (8 Tesla v100 GPUs). The 3B model can. The methods are: - `parallelize`, which will distribute the attention blocks of the model across several devices according to a device map - `deparallelize`, which will move the model back to cpu # Example ``` model = GPT2LMHeadModel.from_pretrained('gpt2-xl') device_map = {0: [0, 1, 2, 3, 4, 5, 6, 7, 8], 1: [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21], 2: [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34], 3: [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]} model.parallelize(device_map) # Distributes the model's attention blocks across several devices model.deparallelize() # Puts the model back on cpu and calls torch.cuda.empty_cache() to liberate GPU memory ``` ## Reviewers `TrainingArguments`: @sgugger - Added a new parameter `model_parallel` and attribute `self.model_parallel` to control model parallelism behavior - Slightly modified the `train_batch_size` and `eval_batch_size` calculations to avoid automatically increasing the batch size if `self.model_parallel` (automatically increasing the batch size defeats the purpose of model parallelism because you won't be able to train a larger model if the batch_size increases proportionally to the number of devices) `Trainer`: @sgugger - Minor changes controlled by new `args.model_parallel `attribute `GPT2LMHead`: @patrickvonplaten - Adds parallelize and deparallelize methods - Adds new `self.model_parallel` and `self.device_map` attributes - Changes forward behavior when `self.model_parallel == True` to ensure tensors are on the right device
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7772/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7772", "html_url": "https://github.com/huggingface/transformers/pull/7772", "diff_url": "https://github.com/huggingface/transformers/pull/7772.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7772.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7771
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7771/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7771/comments
https://api.github.com/repos/huggingface/transformers/issues/7771/events
https://github.com/huggingface/transformers/issues/7771
721,128,273
MDU6SXNzdWU3MjExMjgyNzM=
7,771
Cannot trace_module on models using model's generate function
{ "login": "vikigenius", "id": 12724810, "node_id": "MDQ6VXNlcjEyNzI0ODEw", "avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vikigenius", "html_url": "https://github.com/vikigenius", "followers_url": "https://api.github.com/users/vikigenius/followers", "following_url": "https://api.github.com/users/vikigenius/following{/other_user}", "gists_url": "https://api.github.com/users/vikigenius/gists{/gist_id}", "starred_url": "https://api.github.com/users/vikigenius/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikigenius/subscriptions", "organizations_url": "https://api.github.com/users/vikigenius/orgs", "repos_url": "https://api.github.com/users/vikigenius/repos", "events_url": "https://api.github.com/users/vikigenius/events{/privacy}", "received_events_url": "https://api.github.com/users/vikigenius/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "`generate` currently does not support `torch.jit.trace`. This is sadly also not on the short-term roadmap.", "@patrickvonplaten, thanks for the response. In that case is there no way to trace the inference process of generative models provided here ? So any kind of inference of the form text -> text (for eg: summarization) cannot be exported to torchscript ? ", "Yes, if you want to make it faster, reduce `num_beams`.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi, are summarization kinds of models still not traceable ??\r\n\r\nI am trying to deploy this onto AWS inferentia, whose prerequisite is that the model should be traceable !!\r\n\r\nfor example this sshleifer/distilbart-cnn-12-6\r\n\r\n@sshleifer, @patrickvonplaten can you guys please help ??", "Hey @DevBey,\r\n\r\nCould you maybe open a new issue that states exactly what doesn't work in your case? This issue is quite old now and it would be nice to have a reproducible code snippet with the current `transformers` version." ]
1,602
1,638
1,608
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-5.8.14_1-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @patrickvonplaten, @sshleifer --> @patrickvonplaten, @sshleifer ## Information Model I am using BART The problem arises when using: * [*] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [*] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. load any model that uses the generate function 2. try to trace it using trace_module <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Can be easily reproduced with the following snippet: ``` import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = 'sshleifer/bart-tiny-random' tokenizer = AutoTokenizer.from_pretrained(model) sqgen_model = AutoModelForSeq2SeqLM.from_pretrained(model, torchscript=True) sqgen_model.eval() dummy_input = ' '.join('dummy' for dummy in range(512)) batch = tokenizer( [dummy_input], return_tensors='pt', truncation=True, padding='longest', ) with torch.no_grad(): traced_model = torch.jit.trace_module( # type: ignore sqgen_model, { 'forward': (batch.input_ids, batch.attention_mask), 'generate': (batch.input_ids, batch.attention_mask), }, ) ``` It throws an error: ``` File "/home/void/.miniconda3/envs/lexml/src/transformers/src/transformers/generation_utils.py", line 288, in generate assert isinstance(max_length, int) and max_length > 0, "`max_length` should be a strictly positive integer." AssertionError: `max_length` should be a strictly positive integer. ``` obviously because the generate function's second argument is supposed to be max_length and not attention_mask ## Expected behavior Should be able to trace models that use the generate function. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7771/timeline
completed
null
null
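Since `generate` itself is not traceable, one pragmatic split is to export only `forward` and keep generation in eager mode, passing `max_length` by keyword so it is not bound to the positional slot that triggered the assertion above. The snippet reuses `sqgen_model`, `tokenizer` and `batch` from the report; the generation settings are illustrative.

```python
import torch

with torch.no_grad():
    # only the forward pass is traced; beam search stays in eager mode
    traced = torch.jit.trace_module(
        sqgen_model, {"forward": (batch.input_ids, batch.attention_mask)}
    )

summary_ids = sqgen_model.generate(
    batch.input_ids,
    attention_mask=batch.attention_mask,  # keyword arguments avoid the max_length mix-up
    max_length=64,
    num_beams=2,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```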
https://api.github.com/repos/huggingface/transformers/issues/7770
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7770/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7770/comments
https://api.github.com/repos/huggingface/transformers/issues/7770/events
https://github.com/huggingface/transformers/issues/7770
721,062,341
MDU6SXNzdWU3MjEwNjIzNDE=
7,770
How to create a QA model where the answer can be from the question text as well?
{ "login": "nrjvarshney", "id": 19836137, "node_id": "MDQ6VXNlcjE5ODM2MTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19836137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nrjvarshney", "html_url": "https://github.com/nrjvarshney", "followers_url": "https://api.github.com/users/nrjvarshney/followers", "following_url": "https://api.github.com/users/nrjvarshney/following{/other_user}", "gists_url": "https://api.github.com/users/nrjvarshney/gists{/gist_id}", "starred_url": "https://api.github.com/users/nrjvarshney/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nrjvarshney/subscriptions", "organizations_url": "https://api.github.com/users/nrjvarshney/orgs", "repos_url": "https://api.github.com/users/nrjvarshney/repos", "events_url": "https://api.github.com/users/nrjvarshney/events{/privacy}", "received_events_url": "https://api.github.com/users/nrjvarshney/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Normally there's no difference. You just provide `[CLS] question [SEP] answer [SEP]` examples to the model, and the `start_positions` and `end_positions` can be indexes of the question. ", "Yes,\r\nCan you point me to such an implementation?\r\nhuggingface Squad Question answering code handles a lot of edge cases like start token should be before the end token, find n best predictions etc. \r\n@NielsRogge , Is there a simpler implementation for the question answering task?\r\n\r\nI found one in the documentation but that doesn't handle the predictions, sanity checks, n_best predictions etc.\r\n\r\n\r\n" ]
1,602
1,603
1,603
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarily intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiasts can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> The SQuAD QA dataset has questions where the answer is a span in the context. How do I create a QA system where the answer string can come from the context or the question text? <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7770/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7769
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7769/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7769/comments
https://api.github.com/repos/huggingface/transformers/issues/7769/events
https://github.com/huggingface/transformers/issues/7769
720,887,012
MDU6SXNzdWU3MjA4ODcwMTI=
7,769
from transformers import RagSequenceForGeneration gives ImportError
{ "login": "mchari", "id": 30506151, "node_id": "MDQ6VXNlcjMwNTA2MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchari", "html_url": "https://github.com/mchari", "followers_url": "https://api.github.com/users/mchari/followers", "following_url": "https://api.github.com/users/mchari/following{/other_user}", "gists_url": "https://api.github.com/users/mchari/gists{/gist_id}", "starred_url": "https://api.github.com/users/mchari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mchari/subscriptions", "organizations_url": "https://api.github.com/users/mchari/orgs", "repos_url": "https://api.github.com/users/mchari/repos", "events_url": "https://api.github.com/users/mchari/events{/privacy}", "received_events_url": "https://api.github.com/users/mchari/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You don't have PyTorch installed in your environment. This is a PyTorch model.", "Indeed , that was the problem.\r\nThe code in https://huggingface.co/transformers/model_doc/rag.html has import torch after the \"from transformers import RagSequenceForGeneration,...\" statement, so I incorrectly concluded that torch is not needed for the import. Also, that RagTokenizer got imported without errors.\r\nThanks!\r\n" ]
1,602
1,602
1,602
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux - Python version:3.6.3 - PyTorch version (GPU?): - Tensorflow version (GPU?):2.3.0 - Using GPU in script?:no - Using distributed or parallel set-up in script?:no ### Who can help @patrickvonplaten @sgugger ## Information Model I am using (): The problem arises when using: ## To reproduce 1.python >> from transformers import RagSequenceForGeneration <- RagTokenizer is imported without any errors ## Expected behavior I shouldn't get ImportError: cannot import name 'RagSequenceForGeneration'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7769/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7768
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7768/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7768/comments
https://api.github.com/repos/huggingface/transformers/issues/7768/events
https://github.com/huggingface/transformers/issues/7768
720,848,012
MDU6SXNzdWU3MjA4NDgwMTI=
7,768
Is there any way to control the input of a layer of `Longformer`?
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
NONE
null
Hello, Is there any way that I can directly control the input to a layer of the `Longformer` model, similar to `GPT2.transformer.h[]`? I tried `best_model_longformer.longformer.encoder.layer[layer_index](input_hidden_state_for_layer)` but it's giving this error: ```python Traceback (most recent call last): File "SEED_125_V20_15_LONGFORMER.py", line 426, in <module> main_function('/home/ec2-user/G1G2.txt','/home/ec2-user/G1G2_answer_num.txt', num_iter) File "SEED_125_V20_15_LONGFORMER.py", line 388, in main_function best_model_longformer) File "SEED_125_V20_15_LONGFORMER.py", line 205, in fill_MC_loss_accuracy_tensor best_model_longformer.longformer.encoder.layer[j](input_hidden_state) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 852, in forward output_attentions=output_attentions, File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 796, in forward output_attentions, File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 241, in forward attention_mask = attention_mask.squeeze(dim=2).squeeze(dim=1) AttributeError: 'NoneType' object has no attribute 'squeeze' ``` :S thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7768/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7767
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7767/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7767/comments
https://api.github.com/repos/huggingface/transformers/issues/7767/events
https://github.com/huggingface/transformers/pull/7767
720,790,120
MDExOlB1bGxSZXF1ZXN0NTAyNzk5MDkw
7,767
Add predict step accumulation
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks so much @sgugger - you're a legend! 🙇 ", "> # What does this PR do?\r\n> Currently, the Trainer accumulates all predictions on the host (GPU or TPU) before gathering across all hosts (in case of distributed training) and moving back to the CPU. This can result in OOM errors when users have a big dataset (the model is already taking up a lot of space on the host) as highlighted in #7232. However moving the predictions to the CPU at each prediction step is also inefficient (particularly on TPU).\r\n> \r\n> This PR aims at fixing the OOM problem while retaining efficiency by introducing a new training argument called `eval_accumulation_step`. If left untouched, the behavior is the same as right now (all predictions accumulated on the host and moved at the end of the prediction loop). If set to an int, the predictions are gathered and moved every `eval_accumulation_step`. This required some clever reorganization of the predictions (see the docstring of `DistributedTensorGatherer` for more details).\r\n> \r\n> In passing I cleaned up the code related to gathering tensors across multiple hosts and fixed the issue of the `loss.item()` (big slow down to do that at every step on TPUs) and accumulated the losses the same way predictions and labels are. This still works for any number of outputs/labels of the model.\r\n> \r\n> To check those changes did not break anything, I ran `test_trainer_distributed.py` on my local setup and created an equivalent for TPUs that I also ran (they both pass).\r\n> \r\n> This slightly change Seq2SeqTrainer (since we don't want the `loss.item()`) so cc @patil-suraj I don't think this should break anything in it.\r\n> \r\n> Fixes #7232\r\n\r\nThanks so much @sgugger \r\n`eval_accumulation_steps` for the argument name and not `eval_accumulation_step` 😉" ]
1,602
1,652
1,602
COLLABORATOR
null
# What does this PR do? Currently, the Trainer accumulates all predictions on the host (GPU or TPU) before gathering across all hosts (in case of distributed training) and moving back to the CPU. This can result in OOM errors when users have a big dataset (the model is already taking up a lot of space on the host) as highlighted in #7232. However moving the predictions to the CPU at each prediction step is also inefficient (particularly on TPU). This PR aims at fixing the OOM problem while retaining efficiency by introducing a new training argument called `eval_accumulation_step`. If left untouched, the behavior is the same as right now (all predictions accumulated on the host and moved at the end of the prediction loop). If set to an int, the predictions are gathered and moved every `eval_accumulation_step`. This required some clever reorganization of the predictions (see the docstring of `DistributedTensorGatherer` for more details). In passing I cleaned up the code related to gathering tensors across multiple hosts and fixed the issue of the `loss.item()` (big slow down to do that at every step on TPUs) and accumulated the losses the same way predictions and labels are. This still works for any number of outputs/labels of the model. To check those changes did not break anything, I ran `test_trainer_distributed.py` on my local setup and created an equivalent for TPUs that I also ran (they both pass). This slightly change Seq2SeqTrainer (since we don't want the `loss.item()`) so cc @patil-suraj I don't think this should break anything in it. <!-- Remove if not applicable --> Fixes #7232
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7767/reactions", "total_count": 7, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7767/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7767", "html_url": "https://github.com/huggingface/transformers/pull/7767", "diff_url": "https://github.com/huggingface/transformers/pull/7767.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7767.patch", "merged_at": 1602690106000 }
https://api.github.com/repos/huggingface/transformers/issues/7766
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7766/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7766/comments
https://api.github.com/repos/huggingface/transformers/issues/7766/events
https://github.com/huggingface/transformers/issues/7766
720,710,326
MDU6SXNzdWU3MjA3MTAzMjY=
7,766
Use Marian-MT to evaluate translated outputs by printing out per-word log-probility
{ "login": "JunjieHu", "id": 5851098, "node_id": "MDQ6VXNlcjU4NTEwOTg=", "avatar_url": "https://avatars.githubusercontent.com/u/5851098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JunjieHu", "html_url": "https://github.com/JunjieHu", "followers_url": "https://api.github.com/users/JunjieHu/followers", "following_url": "https://api.github.com/users/JunjieHu/following{/other_user}", "gists_url": "https://api.github.com/users/JunjieHu/gists{/gist_id}", "starred_url": "https://api.github.com/users/JunjieHu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JunjieHu/subscriptions", "organizations_url": "https://api.github.com/users/JunjieHu/orgs", "repos_url": "https://api.github.com/users/JunjieHu/repos", "events_url": "https://api.github.com/users/JunjieHu/events{/privacy}", "received_events_url": "https://api.github.com/users/JunjieHu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Would love a contribution that implemented it. \r\nCan you paste a working fairseq command to try to emulate?\r\nAlternatively you could just send a PR.", "Hi @sshleifer \r\n\r\nMarian NMT seems to have such functionality as well. Please check their script [here](https://github.com/marian-nmt/marian-examples/blob/master/wmt2017-transformer/run-me.sh#L189).\r\n\r\nI modify the [translation code](https://fairseq.readthedocs.io/en/latest/command_line_tools.html#fairseq-generate) in fairseq named as *validate_lm.py*, and run the bash script to get the per-word log-probability.\r\n```bash\r\npython validate_lm.py \\\r\n $binarized_data_dir \\\r\n --source-lang en --target-lang fr \\\r\n --path $path_to_checkpoint \\\r\n --task translation \\\r\n --valid-subset train \\\r\n --max-sentences 16 \\\r\n --nll-file $out_file\r\n```\r\n\r\nvalidate_lm.py\r\n```python\r\nimport logging\r\nimport sys\r\n\r\nimport torch\r\n\r\nfrom fairseq import checkpoint_utils, distributed_utils, options, utils, tasks\r\nfrom fairseq.logging import metrics, progress_bar\r\nfrom fairseq.options import add_distributed_training_args\r\nfrom fairseq.criterions.cross_entropy import CrossEntropyCriterion\r\n\r\nlogging.basicConfig(\r\n format='%(asctime)s | %(levelname)s | %(name)s | %(message)s',\r\n datefmt='%Y-%m-%d %H:%M:%S',\r\n level=logging.INFO,\r\n stream=sys.stdout,\r\n)\r\nlogger = logging.getLogger('fairseq_cli.validate')\r\n\r\n\r\ndef main(args, override_args=None):\r\n utils.import_user_module(args)\r\n\r\n assert args.max_tokens is not None or args.max_sentences is not None, \\\r\n 'Must specify batch size either with --max-tokens or --max-sentences'\r\n\r\n use_fp16 = args.fp16\r\n use_cuda = torch.cuda.is_available() and not args.cpu\r\n\r\n if override_args is not None:\r\n if isinstance(override_args, dict):\r\n overrides = override_args\r\n else:\r\n overrides = vars(override_args)\r\n print('override_args')\r\n overrides.update(eval(getattr(override_args, 'model_overrides', '{}')))\r\n else:\r\n overrides = None\r\n\r\n # Load ensemble\r\n logger.info('loading model(s) from {}'.format(args.path))\r\n task = tasks.setup_task(args)\r\n models, _model_args = checkpoint_utils.load_model_ensemble(\r\n [args.path],\r\n arg_overrides=overrides,\r\n task=task,\r\n )\r\n model = models[0]\r\n\r\n # Move models to GPU\r\n for model in models:\r\n if use_fp16:\r\n model.half()\r\n if use_cuda:\r\n model.cuda()\r\n\r\n # Print args\r\n logger.info('args', args)\r\n logger.info('overrides', overrides)\r\n\r\n # Build criterion\r\n # criterion = task.build_criterion(args)\r\n criterion = CrossEntropyCriterion(task, False)\r\n criterion.eval()\r\n\r\n for subset in args.valid_subset.split(','):\r\n try:\r\n task.load_dataset(subset, combine=False, epoch=1)\r\n dataset = task.dataset(subset)\r\n except KeyError:\r\n raise Exception('Cannot find dataset: ' + subset)\r\n\r\n # Initialize data iterator\r\n itr = task.get_batch_iterator(\r\n dataset=dataset,\r\n max_tokens=args.max_tokens,\r\n max_sentences=args.max_sentences,\r\n max_positions=utils.resolve_max_positions(\r\n task.max_positions(),\r\n *[m.max_positions() for m in models],\r\n ),\r\n ignore_invalid_inputs=args.skip_invalid_size_inputs_valid_test,\r\n required_batch_size_multiple=args.required_batch_size_multiple,\r\n seed=args.seed,\r\n num_workers=args.num_workers,\r\n ).next_epoch_itr(shuffle=False)\r\n progress = progress_bar.progress_bar(\r\n itr,\r\n log_format=args.log_format,\r\n log_interval=args.log_interval,\r\n prefix=f\"valid on '{subset}' 
subset\",\r\n default_log_format=('tqdm' if not args.no_progress_bar else 'simple'),\r\n )\r\n\r\n fout = open(args.nll_file, 'w')\r\n nll = []\r\n for i, sample in enumerate(progress):\r\n sample = utils.move_to_cuda(sample) if use_cuda else sample\r\n model.eval()\r\n with torch.no_grad():\r\n loss, sample_size, log_output = criterion(model, sample, reduce=False)\r\n nsentences = log_output['nsentences']\r\n loss = loss.view(nsentences, -1).tolist()\r\n for j, sample_id in enumerate(sample['id'].tolist()):\r\n loss_j = [ll for ll in loss[j] if ll > 0]\r\n avg_nll = sum(loss_j) / len(loss_j)\r\n lstr = '\\t'.join([f'{ll:.4f}' for ll in loss_j])\r\n fout.write(f'{sample_id}\\t{avg_nll}\\t{lstr}\\n')\r\n nll.append((sample_id, avg_nll, lstr))\r\n fout.close()\r\n\r\ndef cli_main():\r\n parser = options.get_validation_parser()\r\n add_distributed_training_args(parser)\r\n args = options.parse_args_and_arch(parser)\r\n\r\n # only override args that are explicitly given on the command line\r\n override_parser = options.get_validation_parser()\r\n group = override_parser.add_argument_group(\"Valid BW\")\r\n group.add_argument('--nll-file', type=str, default=None)\r\n add_distributed_training_args(override_parser)\r\n override_args = options.parse_args_and_arch(override_parser, suppress_defaults=True)\r\n distributed_utils.call_main(args, main, override_args=override_args)\r\n\r\n\r\nif __name__ == '__main__':\r\n cli_main()\r\n```\r\n\r\n\r\n\r\n", "I'd tinker with that - perhaps in a few days.", "I started looking into this but to be able to reproduce any of these examples in order to replicate this in transformers I need to spend hours just to set things up :( It'd have been much easier if such requests came with ready data that we could use right away.\r\n\r\nCurrently waiting for very sloooooooow downloads of data from statmt.org - will try to setup fairseq for en-fr...\r\n\r\nwill keep you posted on the progress.\r\n\r\np.s. marianmt I couldn't even build on ubuntu-20.04, so hoping I could sort it out with fairseq", "OK, I found an easy way to set up something that your script would run on:\r\n\r\n```\r\nmkdir -p data-bin\r\ncurl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin\r\ncurl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin\r\npython validate_lm.py data-bin/wmt14.en-fr.joined-dict.newstest2014 --source-lang en --target-lang fr --path data-bin/wmt14.en-fr.joined-dict.transformer/model.pt --task translation --valid-subset test --max-tokens 128\r\n```\r\n\r\n(in the future requests please provide something similar so that the devs could quickly reproduce your example. And of course test that it works. Thank you)\r\n\r\nYour script won't run as is, had to do some tweaks to it. 
But it doesn't matter for now.\r\n\r\nWhat it produces is:\r\n```\r\n1596 0.7027008545895418 1.2075 1.4464 1.1499 0.1134 0.1958 0.1032\r\n1615 3.4015886902809145 5.8378 9.5874 0.6482 0.7548 0.1797\r\n2837 1.7323936596512794 0.8643 0.1775 7.3694 0.1477 0.1032\r\n256 1.5152931759754817 0.8425 7.1255 0.8411 0.0599 0.1208 0.1020\r\n556 2.3045735328147807 6.7926 0.0528 1.0420 2.4513 3.3704 0.1184\r\n2922 1.7860032822936773 0.4388 3.8867 1.9373 0.2531 6.5018 0.1011 1.0688 0.1005\r\n612 1.598365530371666 3.2449 2.8535 0.1290 0.1661\r\n605 1.7411062990625699 4.9944 0.1142 0.1147\r\n1481 0.4938086899263518 0.3987 0.5716 1.2258 0.5801 0.4607 0.1157 0.1041\r\n2013 0.7291228858133157 2.0326 0.1758 0.1322 1.7975 0.1314 0.1051\r\n75 1.2818062901496887 0.6200 0.2217 0.0730 0.1697 7.2470 0.4181 0.2230\r\n279 1.4058668922100748 8.1397 1.1370 0.1082 0.1427 0.0871 0.1210 0.1053\r\n2641 0.1703926378062793 0.3318 0.1146 0.2372 0.1261 0.1727 0.1082 0.1022\r\n1120 2.3883045655157833 2.9336 0.8035 1.6985 4.0525 8.6964 3.0033 0.0963 0.1081 0.1024\r\n1031 0.6803936168551445 4.6826 0.7309 0.0864 0.1111 0.1247 0.0863 0.5777 0.0856 0.2151 0.1035\r\n2484 0.17211276292800903 0.3391 0.2407 0.0678 0.4290 0.1203 0.0990 0.1382 0.1095 0.0758 0.1018\r\n2600 1.3488108797797136 4.0854 0.1176 1.0209 0.1188 0.0373 3.7863 0.2754\r\n2814 1.5876335703900881 1.6473 2.3662 0.2776 0.0731 4.8990 1.7478 0.1024\r\n2822 0.6652300975152424 0.6418 0.1277 0.6620 2.6037 0.4055 0.1121 0.1038\r\n2854 0.396458768418857 0.5300 0.1097 1.1606 0.4882 0.2649 0.1183 0.1035\r\n169 1.5903700785711408 4.9675 0.0435 0.1813 1.8350 0.1336 5.2166 0.2192 0.1262\r\n234 2.0860743997618556 2.8854 0.2894 0.3069 0.1739 0.0880 12.6388 0.1717 0.1345\r\n368 0.9114988259971142 0.7726 0.7938 1.8604 3.4764 0.0779 0.0802 0.1271 0.1035\r\n387 0.24928769376128912 0.4698 0.0763 0.2203 0.1345 0.3810 0.4948 0.1152 0.1024\r\n596 0.7321822415105999 5.0991 0.1796 0.1138 0.1113 0.1246 0.0427 0.0759 0.1104\r\n1200 0.587307695299387 1.3917 1.1277 1.4277 0.0914 0.1485 0.1969 0.2117 0.1029\r\n2015 1.1963263098150492 5.4510 1.0897 0.4743 0.0972 0.2240 1.2175 0.8575 0.1595\r\n2216 1.3388446755707264 4.0211 4.2651 0.4503 0.1335 0.0482 0.3432 1.3127 0.1366\r\n2994 0.3156363896559924 0.1788 0.1143 0.1824 1.7381 0.0294 0.0742 0.1021 0.1059\r\n```\r\nI don't suppose this is what you are after, do you?\r\n\r\n", "i think the original poster wants a way to get those log probs for models that are not in fair aew, like MarianMT.", "I think I understood that, I'm referring to:\r\n\r\n> I modify the translation code in fairseq named as validate_lm.py, and run the bash script to get the per-word log-probability.\r\n\r\nwhich is what I run and displayed the top of the results above. But in the OP it was requested:\r\n\r\n> For example:\r\n> Source input (x): I would like to run this experiment.\r\n> Translated output (y) in Chinese: 我 希望 跑 这个 实验 。\r\n> The word-level log probability: -0.3 -0.4 -0.23 -0.43 -0.23 -0.8\r\n\r\nSo the example code doesn't match the original request and I'm asking to clarify to what output is desired.\r\n\r\nIn other words - do you want the word and the average log probability of its tokens printed? If yes, I guess we gather the log probs for each token and then somehow keep track of the de-tokenizer and then average the ones that end up comprising each word. 
I don't have any experience with Kanji - do tokenizers there split the characters into sub-characters, or is it the case where each token is a word, in which case you're asking to just print the log probabilities of each token in the translation output?\r\n\r\nCould you redefine your request in terms of latin 2 latin language to remove this possible ambiguity and then once working adapt it to Chinese? I hope it makes sense where I am not sure what exactly you're after.\r\n", "@JunjieHu Is this roughly what you are looking for? \r\n```python\r\nbatch = tokenizer.prepare_seq2seq_batch(\": I would like to run this experiment.\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('opus-mt/marian-en-zh')\r\ngenerated_ids = model.generate(batch)\r\noutputs = model(batch.input_ids, labels=generated_ids)\r\nlog_probas == outputs.logits[:, generated_ids]\r\n```\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi @sshleifer Sorry for the late response! Yes! This is exactly what I want! Thanks!" ]
1,602
1,610
1,610
NONE
null
# ❓ Questions & Help I am going to use Marian's opus-mt to evaluate a translated output y given the input x. I want to get the model's log-probability of y given x, i.e., P(y|x). I didn't find any usage example in the documentation [here](https://huggingface.co/transformers/model_doc/marian.html#multilingual-models). @sshleifer do you know of an example for this case? Thanks! For example: Source input (x): I would like to run this experiment. Translated output (y) in Chinese: 我 希望 跑 这个 实验 。 The word-level log probability: -0.3 -0.4 -0.23 -0.43 -0.23 -0.8 BTW, fairseq supports this function in its [command-line tools](https://fairseq.readthedocs.io/en/latest/command_line_tools.html).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7766/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7765
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7765/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7765/comments
https://api.github.com/repos/huggingface/transformers/issues/7765/events
https://github.com/huggingface/transformers/issues/7765
720,642,460
MDU6SXNzdWU3MjA2NDI0NjA=
7,765
Seq2seq finetune example: "Please save or load state of the optimizer"
{ "login": "jsrozner", "id": 1113285, "node_id": "MDQ6VXNlcjExMTMyODU=", "avatar_url": "https://avatars.githubusercontent.com/u/1113285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jsrozner", "html_url": "https://github.com/jsrozner", "followers_url": "https://api.github.com/users/jsrozner/followers", "following_url": "https://api.github.com/users/jsrozner/following{/other_user}", "gists_url": "https://api.github.com/users/jsrozner/gists{/gist_id}", "starred_url": "https://api.github.com/users/jsrozner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jsrozner/subscriptions", "organizations_url": "https://api.github.com/users/jsrozner/orgs", "repos_url": "https://api.github.com/users/jsrozner/repos", "events_url": "https://api.github.com/users/jsrozner/events{/privacy}", "received_events_url": "https://api.github.com/users/jsrozner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'll leave it to @sshleifer - haven't really used the seq2seq fine-tuning too much.\r\n\r\nMaybe @patil-suraj has also an idea here :-) ", "You can safely ignore the `lr_scheduler` warning. `optimzer` is saved, but torch `lr_scheduler` warns you anyway just so you don't forget.\r\n\r\nNot sure about the first warning, but you can also safely ignore that\r\n", "@patil-suraj Thank you! Would there be an easy way to somehow silence the warning, or print a logging statement from huggingface library that the warning can be safely ignored? ", "I believe this warning has been hidden on the `master` branch, and will be hidden in the next release. See [this](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_pt_utils.py#L114-L119).", "Cool! Looks like originally in #7401. I pulled master and confirmed that this is fixed. \r\n\r\nAny notes on the computational graph warning that also pops up?\r\n../python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the model.example_input_array attribute is not set or input_array was not given" ]
1,602
1,607
1,607
CONTRIBUTOR
null
When running the example scripts in examples/seq2seq/finetune_bart and finetune_t5, get warning messages: ## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-66-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Ran both with and without gpus; same result - Using distributed or parallel set-up in script?: no ### Who can help @sshleifer for examples/seq2seq, Bart @patrickvonplaten (maybe because this also happens in T5?) ## Information Model I am using (Bert, XLNet ...): Occurs when running bart and also when running T5 via the examples/seq2seq/finetune The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Steps to reproduce: 1) clone transformers into new directory 2) Set up environment (new): cd transformers && pip install .e; cd examples && pip install -r requirements.txt 3) cd seq2seq && ./finetune_t5_bart_tiny.sh Observe that warnings are printed: ../python3.8/site-packages/pytorch_lightning/utilities/distributed.py:37: UserWarning: Could not log computational graph since the `model.example_input_array` attribute is not set or `input_array` was not given warnings.warn(*args, **kwargs) .../python3.8/site-packages/torch/optim/lr_scheduler.py:200: UserWarning: Please also save or load the state of the optimzer when saving or loading the scheduler. warnings.warn(SAVE_STATE_WARNING, UserWarning) (There is both the optimizer warning and the computational graph logging warning) ## Expected behavior Should not see warnings for the given example. ## Other notes: There was a related issue where supplementary files / checkpoints were not being saved, but that seems to be fixed now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7765/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7765/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7764
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7764/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7764/comments
https://api.github.com/repos/huggingface/transformers/issues/7764/events
https://github.com/huggingface/transformers/issues/7764
720,558,413
MDU6SXNzdWU3MjA1NTg0MTM=
7,764
Update of DialoGPT `max_length`
{ "login": "guillaume-be", "id": 27071604, "node_id": "MDQ6VXNlcjI3MDcxNjA0", "avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guillaume-be", "html_url": "https://github.com/guillaume-be", "followers_url": "https://api.github.com/users/guillaume-be/followers", "following_url": "https://api.github.com/users/guillaume-be/following{/other_user}", "gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}", "starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions", "organizations_url": "https://api.github.com/users/guillaume-be/orgs", "repos_url": "https://api.github.com/users/guillaume-be/repos", "events_url": "https://api.github.com/users/guillaume-be/events{/privacy}", "received_events_url": "https://api.github.com/users/guillaume-be/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Very true! Thanks for the notification - will upload all DialoGPT configs.", "Done - note however that the \"task\" name was `conversational` not `dialogue`" ]
1,602
1,602
1,602
CONTRIBUTOR
null
### Who can help @patrickvonplaten ## Information Following https://github.com/huggingface/transformers/pull/5516, the DialoGPT models `max_length` has not been updated, and defaults to the `generate` value of 20. This value is very low for a conversational pipeline and would lead to answers that completely ignore the history (truncation happens to ensure enough space is available for the response). ## Expected behavior - The configuration files (e.g. https://s3.amazonaws.com/models.huggingface.co/bert/microsoft/DialoGPT-medium/config.json) need to store a `max_length` equal to 1000. - If the suggested config file structure from the PR is adopted, the code should be updated to read this value instead ```json "task_specific_params" : { "dialogue": { "max_length": 1000 } } ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7764/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7764/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7763
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7763/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7763/comments
https://api.github.com/repos/huggingface/transformers/issues/7763/events
https://github.com/huggingface/transformers/pull/7763
720,512,850
MDExOlB1bGxSZXF1ZXN0NTAyNTQ2OTIy
7,763
Allow Custom Dataset in RAG Retriever
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I took your comments into account, let me know if you have other things to improve.\r\nAlso I had to change the DPR encoder from the one trained on Natural Questions to the one trained ont the multiset/hybrid dataset to match the embeddings used by the Rag team.", "can't wait to try this out. \r\n@lhoestq , can this functionality be adapted to address #6399 ?", "Glad this got implemented! Many thanks @lhoestq . I checked out a copy, added a custom 25MB CSV file, and gave it a run:\r\npython examples/rag/use_own_knowledge_dataset.py\r\n\r\nGot this.\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/rag/use_own_knowledge_dataset.py\", line 195, in <module>\r\n main(tmp_dir, rag_example_args, processing_args, index_hnsw_args)\r\n File \"examples/rag/use_own_knowledge_dataset.py\", line 84, in main\r\n ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(rag_example_args.dpr_ctx_encoder_model_name)\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1544, in from_pretrained\r\n list(cls.vocab_files_names.values()),\r\nOSError: Model name 'facebook/dpr-ctx_encoder-multiset-base' was not found in tokenizers model name list (facebook/dpr-ctx_encoder-single-nq-base). We assumed 'facebook/dpr-ctx_encoder-multiset-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\r\n```\r\n\r\nI then switched to **dpr-ctx_encoder-single-nq-base**. In this case, indexing was successful. However, after Step 3 - Load RAG, the script started downloading the 74G wiki dataset (which should not be necessary) and then errored out on line 195.\r\n\r\n```\r\n$ python examples/rag/use_own_knowledge_dataset.py\r\nINFO:__main__:Step 1 - Create the dataset\r\nUsing custom data configuration default\r\nReusing dataset csv (/home/ioannis/.cache/huggingface/datasets/csv/default-49c04e2dbd1cfa6f/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4)\r\n100%|█████████████████████████████████████████| 155/155 [00:01<00:00, 82.27ba/s]\r\n100%|█████████████████████████████████████| 18531/18531 [28:23<00:00, 10.88ba/s]\r\nINFO:__main__:Step 2 - Index the dataset\r\n100%|█████████████████████████████████████████| 297/297 [30:22<00:00, 6.14s/it]\r\nINFO:__main__:Step 3 - Load RAG\r\nUsing custom data configuration psgs_w100.nq.no_index\r\nDownloading and preparing dataset wiki_dpr/psgs_w100.nq.no_index (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/ioannis/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...\r\nDownloading: 100%|█████████████████████████| 11.2k/11.2k [00:00<00:00, 2.65MB/s]\r\nDownloading: 100%|███████████████████████| 78.4G/78.4G [1:54:59<00:00, 11.4MB/s]\r\nDataset wiki_dpr downloaded and prepared to /home/ioannis/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2. 
Subsequent calls will reuse this data.\r\nUsing custom data configuration psgs_w100.nq.custom\r\nDownloading and preparing dataset wiki_dpr/psgs_w100.nq.custom (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/ioannis/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.custom/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n\t- Avoid using `tokenizers` before the fork if possible\r\n\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\n...\r\n...\r\nDownloading: 100%|██████████████████████████| 1.33G/1.33G [29:21<00:00, 753kB/s]\r\nDownloading: 100%|██████████████████████████| 1.33G/1.33G [29:30<00:00, 749kB/s]\r\n...\r\n...\r\nTraceback (most recent call last): \r\n File \"examples/rag/use_own_knowledge_dataset.py\", line 195, in <module>\r\n main(tmp_dir, rag_example_args, processing_args, index_hnsw_args)\r\n File \"examples/rag/use_own_knowledge_dataset.py\", line 116, in main\r\n rag_example_args.rag_model_name, index_name=\"custom\", indexed_dataset=dataset\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 321, in from_pretrained\r\n config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 310, in __init__\r\n self.init_retrieval()\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 338, in init_retrieval\r\n self.index.init_index()\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 248, in init_index\r\n dummy=self.use_dummy_dataset,\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/builder.py\", line 553, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/builder.py\", line 841, in _prepare_split\r\n generator, unit=\" examples\", total=split_info.num_examples, leave=False, disable=not_verbose\r\n File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/tqdm/std.py\", line 1133, in __iter__\r\n for obj in iterable:\r\n File \"/home/ioannis/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py\", line 124, in _generate_examples\r\n id, text, title = line.strip().split(\"\\t\")\r\nValueError: not enough values to unpack (expected 3, got 2)\r\n\r\n```\r\n", "> Glad this got implemented! Many thanks @lhoestq . 
I checked out a copy, added a custom 25MB CSV file, and gave it a run:\r\n> python examples/rag/use_own_knowledge_dataset.py\r\n> \r\n> Got this.\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"examples/rag/use_own_knowledge_dataset.py\", line 195, in <module>\r\n> main(tmp_dir, rag_example_args, processing_args, index_hnsw_args)\r\n> File \"examples/rag/use_own_knowledge_dataset.py\", line 84, in main\r\n> ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(rag_example_args.dpr_ctx_encoder_model_name)\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1544, in from_pretrained\r\n> list(cls.vocab_files_names.values()),\r\n> OSError: Model name 'facebook/dpr-ctx_encoder-multiset-base' was not found in tokenizers model name list (facebook/dpr-ctx_encoder-single-nq-base). We assumed 'facebook/dpr-ctx_encoder-multiset-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\r\n> ```\r\n> \r\n> I then switched to **dpr-ctx_encoder-single-nq-base**. In this case, indexing was successful. However, after Step 3 - Load RAG, the script started downloading the 74G wiki dataset (which should not be necessary) and then errored out on line 195.\r\n> \r\n> ```\r\n> $ python examples/rag/use_own_knowledge_dataset.py\r\n> INFO:__main__:Step 1 - Create the dataset\r\n> Using custom data configuration default\r\n> Reusing dataset csv (/home/ioannis/.cache/huggingface/datasets/csv/default-49c04e2dbd1cfa6f/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4)\r\n> 100%|█████████████████████████████████████████| 155/155 [00:01<00:00, 82.27ba/s]\r\n> 100%|█████████████████████████████████████| 18531/18531 [28:23<00:00, 10.88ba/s]\r\n> INFO:__main__:Step 2 - Index the dataset\r\n> 100%|█████████████████████████████████████████| 297/297 [30:22<00:00, 6.14s/it]\r\n> INFO:__main__:Step 3 - Load RAG\r\n> Using custom data configuration psgs_w100.nq.no_index\r\n> Downloading and preparing dataset wiki_dpr/psgs_w100.nq.no_index (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/ioannis/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...\r\n> Downloading: 100%|█████████████████████████| 11.2k/11.2k [00:00<00:00, 2.65MB/s]\r\n> Downloading: 100%|███████████████████████| 78.4G/78.4G [1:54:59<00:00, 11.4MB/s]\r\n> Dataset wiki_dpr downloaded and prepared to /home/ioannis/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.no_index/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2. Subsequent calls will reuse this data.\r\n> Using custom data configuration psgs_w100.nq.custom\r\n> Downloading and preparing dataset wiki_dpr/psgs_w100.nq.custom (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/ioannis/.cache/huggingface/datasets/wiki_dpr/psgs_w100.nq.custom/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...\r\n> huggingface/tokenizers: The current process just got forked, after parallelism has already been used. 
Disabling parallelism to avoid deadlocks...\r\n> To disable this warning, you can either:\r\n> \t- Avoid using `tokenizers` before the fork if possible\r\n> \t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\n> huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\n> ...\r\n> ...\r\n> Downloading: 100%|██████████████████████████| 1.33G/1.33G [29:21<00:00, 753kB/s]\r\n> Downloading: 100%|██████████████████████████| 1.33G/1.33G [29:30<00:00, 749kB/s]\r\n> ...\r\n> ...\r\n> Traceback (most recent call last): \r\n> File \"examples/rag/use_own_knowledge_dataset.py\", line 195, in <module>\r\n> main(tmp_dir, rag_example_args, processing_args, index_hnsw_args)\r\n> File \"examples/rag/use_own_knowledge_dataset.py\", line 116, in main\r\n> rag_example_args.rag_model_name, index_name=\"custom\", indexed_dataset=dataset\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 321, in from_pretrained\r\n> config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 310, in __init__\r\n> self.init_retrieval()\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 338, in init_retrieval\r\n> self.index.init_index()\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/transformers/retrieval_rag.py\", line 248, in init_index\r\n> dummy=self.use_dummy_dataset,\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/load.py\", line 611, in load_dataset\r\n> ignore_verifications=ignore_verifications,\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n> dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/builder.py\", line 553, in _download_and_prepare\r\n> self._prepare_split(split_generator, **prepare_split_kwargs)\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/datasets/builder.py\", line 841, in _prepare_split\r\n> generator, unit=\" examples\", total=split_info.num_examples, leave=False, disable=not_verbose\r\n> File \"/home/ioannis/anaconda3/envs/transformers/lib/python3.6/site-packages/tqdm/std.py\", line 1133, in __iter__\r\n> for obj in iterable:\r\n> File \"/home/ioannis/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py\", line 124, in _generate_examples\r\n> id, text, title = line.strip().split(\"\\t\")\r\n> ValueError: not enough values to unpack (expected 3, got 2)\r\n> ```\r\n\r\nI am facing the same issue.", "Thanks for reporting I'm looking into it", "> Glad this got implemented! Many thanks @lhoestq . I checked out a copy, added a custom 25MB CSV file, and gave it a run:\r\n> python examples/rag/use_own_knowledge_dataset.py\r\n> \r\n> Got this.\r\n> \r\n> ```\r\n> OSError: Model name 'facebook/dpr-ctx_encoder-multiset-base' was not found in tokenizers model name list (facebook/dpr-ctx_encoder-single-nq-base). 
We assumed 'facebook/dpr-ctx_encoder-multiset-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\r\n> ```\r\n> \r\n> I then switched to **dpr-ctx_encoder-single-nq-base**. In this case, indexing was successful. However, after Step 3 - Load RAG, the script started downloading the 74G wiki dataset (which should not be necessary) and then errored out on line 195.\r\n> \r\n> ```\r\n> $ python examples/rag/use_own_knowledge_dataset.py\r\n> ...\r\n> ValueError: not enough values to unpack (expected 3, got 2)\r\n> ```\r\n\r\nYou're having this issue because you are running the script with a version of transformers that doesn't include the changes I had to make in this PR to support custom datasets. This PR not only adds an example script, but there are also changes to make it possible in the core code of the RAG retriever.\r\n\r\nEverything works fine if you have all the changes of this PR", "> > Glad this got implemented! Many thanks @lhoestq . I checked out a copy, added a custom 25MB CSV file, and gave it a run:\r\n> > python examples/rag/use_own_knowledge_dataset.py\r\n> > Got this.\r\n> > ```\r\n> > OSError: Model name 'facebook/dpr-ctx_encoder-multiset-base' was not found in tokenizers model name list (facebook/dpr-ctx_encoder-single-nq-base). We assumed 'facebook/dpr-ctx_encoder-multiset-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.\r\n> > ```\r\n> > \r\n> > \r\n> > I then switched to **dpr-ctx_encoder-single-nq-base**. In this case, indexing was successful. However, after Step 3 - Load RAG, the script started downloading the 74G wiki dataset (which should not be necessary) and then errored out on line 195.\r\n> > ```\r\n> > $ python examples/rag/use_own_knowledge_dataset.py\r\n> > ...\r\n> > ValueError: not enough values to unpack (expected 3, got 2)\r\n> > ```\r\n> \r\n> You're having this issue because you are running the script with a version of transformers that doesn't include the changes I had to make in this PR to support custom datasets. This PR not only adds an example script, but there are also changes to make it possible in the core code of the RAG retriever.\r\n> \r\n> Everything works fine if you have all the changes of this PR\r\n\r\nHaving a separate script to fine-tune with custom datasets would be super useful!", "> Having a separate script to fine-tune with custom datasets would be super useful!\r\n\r\nI am adding flags to the fine-tuning scripts to make it work with a custom retriever ;)", "Ah, I think my mistake was that I was using the previous conda environment of transformers (with the new branch). Trying this again with a new env now :)", "> > I just checkd out [14420dc](https://github.com/huggingface/transformers/commit/14420dcbfa0f06012152cc66a7af84d7165a2a17) (committed one hour ago) and I am still getting the same error. Also, it was this branch that was giving the previous errors, not the main transformers branch. Perhaps some of your files have not been updated in the repository? Confused :)\r\n> > `OSError: Model name 'facebook/dpr-ctx_encoder-multiset-base' was not found in tokenizers model name list (facebook/dpr-ctx_encoder-single-nq-base). 
We assumed 'facebook/dpr-ctx_encoder-multiset-base' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.`\r\n> \r\n> Could you try to run\r\n> \r\n> ```python\r\n> from transformers import DPRContextEncoderTokenizerFast\r\n> tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n> ```\r\n> \r\n> and let me know if you're having the OSError on [14420dc](https://github.com/huggingface/transformers/commit/14420dcbfa0f06012152cc66a7af84d7165a2a17) ?\r\n> \r\n> Also just to help me fix this issue, could you also tell me the input of this code please:\r\n> \r\n> ```python\r\n> from transformers.tokenization_dpr_fast import CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES\r\n> print(CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES)\r\n> # it should print {'facebook/dpr-ctx_encoder-single-nq-base': 512, 'facebook/dpr-ctx_encoder-multiset-base': 512}\r\n> ```\r\n\r\nI was using the old transformers conda env. Doing things over again now and will report back asap!", "Ok thanks !\r\nI deleted my comment since I just noticed yours about the wrong env.\r\nIt should work for you now in a new env", "I added some tests for the distributed retriever for fine-tuning.\r\nUnless you still have an issue @ioannist this PR should be ready.\r\nCc @patrickvonplaten if you want to take a look at the new changes", "> Ok thanks !\r\n> I deleted my comment since I just noticed yours about the wrong env.\r\n> It should work for you now in a new env\r\n\r\nI created a new conda env from 14420dcb, installed pytorch and tf 2.2 (2.0 did not work). I ran again into the OS error. Here is the output on the tests you asked me to run +1 more.\r\n\r\n(HEAD detached at 14420dcb)\r\n\r\n```\r\nfrom transformers import DPRContextEncoderTokenizerFast\r\ntokenizer = DPRContextEncoderTokenizerFast.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n```\r\nNo error\r\n\r\n```\r\nfrom transformers.tokenization_dpr_fast import CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES\r\nprint(CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES)\r\n```\r\nModuleNotFoundError: No module named 'transformers.tokenization_dpr_fast'\r\nI checked if the tokenization_dpr_fast file is under src/transformers. It's there.\r\n\r\n```\r\nfrom transformers.tokenization_dpr import CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES\r\nprint(CONTEXT_ENCODER_PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES)\r\n```\r\n{'facebook/dpr-ctx_encoder-single-nq-base': 512}\r\n", "Give me an hour to test this more. I may be messing up somewhere and don't wanna waste your time..", "If you can't import `transformers.tokenization_dpr_fast` that must be an environment issue. This file was added recently on the master branch. It looks like your env imports a version of transformers that is not up-to-date. That's why you're having an OSError.\r\n\r\nMaybe you can run `pip show transformers` to check the location of your transformers installation ?", "> If you can't import `transformers.tokenization_dpr_fast` that must be an environment issue. This file was added recently on the master branch. It looks like your env imports a version of transformers that is not up-to-date. That's why you're having an OSError.\r\n> \r\n> Maybe you can run `pip show transformers` to check the location of your transformers installation ?\r\n\r\nOk, I just did the env again for the 3rd time and it works! 
No idea what i messed up before. \r\n\r\nStep -3 Load Rag in progress :)\r\n", "Slow tests pass. This one is ready to merge.\r\n", "@lhoestq - feel free to merge whenever!", "When I do \"from transformers import DPRContextEncoder\", I get an error :\r\nFile \"convert_slow_tokenizer.py\", line 24, in <module>\r\n from tokenizers.models import BPE, Unigram, WordPiece\r\nImportError: cannot import name 'Unigram'\r\n\r\nUnigram is missing in tokenizers 0.8.1.rc2. Needed to update to 0.9.0", "> I took your comments into account, let me know if you have other things to improve.\r\n> Also I had to change the DPR encoder from the one trained on Natural Questions to the one trained ont the multiset/hybrid dataset to match the embeddings used by the Rag team.\r\n\r\n@lhoestq \r\n\r\nHi, can you elaborate on the change you made in the DPR bit more? My understanding is, you have pretrained the DPR with a hybrid dataset to improve the performance when encoding a custom knowledge base. \r\n\r\nIf you have pretrained the DPR, can you please publish the code? \r\n\r\nCan you please refer to this issue also. \r\n\r\nhttps://github.com/huggingface/transformers/issues/8037\r\n\r\nThanks a lot.", "There are two versions of DPR in the paper. One trained on NQ and one trained on various datasets. The authors released the code and the weights in this [this repo](https://github.com/facebookresearch/DPR).\r\n\r\nThe change I did was just to use the weight of the second one, since it was the one used for RAG.", "Thanks. So as mentioned in the RAG paper, we can use the doc encoder to get\nembeddings for any custom dataset. Later we can only fine tune the BART\nmodel and the question encoder.\n\nOn Tue, Oct 27, 2020, 01:42 Quentin Lhoest <[email protected]> wrote:\n\n> There are two versions of DPR in the paper. One trained on NQ and one\n> trained on various datasets. The authors released the code and the weights\n> in this this repo <https://github.com/facebookresearch/DPR>.\n>\n> The change I did was just to use the weight of the second one, since it\n> was the one used for RAG.\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/7763#issuecomment-716521222>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGUJ5EC6J2ROO56RXB3SMVVDPANCNFSM4SPHCGQQ>\n> .\n>\n", "Yes exactly :) ", "@lhoestq , I tried use_own_knowledge_dataset.py to try and retrieve passages from a custom dataset in response to queries. It works, though the relevance of the results isn't great. \r\nI wanted to try and use finetune.sh. Is it possible to finetune just the retriever ? Is there a sample format for the training data ? Looks like it will need the question and passages as input. Need positive and negative passages ? \r\nThanks!", "@mchari He previously said (https://github.com/huggingface/transformers/issues/8037), it is possible to retrain DPR with Facebook code and then convert the checkpoint to the huge face compatible.\r\n\r\nWhat if you fine-tune the RAG system with your own data letting the question encoder to get fine-tuned? It seems like the pre-training of DPR can be a hard task since the results can depend on the selection procedure of the negative samples as mentioned in the paper. " ]
1,602
1,605
1,603
MEMBER
null
As asked in #7725, #7462 and #7631, I added a way to let users build and load their own knowledge dataset for RAG. I also added an example script that shows how to do that from csv files. Before merging I'd like to make sure it creates exactly the same embeddings as the ones that were computed by the RAG team. I might need to adjust the tokenization and maybe change the DPR encoder from the one trained on Natural Questions to the one trained on the multiset/hybrid dataset. Any feedback on the example and the HFIndex changes is welcome! More details about the changes: previously the HFIndex only allowed loading existing ("canonical") datasets from the datasets library, so I split it into two classes: `CanonicalHFIndex` to load canonical datasets and `CustomHFIndex` for custom user-defined ones. Moreover, `config.index_name` used to accept any canonical dataset name, or "legacy" for the index the RAG team first provided. Now `config.index_name` can also accept "custom" for custom user-defined indexed datasets.
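For readers of this record, a minimal sketch of how the "custom" index described above might be used. The argument names (`indexed_dataset` in particular) and the file paths are assumptions based on the example script and could differ in the merged API:

```python
# Hedged sketch: plugging a pre-embedded, FAISS-indexed dataset into RAG via the
# "custom" index_name described above. Paths and the `indexed_dataset` kwarg are
# illustrative assumptions, not confirmed API details.
from datasets import load_from_disk
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

dataset = load_from_disk("path/to/my_knowledge_dataset")          # columns: title, text, embeddings
dataset.load_faiss_index("embeddings", "path/to/my_index.faiss")  # attach the FAISS index

retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="custom", indexed_dataset=dataset
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)
tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")

# Encode a question with the question-encoder tokenizer and generate an answer.
input_ids = tokenizer.question_encoder(
    "who holds the record in 100m freestyle", return_tensors="pt"
)["input_ids"]
generated = model.generate(input_ids)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```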
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7763/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7763", "html_url": "https://github.com/huggingface/transformers/pull/7763", "diff_url": "https://github.com/huggingface/transformers/pull/7763.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7763.patch", "merged_at": 1603129366000 }
https://api.github.com/repos/huggingface/transformers/issues/7762
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7762/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7762/comments
https://api.github.com/repos/huggingface/transformers/issues/7762/events
https://github.com/huggingface/transformers/pull/7762
720,466,961
MDExOlB1bGxSZXF1ZXN0NTAyNTA1NDU0
7,762
Faster pegasus tokenization test with reduced data size
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
This used to take 10s (and tokenize 100K words); it now tokenizes 1k words and takes 1s.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7762/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7762", "html_url": "https://github.com/huggingface/transformers/pull/7762", "diff_url": "https://github.com/huggingface/transformers/pull/7762.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7762.patch", "merged_at": 1602620550000 }
https://api.github.com/repos/huggingface/transformers/issues/7761
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7761/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7761/comments
https://api.github.com/repos/huggingface/transformers/issues/7761/events
https://github.com/huggingface/transformers/issues/7761
720,402,149
MDU6SXNzdWU3MjA0MDIxNDk=
7,761
Deutsch to English Translation Model by Google doesn't work anymore...
{ "login": "avacaondata", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avacaondata", "html_url": "https://github.com/avacaondata", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "repos_url": "https://api.github.com/users/avacaondata/repos", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Do you want to post the full error message? (and the information asked in the issue template)", "These models have been added last months, so they shouldn't have changed much. The full issue template filled would be very helpful here!", "This is the error: \r\nValueError: Unrecognized model identifier: bert-generation. Should contain one of retribert, t5, mobilebert, distilbert, albert, camembert, xlm-roberta, pegasus, marian, mbart, bart, reformer, longformer, roberta, flaubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm, ctrl, electra, encoder-decoder\r\n\r\nUsing exactly the code appearing in the link I passed:\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/bert2bert_L-24_wmt_de_en\", pad_token=\"<pad>\", eos_token=\"</s>\", bos_token=\"<s>\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/bert2bert_L-24_wmt_de_en\")\r\n\r\nsentence = \"Willst du einen Kaffee trinken gehen mit mir?\"\r\n\r\ninput_ids = tokenizer(sentence, return_tensors=\"pt\", add_special_tokens=False).input_ids\r\noutput_ids = model.generate(input_ids)[0]\r\nprint(tokenizer.decode(output_ids, skip_special_tokens=True))\r\n```\r\n\r\nTransformers version: 3.1.0\r\n", "@patrickvonplaten what should be done here? The `BertGeneration` model cannot be loaded directly through the `AutoModelForSeq2SeqLM` auto-model, can it?", "Then how could I load it?\r\n", "Hey @alexvaca0 - the google/encoder-decoder models were released in transformers 3.2.0 => so you will have to update your transformers version for it :-) It should then work as expected.", "Ohhh so sorry, my bad :( Thanks a lot for the quick response! :) ", "I think this may not have been fully resolved? I'm getting a simmilar error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 926, in from_pretrained\r\n state_dict = torch.load(resolved_archive_file, map_location=\"cpu\")\r\n File \"/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/serialization.py\", line 527, in load\r\n with _open_zipfile_reader(f) as opened_zipfile:\r\n File \"/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/serialization.py\", line 224, in __init__\r\n super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))\r\nRuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1579022060824/work/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. 
(init at /opt/conda/conda-bld/pytorch_1579022060824/work/caffe2/serialize/inline_container.cc:132)\r\nframe #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7fd86f9d2627 in /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/lib/libc10.so)\r\nframe #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1f5b (0x7fd82fbbb9ab in /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so)\r\nframe #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x64 (0x7fd82fbbcbc4 in /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so)\r\nframe #3: <unknown function> + 0x6d2146 (0x7fd87067e146 in /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\r\nframe #4: <unknown function> + 0x28ba06 (0x7fd870237a06 in /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)\r\n<omitting python frames>\r\nframe #37: __libc_start_main + 0xe7 (0x7fd87474cb97 in /lib/x86_64-linux-gnu/libc.so.6)\r\n\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"wmt_test.py\", line 26, in <module>\r\n model = AutoModelForSeq2SeqLM.from_pretrained(\"google/bert2bert_L-24_wmt_de_en\").to(device)\r\n File \"/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/modeling_auto.py\", line 1073, in from_pretrained\r\n return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)\r\n File \"/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 929, in from_pretrained\r\n \"Unable to load weights from pytorch checkpoint file. \"\r\nOSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.\r\n```\r\n\r\npython = 3.76, torch==1.4.0, transformers==3.2.0", "Hi @ezekielbarnett could you open a new issue and fill the issue template? A reproducible code example would be particularly helpful here." ]
1,602
1,607
1,602
NONE
null
Hi, the model in https://huggingface.co/google/bert2bert_L-24_wmt_de_en doesn't work anymore. It seems that the library has changed a lot since the model was added, so the classes themselves seem to have changed names etc. Can anyone tell me how I could use it with the current library functionality? Thanks in advance! :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7761/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7760
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7760/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7760/comments
https://api.github.com/repos/huggingface/transformers/issues/7760/events
https://github.com/huggingface/transformers/issues/7760
720,388,277
MDU6SXNzdWU3MjAzODgyNzc=
7,760
AttributeError: 'tuple' object has no attribute 'detach'
{ "login": "ShivanshuPurohit", "id": 42869065, "node_id": "MDQ6VXNlcjQyODY5MDY1", "avatar_url": "https://avatars.githubusercontent.com/u/42869065?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShivanshuPurohit", "html_url": "https://github.com/ShivanshuPurohit", "followers_url": "https://api.github.com/users/ShivanshuPurohit/followers", "following_url": "https://api.github.com/users/ShivanshuPurohit/following{/other_user}", "gists_url": "https://api.github.com/users/ShivanshuPurohit/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShivanshuPurohit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShivanshuPurohit/subscriptions", "organizations_url": "https://api.github.com/users/ShivanshuPurohit/orgs", "repos_url": "https://api.github.com/users/ShivanshuPurohit/repos", "events_url": "https://api.github.com/users/ShivanshuPurohit/events{/privacy}", "received_events_url": "https://api.github.com/users/ShivanshuPurohit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As the error says, you've applied a `.detach()` method to a model output, which are *always* tuples. You can check the [documentation here](https://huggingface.co/transformers/main_classes/output.html).\r\n\r\nYou probably want the first output of your model so change this line:\r\n```py\r\nbatch_output = model(batch_data, token_type_ids=None, attention_mask=batch_masks) \r\n```\r\nto\r\n```py\r\nbatch_output = model(batch_data, token_type_ids=None, attention_mask=batch_masks)[0]\r\n```" ]
1,602
1,602
1,602
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: bert-base-uncased - Platform: pytorch - Python version: 3.6 - PyTorch version (GPU?): 1.6.0 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik documentation: @sgugger --> ## Information Model I am using (Bert,): The problem arises when using: * [ ] the official example scripts: (give details below) * [.] my own modified scripts: (give details below) I am using bert for keyphrase extraction, based on the allenai-scibert code. When evaluating the model, ` for _ in range(params.eval_steps): # fetch the next evaluation batch batch_data, batch_tags = next(data_iterator) batch_masks = batch_data.gt(0) loss, _ = model(batch_data, token_type_ids=None, attention_mask=batch_masks, labels=batch_tags) if params.n_gpu > 1 and params.multi_gpu: loss = loss.mean() loss_avg.update(loss.item()) batch_output = model(batch_data, token_type_ids=None, attention_mask=batch_masks) # shape: (batch_size, max_len, num_labels) batch_output = batch_output.detach().cpu().numpy() batch_tags = batch_tags.to('cpu').numpy() pred_tags.extend([idx2tag.get(idx) for indices in np.argmax(batch_output, axis=2) for idx in indices]) true_tags.extend([idx2tag.get(idx) for indices in batch_tags for idx in indices]) assert len(pred_tags) == len(true_tags)` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [.] my own task or dataset: (give details below) SemEval 2017, task 1 ## To reproduce Steps to reproduce the behavior: 1. run the train.py script from [this repo](https://github.com/pranav-ust/BERT-keyphrase-extraction), but with `transformers` library instead of `pytorch-pretrained-bert` 2. The script gives the error: `Traceback (most recent call last): File "train.py", line 219, in <module> train_and_evaluate(model, train_data, val_data, optimizer, scheduler, params, args.model_dir, args.restore_file) File "train.py", line 106, in train_and_evaluate train_metrics = evaluate(model, train_data_iterator, params, mark='Train') File "/content/BERT-keyphrase-extraction/evaluate.py", line 54, in evaluate batch_output = batch_output.detach().cpu().numpy() AttributeError: 'tuple' object has no attribute 'detach'` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model should continue training after the first epoch
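For completeness, a short sketch of the fix suggested in the comment above — index into the output tuple before detaching. Variable names (`model`, `batch_data`, `batch_masks`) follow the snippet in this report, so the fragment assumes they are already defined:

```python
import numpy as np

# Model outputs are tuples; grab the first element (the logits) before calling .detach().
outputs = model(batch_data, token_type_ids=None, attention_mask=batch_masks)
logits = outputs[0]                             # shape: (batch_size, max_len, num_labels)
batch_output = logits.detach().cpu().numpy()
pred_ids = np.argmax(batch_output, axis=2)      # predicted tag indices per token
```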
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7760/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7759
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7759/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7759/comments
https://api.github.com/repos/huggingface/transformers/issues/7759/events
https://github.com/huggingface/transformers/pull/7759
720,338,558
MDExOlB1bGxSZXF1ZXN0NTAyMzkwNTQ0
7,759
Adding optional trial argument to model_init
{ "login": "madlag", "id": 272253, "node_id": "MDQ6VXNlcjI3MjI1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madlag", "html_url": "https://github.com/madlag", "followers_url": "https://api.github.com/users/madlag/followers", "following_url": "https://api.github.com/users/madlag/following{/other_user}", "gists_url": "https://api.github.com/users/madlag/gists{/gist_id}", "starred_url": "https://api.github.com/users/madlag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/madlag/subscriptions", "organizations_url": "https://api.github.com/users/madlag/orgs", "repos_url": "https://api.github.com/users/madlag/repos", "events_url": "https://api.github.com/users/madlag/events{/privacy}", "received_events_url": "https://api.github.com/users/madlag/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# Model structure optimization This PR proposes to add the "trial" argument to the model_init function when using `trainer.hyperparameter_search`. It's backward compatible using six to check the number of arguments of model_init through Python reflection. ``` def model_init(trial): if trial != None: layer_count = trial.suggest_int("layer_count", 2, 4) else: layer_count = 2 return MyModel(layer_count) trainer = Trainer( ... model_init=model_init, ... ) trainer.hyperparameter_search(direction="maximize") ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7759/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7759/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7759", "html_url": "https://github.com/huggingface/transformers/pull/7759", "diff_url": "https://github.com/huggingface/transformers/pull/7759.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7759.patch", "merged_at": 1602601623000 }
https://api.github.com/repos/huggingface/transformers/issues/7758
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7758/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7758/comments
https://api.github.com/repos/huggingface/transformers/issues/7758/events
https://github.com/huggingface/transformers/pull/7758
720,286,775
MDExOlB1bGxSZXF1ZXN0NTAyMzQ0MTI5
7,758
fixed lots of typos.
{ "login": "kwsp", "id": 30088608, "node_id": "MDQ6VXNlcjMwMDg4NjA4", "avatar_url": "https://avatars.githubusercontent.com/u/30088608?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kwsp", "html_url": "https://github.com/kwsp", "followers_url": "https://api.github.com/users/kwsp/followers", "following_url": "https://api.github.com/users/kwsp/following{/other_user}", "gists_url": "https://api.github.com/users/kwsp/gists{/gist_id}", "starred_url": "https://api.github.com/users/kwsp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kwsp/subscriptions", "organizations_url": "https://api.github.com/users/kwsp/orgs", "repos_url": "https://api.github.com/users/kwsp/repos", "events_url": "https://api.github.com/users/kwsp/events{/privacy}", "received_events_url": "https://api.github.com/users/kwsp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixed lots of typos in the documentation. For the record, I used a chrome spell check extension to find common typos and used vim + ripgrep + fzf to do bulk corrections. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7758/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7758", "html_url": "https://github.com/huggingface/transformers/pull/7758", "diff_url": "https://github.com/huggingface/transformers/pull/7758.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7758.patch", "merged_at": 1602597620000 }
https://api.github.com/repos/huggingface/transformers/issues/7757
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7757/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7757/comments
https://api.github.com/repos/huggingface/transformers/issues/7757/events
https://github.com/huggingface/transformers/issues/7757
720,214,790
MDU6SXNzdWU3MjAyMTQ3OTA=
7,757
Help with finetuning mBART on an unseen language
{ "login": "laibamehnaz", "id": 36405283, "node_id": "MDQ6VXNlcjM2NDA1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/laibamehnaz", "html_url": "https://github.com/laibamehnaz", "followers_url": "https://api.github.com/users/laibamehnaz/followers", "following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}", "gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions", "organizations_url": "https://api.github.com/users/laibamehnaz/orgs", "repos_url": "https://api.github.com/users/laibamehnaz/repos", "events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}", "received_events_url": "https://api.github.com/users/laibamehnaz/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @laibamehnaz , that's a great question for forum\r\nhttps://discuss.huggingface.co/.\r\n\r\nCould you post it there, someone might have tried it and the forum is better to discuss such questions :) ", "Great. I will post it there. Thanks a lot.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
I wanted to know how we could finetune mBART on a summarization task in a language other than English. Also, how can we finetune mBART on a translation task where one of the languages is not present in the language code list that mBART has been trained on? Appreciate any help!! Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7757/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7756
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7756/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7756/comments
https://api.github.com/repos/huggingface/transformers/issues/7756/events
https://github.com/huggingface/transformers/pull/7756
720,071,168
MDExOlB1bGxSZXF1ZXN0NTAyMTU0OTcz
7,756
[Rag] Fix loading of pretrained Rag Tokenizer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Got 2 offline yes! from @thomwolf and @LysandreJik => merging." ]
1,602
1,602
1,602
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #7710, #7690 A bug was introduced in https://github.com/huggingface/transformers/pull/7141#discussion_r503800703 that changes the loading of pre-trained special tokens files. This PR more or less reverts the critical changes so that RAG works again. This can be verified by running this test: ``` RUN_SLOW=1 pytest tests/test_modeling_rag.py::RagModelIntegrationTests::test_rag_sequence_generate_batch ``` A new RAG tokenizer test was added. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? Running all slow tests to check that nothing breaks. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> @thomwolf @LysandreJik @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7756/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7756", "html_url": "https://github.com/huggingface/transformers/pull/7756", "diff_url": "https://github.com/huggingface/transformers/pull/7756.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7756.patch", "merged_at": 1602592463000 }
https://api.github.com/repos/huggingface/transformers/issues/7755
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7755/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7755/comments
https://api.github.com/repos/huggingface/transformers/issues/7755/events
https://github.com/huggingface/transformers/issues/7755
720,061,760
MDU6SXNzdWU3MjAwNjE3NjA=
7,755
HfArgumentParser not support optional bools
{ "login": "huhk-sysu", "id": 14769033, "node_id": "MDQ6VXNlcjE0NzY5MDMz", "avatar_url": "https://avatars.githubusercontent.com/u/14769033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/huhk-sysu", "html_url": "https://github.com/huhk-sysu", "followers_url": "https://api.github.com/users/huhk-sysu/followers", "following_url": "https://api.github.com/users/huhk-sysu/following{/other_user}", "gists_url": "https://api.github.com/users/huhk-sysu/gists{/gist_id}", "starred_url": "https://api.github.com/users/huhk-sysu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/huhk-sysu/subscriptions", "organizations_url": "https://api.github.com/users/huhk-sysu/orgs", "repos_url": "https://api.github.com/users/huhk-sysu/repos", "events_url": "https://api.github.com/users/huhk-sysu/events{/privacy}", "received_events_url": "https://api.github.com/users/huhk-sysu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes optional bools are not supported by the `HFArgumentParser` because it wants to either `store_true` or `store_false` them. I have fixed the `TrainingArguments` to work around that (e.g., the defaults that come from the parser are okay) but when I have a bit of time I'll try to fix this better.", "Thanks. I want to report a related issue:\r\n\r\nFor the same reason above, the argument `evaluate_during_training` would be set to either `store_true` or `store_false`, so its default value of `None` doesn't work.\r\nhttps://github.com/huggingface/transformers/blob/1ba08dc221ff101a751c16462c3a256d726e7c85/src/transformers/training_args.py#L188-L191\r\n\r\nAnd this may lead to another problem:\r\nhttps://github.com/huggingface/transformers/blob/1ba08dc221ff101a751c16462c3a256d726e7c85/src/transformers/training_args.py#L326-L335\r\nLine 326 will always be `True`, and the `EvaluationStrategy` can only be chosen from `STEPS` and `NO`, but without `EPOCH`.\r\n\r\nThe final result is(I remove unimportant args):\r\n`python main.py --evaluation_strategy epoch`\r\nmay lead to `evaluation_strategy=EvaluationStrategy.NO`\r\nwhile\r\n`python main.py --evaluate_during_training --evaluation_strategy epoch`\r\nmay lead to `evaluation_strategy=EvaluationStrategy.STEPS`" ]
1,602
1,602
1,602
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-3.10.0 - Python version: 3.7.3 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) Part of the code are copied from [examples/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/v3.3.1/examples/text-classification/run_glue.py) ## To reproduce Steps to reproduce the behavior: 1. Save the following code as 'test.py' then `python test.py --output_dir 123` ```python from transformers import TrainingArguments, HfArgumentParser parser = HfArgumentParser(TrainingArguments) training_args, = parser.parse_args_into_dataclasses() # All of the following fields has a default value of `None` print(training_args.greater_is_better) # type=Optional[bool] print(training_args.disable_tqdm) # type=Optional[bool] print(training_args.metric_for_best_model) # type=Optional[str] print(training_args.save_total_limit) # type=Optional[int] ``` 2. Got following output: False False None None Notice the first two fields' default values are changed. They should be remain `None` as long as I don't pass the args from the cli. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior It should be the following output: None None None None Because all of these four fields are **Optional** and their default values are `None` (See [training_args.py](https://github.com/huggingface/transformers/blob/v3.3.1/src/transformers/training_args.py)). Also, in https://github.com/huggingface/transformers/blob/1ba08dc221ff101a751c16462c3a256d726e7c85/src/transformers/training_args.py#L324 https://github.com/huggingface/transformers/blob/1ba08dc221ff101a751c16462c3a256d726e7c85/src/transformers/training_args.py#L342 both `greater_is_better` and `disable_tqdm` are detected whether they are None, which means the author intends them to be None when they are unspecific in passing args. This may caused by https://github.com/huggingface/transformers/blob/1ba08dc221ff101a751c16462c3a256d726e7c85/src/transformers/hf_argparser.py#L67-L68 Since they have a type of `Optional[bool]`,their `kwargs["action"]` is set to `store_true`. So when I don't pass the args from the cli, they have a default value of `False` instead of `None`. I'm not sure if this is a intended design or a mistake, sorry for disturbing. 
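A minimal standalone reproduction of the reported behaviour, using a small dataclass instead of the full `TrainingArguments` (the dataclass and field names here are made up for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class DummyArgs:
    maybe_flag: Optional[bool] = field(default=None)  # affected: parsed with a store_true action
    maybe_name: Optional[str] = field(default=None)   # not affected


parser = HfArgumentParser(DummyArgs)
(args,) = parser.parse_args_into_dataclasses(args=[])
print(args.maybe_flag)  # False under the behaviour described above, instead of the default None
print(args.maybe_name)  # None
```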
<!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7755/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7754
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7754/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7754/comments
https://api.github.com/repos/huggingface/transformers/issues/7754/events
https://github.com/huggingface/transformers/pull/7754
720,014,630
MDExOlB1bGxSZXF1ZXN0NTAyMTA5Mzc2
7,754
ElectraTokenizerFast
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
MEMBER
null
Before the fix, when loading an `ElectraTokenizerFast`: ```py from transformers import ElectraTokenizerFast tokenizer = ElectraTokenizerFast.from_pretrained("ahotrod/electra_large_discriminator_squad2_512") ``` ``` Traceback (most recent call last): File "/Users/jik/Library/Application Support/JetBrains/PyCharm2020.2/scratches/7735.py", line 3, in <module> tokenizer = ElectraTokenizerFast.from_pretrained("ahotrod/electra_large_discriminator_squad2_512") File "/Users/jik/Workspaces/python/transformers/src/transformers/tokenization_utils_base.py", line 1555, in from_pretrained resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs File "/Users/jik/Workspaces/python/transformers/src/transformers/tokenization_utils_base.py", line 1623, in _from_pretrained tokenizer = cls(*init_inputs, **init_kwargs) File "/Users/jik/Workspaces/python/transformers/src/transformers/tokenization_bert.py", line 641, in __init__ **kwargs, File "/Users/jik/Workspaces/python/transformers/src/transformers/tokenization_utils_fast.py", line 89, in __init__ self._tokenizer = convert_slow_tokenizer(slow_tokenizer) File "/Users/jik/Workspaces/python/transformers/src/transformers/convert_slow_tokenizer.py", line 565, in convert_slow_tokenizer converter_class = CONVERTERS[transformer_tokenizer.__class__.__name__] KeyError: 'ElectraTokenizer' ```
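As a side note for anyone hitting this before the fix, a possible workaround sketch. It assumes the ELECTRA checkpoint ships a BERT-style WordPiece `vocab.txt` (true for the official checkpoints), so the BERT fast tokenizer class can load it:

```python
# Hedged workaround: ELECTRA uses the same WordPiece vocab format as BERT, so the
# BERT fast tokenizer class can usually load an ELECTRA checkpoint's vocab.txt.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("ahotrod/electra_large_discriminator_squad2_512")
print(tokenizer("Hello world")["input_ids"])
```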
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7754/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7754/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7754", "html_url": "https://github.com/huggingface/transformers/pull/7754", "diff_url": "https://github.com/huggingface/transformers/pull/7754.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7754.patch", "merged_at": 1602579041000 }
https://api.github.com/repos/huggingface/transformers/issues/7753
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7753/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7753/comments
https://api.github.com/repos/huggingface/transformers/issues/7753/events
https://github.com/huggingface/transformers/pull/7753
719,942,758
MDExOlB1bGxSZXF1ZXN0NTAyMDQ5NjY5
7,753
New TF model design
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your comments! I will try to answer as clearly as possible.\r\n\r\n### From embedding building to `tf.keras.layers.Embedding`\r\n> Would we now be relying on the LM head weights being in the checkpoint?\r\n\r\nExactly, also another difference is to have the `word_embeddings` weights initialized at the model instanciation instead of model building (after a first call). The weights are anyway saved in the checkpoints and shared with the layers that needs to have access to it. You can also see that with the `resize_token_embeddings` methods. The only test with a decoder we have for BERT is `create_and_check_bert_lm_head` and is passing. Do you see any other test we can apply to check if the sharing part works as expected?\r\n\r\n### Dense -> DenseEinsum\r\n> Does the change from tf.keras.layers.Dense to tf.keras.layers.experimental.EinsumDense imply a breaking change or some magic that we must do to ensure that the weights get correctly loaded?\r\n\r\nNo breaking change at all. Old model format can be loaded in the new one and vice versa. And the magic is as simple as a single `reshape` call because both must have compliant shapes:\r\n\r\n```\r\nif K.int_shape(symbolic_weight) != saved_weight_value.shape:\r\n try:\r\n array = np.reshape(saved_weight_value, K.int_shape(symbolic_weight))\r\n except AssertionError as e:\r\n e.args += (K.int_shape(symbolic_weight), saved_weight_value.shape)\r\n raise e\r\n else:\r\n array = saved_weight_value\r\n```\r\n\r\nAnd this is already integrated in the current release https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L290\r\n\r\n> Is that change the main one that unlocks better performance?\r\n\r\nExactly! Only that change to unlock serving performance.\r\n\r\n### Tuples -> tf.TensorArray\r\n> Why is that change necessary? Does it give better performance? Is it necessary for graph mode?\r\n\r\nAs you said it is necessary to be able to use the `output_hidden_states` and `output_attentions` boolean parameters in graph mode. Otherwise we have to disable them 100% of the time in graph mode and it is pitty to remove that feature. I don't think it gives better performance but thanks to this we can have a variable output size. Second good reason to adopt this feature is for serving, if a SavedModel has been created with `output_attentions=True` the model will give you 12 outputs (one for each element of the tuple) instead of just one as it is the case with `tf.TensorArray`. At the end it is not really a breaking change as a tensor can be used as a tuple, you can test this by yourself simply by doing:\r\n\r\n```\r\nimport tensorflow as tf\r\nt = tf.constant([[1,2],[3,4]])\r\na, b = t\r\n```\r\n\r\n### Joining the `nsp` + `mlm` in one `cls`\r\n> This is cool as it's more similar to what we do in PyTorch! However, you're now handling the names of the weights directly in the `modeling_tf_utils.py` file. I cannot imagine that scales as more models are converted to this implementation?\r\n\r\nExactly, everything will rely on the `load_tf_weights()` function in `modeling_tf_utils.py`. I didn't have major issues to handle that case, it was quite simple, at least in BERT. Let's see how it goes for the others.", "Ok @sgugger I should have addressed all your comments.", "Ok @patrickvonplaten @sgugger @LysandreJik I should have done the changes you proposed. May I proceed the same way for the other models? 
Or you want me to do other updates?", "I gave up to filter out the empty tensors from the output, it was too complex to implement. So now we will always have 3 outputs corresponding to `attentions`, `hidden_states` and `logits`. But if `output_attentions` or `output_hidden_states` equals `False` they will be empty tensors (first dim equals to 0).", "Ok, now BERT looks exactly like I expected. Properly optimized + cleaner code base + full compliance with AutoGraph. Next step is to apply the same changes to the other models.", "> flip the return_dict=True switch (I don't think this should be done here - as it affects all PT models as well)\r\n\r\nThis flip is part of the improvements, if this flip is not here I can remove basically almost half of the improvements because the model will not be able to run properly in graph mode.\r\n\r\n> discuss whether we want to do the parameter renaming\r\n\r\nOk. But for now as far as I have seen, it concerns only BERT, but I still need to update other models I did not updated all of them, just few for now (wait the big next push).\r\n\r\nI agree that this PR concerns already huge changes with only updating BERT, and indeed to do one PR per model would be easier to handle. Nevertheless, I already started to update the other models, so I will revert locally and create one branch for each on my fork.", "@patrickvonplaten I removed the naming updates in order to better discuss this in a later PR.\r\n\r\nI still have work to do on the model part, mainly still two things:\r\n\r\n- Being able to properly read the model summary\r\n- Being able to properly build a graph visualization\r\n\r\nSaying this because usual subclass models cannot be parsed by Keras internals ([see this issue](https://github.com/tensorflow/tensorflow/issues/31647) to have a better explanation of the problem)", "Thanks @sgugger for your comments.\r\n\r\n> On the comments related to this PR specifically now, the big problem is that we cannot change the global switch of return_dict now. This will break changes in TF, PyTorch and Flax fopr all users of the library. Depending on the time needed to finish the TF push, there might be intermediate releases before v4 where we can't afford to have that breaking change. There is a crude way to have that switch set to True for the TFBert models just now I suggested in the comments and there is probably some clever way with a private flag that could allow us to add the TF models one by one with a new default without breaking anything. I can look more into it if you want.\r\n\r\nI don't expect this PR to be released before the v4.0 (to appear in an intermediate release). All would like that all the TF improvements comes directly in once in the next major release. Then I'm acting like if it was the next major release, thus the `return_dict=True` by default in the config. But for sure I can open a PR for each model, I agree that it will be easier to handle. Also nothing prevent to make this change in a specific PR and I will rebase this one on it, this will be the same.\r\n\r\n> Another comment is that we should add some more tests of forward/backward compatibility with the new loading function, just to be absolutely sure we don't break anything.\r\n\r\nAny idea of what are the other tests I can add, for this? For me being able to load \"old\" and \"new\" model in same time is enough. 
I will be happy to have your opinion on how to improve this :)", "@sgugger @patrickvonplaten @LysandreJik I have revert the conflicting changes in order to move the discussion into another PR (the general `return_dict` update and the layers naming).\r\n\r\n@sgugger I have put a warning as you suggested. But please let me know if you have a better one, I'm not really happy of the one I put.\r\n\r\nIn general, do you have any other comments on this? So I can move to the other models in order to apply the same updates if we all agree of these new changes.", "Once you're happy with the state of that PR, could you close this PR and open a new one explaining the exact changes? This is a very big thread that takes a while to load, and it's hard to understand the state this current PR is in. Thanks!", "You are totally right and this was my plan. Since last week I'm shrinking this PR to get only the optimization part and put the other improvements into separate PRs. Once done, I will reopen a clean one 👍 ", "Fantastic! Thanks @jplu!" ]
1,602
1,612
1,612
CONTRIBUTOR
null
# What does this PR do? This PR aims to improve the TensorFlow code base of transformers. For the sake of simplicity the changes are made only on the BERT model, but the new features can be applied to all the others. Among some bug fixes, the PR brings the following main new features: 1. It is now possible to train an LM model from scratch (see this [notebook](https://colab.research.google.com/drive/1As9iz2_2eQp1Ct8trxRG3ScNfxeaVAH2?usp=sharing) as an example). 2. The default outputs of the models are now dictionaries. Tuples can only be returned in eager execution; otherwise a warning message is displayed saying that only dictionaries are allowed in graph mode. This update fixes two issues: the first one in graph mode, where a layer cannot return outputs of different sizes, and a second one where an output cannot have a `None` value. 3. Better input handling. The inputs of each model and of the main layer are now parsed with a single generic function, bringing more robust parsing and better error handling in case of wrong input. This fixes an issue when the input was a list of symbolic inputs (i.e. `tf.keras.layers.Input`). 4. TensorFlow models now look much more similar to their PyTorch counterparts, making it easier for users to switch from a PyTorch model to its TensorFlow implementation and vice versa. 5. Old and new model implementations can coexist in the library, making this new implementation 100% backward compatible, including the tests.
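To make point 2 above concrete, a hedged sketch of calling BERT inside a `tf.function`, where under this design the output would be a dictionary (the `"logits"` key is assumed to mirror the PyTorch output classes; this is not a confirmed detail of the PR):

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")


@tf.function  # graph mode: per point 2 above, the model returns a dictionary here
def forward(input_ids, attention_mask):
    return model(input_ids, attention_mask=attention_mask)


inputs = tokenizer("Hello [MASK]!", return_tensors="tf")
outputs = forward(inputs["input_ids"], inputs["attention_mask"])
print(outputs["logits"].shape)  # key name assumed; analogous to MaskedLMOutput.logits
```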
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7753/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7753", "html_url": "https://github.com/huggingface/transformers/pull/7753", "diff_url": "https://github.com/huggingface/transformers/pull/7753.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7753.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7752
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7752/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7752/comments
https://api.github.com/repos/huggingface/transformers/issues/7752/events
https://github.com/huggingface/transformers/pull/7752
719,925,770
MDExOlB1bGxSZXF1ZXN0NTAyMDM1MDMx
7,752
Model Card
{ "login": "nreimers", "id": 10706961, "node_id": "MDQ6VXNlcjEwNzA2OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/10706961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nreimers", "html_url": "https://github.com/nreimers", "followers_url": "https://api.github.com/users/nreimers/followers", "following_url": "https://api.github.com/users/nreimers/following{/other_user}", "gists_url": "https://api.github.com/users/nreimers/gists{/gist_id}", "starred_url": "https://api.github.com/users/nreimers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nreimers/subscriptions", "organizations_url": "https://api.github.com/users/nreimers/orgs", "repos_url": "https://api.github.com/users/nreimers/repos", "events_url": "https://api.github.com/users/nreimers/events{/privacy}", "received_events_url": "https://api.github.com/users/nreimers/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Looks great!" ]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? New model card for model uploaded to https://huggingface.co/sentence-transformers ## Who can review? Model Cards: @julien-c
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7752/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7752", "html_url": "https://github.com/huggingface/transformers/pull/7752", "diff_url": "https://github.com/huggingface/transformers/pull/7752.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7752.patch", "merged_at": 1602696659000 }
https://api.github.com/repos/huggingface/transformers/issues/7751
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7751/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7751/comments
https://api.github.com/repos/huggingface/transformers/issues/7751/events
https://github.com/huggingface/transformers/issues/7751
719,858,490
MDU6SXNzdWU3MTk4NTg0OTA=
7,751
Unicode issue with tokenizer.decode()
{ "login": "kelvin-jiang", "id": 20145768, "node_id": "MDQ6VXNlcjIwMTQ1NzY4", "avatar_url": "https://avatars.githubusercontent.com/u/20145768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kelvin-jiang", "html_url": "https://github.com/kelvin-jiang", "followers_url": "https://api.github.com/users/kelvin-jiang/followers", "following_url": "https://api.github.com/users/kelvin-jiang/following{/other_user}", "gists_url": "https://api.github.com/users/kelvin-jiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/kelvin-jiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kelvin-jiang/subscriptions", "organizations_url": "https://api.github.com/users/kelvin-jiang/orgs", "repos_url": "https://api.github.com/users/kelvin-jiang/repos", "events_url": "https://api.github.com/users/kelvin-jiang/events{/privacy}", "received_events_url": "https://api.github.com/users/kelvin-jiang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is not a bug, but a lack of vocabulary diversity in T5's tokenizer. T5's tokenizer was not trained on a corpus containing that character, and therefore cannot encode it: it encodes it to an unknown token which is represented by `⁇`. \r\n\r\nYou can try using other characters that the tokenizer doesn't know how to process, for example emojis:\r\n\r\n```py\r\nfrom transformers import T5Tokenizer\r\ntokenizer = T5Tokenizer.from_pretrained('t5-3b')\r\nsent = 'Luis 😃 Fonsi. Luis Alfonso Rodríguez López-Cepero, more commonly known by his stage name Luis Fonsi, (born April 15, 1978) is a Puerto Rican singer, songwriter and actor.'\r\nprint(tokenizer.decode(tokenizer.encode(sent)))\r\n\r\n# Luis ⁇ Fonsi. Luis Alfonso Rodr ⁇ guez López-Cepero, more commonly known by his stage name Luis Fonsi, (born April 15, 1978) is a Puerto Rican singer, songwriter and actor.\r\n```\r\n\r\nYou can use the `tokenize` + `convert_tokens_to_string` because the sequence has never been converted to IDs, only to tokens:\r\n```py\r\nprint(tokenizer.tokenize(sent))\r\n\r\n# ['▁Lu', 'is', '▁', '😃', '▁F', 'on', 's', 'i', '.', '▁Lu', 'is', '▁Al', 'f', 'on', 's', 'o', '▁Rod', 'r', 'í', 'gu', 'ez', '▁L', 'ó', 'p', 'ez', '-', 'C', 'e', 'per', 'o', ',', '▁more', '▁commonly', '▁known', '▁by', '▁his', '▁stage', '▁name', '▁Lu', 'is', '▁F', 'on', 's', 'i', ',', '▁(', 'born', '▁April', '▁15,', '▁1978', ')', '▁is', '▁', 'a', '▁Puerto', '▁Rica', 'n', '▁singer', ',', '▁', 'songwriter', '▁and', '▁actor', '.']\r\n```\r\n\r\nIf your dataset contains a lot of such characters, you should think about [adding these to the tokenizer's vocabulary.](https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens)" ]
1,602
1,602
1,602
NONE
null
## Environment info - `transformers` version: 3.3.1 - Platform: Ubuntu 18.04.1 LTS - Python version: 3.6.9 - PyTorch version (GPU?): 1.5.0 (No) - Tensorflow version (GPU?): 1.14.0 (No) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Since the issue regards tokenizers, tagging @mfuntowicz. ## Information Model I am using (Bert, XLNet ...): T5 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run this (I'm running in a Python shell, but it's reproducible in a script) ```python >>> from transformers import T5Tokenizer >>> tokenizer = T5Tokenizer.from_pretrained('t5-3b') >>> sent = 'Luis Fonsi. Luis Alfonso Rodríguez López-Cepero, more commonly known by his stage name Luis Fonsi, (born April 15, 1978) is a Puerto Rican singer, songwriter and actor.' >>> tokenizer.decode(tokenizer.encode(sent)) 'Luis Fonsi. Luis Alfonso Rodr ⁇ guez López-Cepero, more commonly known by his stage name Luis Fonsi, (born April 15, 1978) is a Puerto Rican singer, songwriter and actor.' ``` The "í" character turns into "⁇", while other unicode characters like "ó" come out fine. ## Expected behavior If I use a different set of functions that should end up with the same result, I get the expected: ```python >>> tokenizer.convert_tokens_to_string(tokenizer.tokenize(sent)) 'Luis Fonsi. Luis Alfonso Rodríguez López-Cepero, more commonly known by his stage name Luis Fonsi, (born April 15, 1978) is a Puerto Rican singer, songwriter and actor.' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7751/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7750
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7750/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7750/comments
https://api.github.com/repos/huggingface/transformers/issues/7750/events
https://github.com/huggingface/transformers/pull/7750
719,834,801
MDExOlB1bGxSZXF1ZXN0NTAxOTU4NjU3
7,750
Update pyarrow to meet datasets 1.1.2
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The `datasets` library already has a requirement on `pyarrow>=0.17.1`: https://github.com/huggingface/datasets/blob/master/setup.py#L68, so this line should not be necessary!", "OH whoops, this must be a colab thing since it already has a `pyarrow` version installed, so what I needed to do was run the pip install with the [`--ignore-installed` flag](https://stackoverflow.com/questions/24764549/upgrade-python-packages-from-requirements-txt-using-pip-command)." ]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? Update pyarrow to meet the requirement of y'alls awesome datasets 1.1.2 library for running the examples (tested on the finetune_tiny_bart.sh seq2seq example). You can see the colab I tested on here: https://colab.research.google.com/drive/12XSQoFRlpXLEd_tfvwC5wUVYJZUPKFwE?usp=sharing Fixes #7691 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Issue #7691 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sshleifer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7750/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7750", "html_url": "https://github.com/huggingface/transformers/pull/7750", "diff_url": "https://github.com/huggingface/transformers/pull/7750.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7750.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7749
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7749/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7749/comments
https://api.github.com/repos/huggingface/transformers/issues/7749/events
https://github.com/huggingface/transformers/issues/7749
719,809,230
MDU6SXNzdWU3MTk4MDkyMzA=
7,749
Does bart need to cache prev_key_padding_mask?
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[]
1,602
1,603
1,603
CONTRIBUTOR
null
This is only be used for self attention, presumably to remember to ignore if the last token generated was a pad. But if the last token generated was a pad, we are done with the hypothesis anyways so may not need this. If this is unnec it will save 5-10 lines of annoying stuff.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7749/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7749/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7748
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7748/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7748/comments
https://api.github.com/repos/huggingface/transformers/issues/7748/events
https://github.com/huggingface/transformers/pull/7748
719,797,901
MDExOlB1bGxSZXF1ZXN0NTAxOTI5MDc2
7,748
Create README.md
{ "login": "sarahlintang", "id": 6174505, "node_id": "MDQ6VXNlcjYxNzQ1MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6174505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahlintang", "html_url": "https://github.com/sarahlintang", "followers_url": "https://api.github.com/users/sarahlintang/followers", "following_url": "https://api.github.com/users/sarahlintang/following{/other_user}", "gists_url": "https://api.github.com/users/sarahlintang/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahlintang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahlintang/subscriptions", "organizations_url": "https://api.github.com/users/sarahlintang/orgs", "repos_url": "https://api.github.com/users/sarahlintang/repos", "events_url": "https://api.github.com/users/sarahlintang/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahlintang/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thank you! You can add more metadata and/or links to eval results if necessary." ]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7748/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7748", "html_url": "https://github.com/huggingface/transformers/pull/7748", "diff_url": "https://github.com/huggingface/transformers/pull/7748.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7748.patch", "merged_at": 1602695431000 }
https://api.github.com/repos/huggingface/transformers/issues/7747
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7747/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7747/comments
https://api.github.com/repos/huggingface/transformers/issues/7747/events
https://github.com/huggingface/transformers/issues/7747
719,794,427
MDU6SXNzdWU3MTk3OTQ0Mjc=
7,747
BertTokenizer meet multilingual corpus, it fails to work.@mfuntowicz
{ "login": "qhd1996", "id": 24516022, "node_id": "MDQ6VXNlcjI0NTE2MDIy", "avatar_url": "https://avatars.githubusercontent.com/u/24516022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qhd1996", "html_url": "https://github.com/qhd1996", "followers_url": "https://api.github.com/users/qhd1996/followers", "following_url": "https://api.github.com/users/qhd1996/following{/other_user}", "gists_url": "https://api.github.com/users/qhd1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/qhd1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qhd1996/subscriptions", "organizations_url": "https://api.github.com/users/qhd1996/orgs", "repos_url": "https://api.github.com/users/qhd1996/repos", "events_url": "https://api.github.com/users/qhd1996/events{/privacy}", "received_events_url": "https://api.github.com/users/qhd1996/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7747/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7746
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7746/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7746/comments
https://api.github.com/repos/huggingface/transformers/issues/7746/events
https://github.com/huggingface/transformers/issues/7746
719,759,205
MDU6SXNzdWU3MTk3NTkyMDU=
7,746
Keep getting the same `Target 1 is out of bounds` error with `LongformerForMultipleChoice`
{ "login": "h56cho", "id": 52889259, "node_id": "MDQ6VXNlcjUyODg5MjU5", "avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h56cho", "html_url": "https://github.com/h56cho", "followers_url": "https://api.github.com/users/h56cho/followers", "following_url": "https://api.github.com/users/h56cho/following{/other_user}", "gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}", "starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h56cho/subscriptions", "organizations_url": "https://api.github.com/users/h56cho/orgs", "repos_url": "https://api.github.com/users/h56cho/repos", "events_url": "https://api.github.com/users/h56cho/events{/privacy}", "received_events_url": "https://api.github.com/users/h56cho/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey there. Please just reply on the previous issue you posted, rather than opening a new issue. \r\n\r\nI made a mistake in my previous answer, you shouldn't `unsqueeze` the answer, because it's just a tensor of shape (batch_size,).\r\n\r\nI've created a notebook that illustrates how to use BertForMultipleChoice (LongformerForMultipleChoice would be the same): https://colab.research.google.com/drive/1mWx3R7-1lPldJqH26d3fnoyZX6Qa4IpV?usp=sharing" ]
1,602
1,602
1,602
NONE
null
Hello, I am a Transformer user who posted the similar question yesterday. I am trying to use `LongformerForMultipleChoice` model, I've updated my code according to the answer that was provided. However, I am still getting the same error `Target out of bounds`. The correct answers are coded correctly, like my multiple-choice answers works very well with the `GPT2DoubleHeadsModel`. I am not sure why I am keep getting this error: ```python # import the pre-trained HuggingFace Longformer tokenizer. longformer_tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') # get the pre-trained HuggingFace Longformer best_model_longformer = LongformerForMultipleChoice.from_pretrained('allenai/longformer-base-4096', output_hidden_states = True) # my multiple choice question has 4 options. question_list = [main_question, main_question, main_question, main_question] options_list = [option1, option2, option3, option4] # unsqueeze the answer mc_labels = torch.tensor(my_answer).unsqueeze(0) encoded_dict = longformer_tokenizer(question_list, options_list, return_tensors = 'pt', add_prefix_space = True, padding = True) input_hidden_state = best_model_longformer( **{k: v.unsqueeze(0) for k,v in encoded_dict.items()}, labels = mc_labels, return_dict=True)[2][0][:,:,:].detach() ``` and I am getting the error below: ``` Traceback (most recent call last): File "SEED_125_V20_15_LONGFORMER.py", line 427, in <module> main_function('/home/ec2-user/G1G2.txt','/home/ec2-user/G1G2_answer_num.txt', num_iter) File "SEED_125_V20_15_LONGFORMER.py", line 389, in main_function best_model_longformer) File "SEED_125_V20_15_LONGFORMER.py", line 198, in fill_MC_loss_accuracy_tensor input_hidden_state = best_model_longformer(**{k: v.unsqueeze(0) for k,v in encoded_dict.items()}, labels = mc_labels, return_dict = True)[2][0][:,:,:].detach() File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/transformers/modeling_longformer.py", line 1808, in forward loss = loss_fct(reshaped_logits, labels) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 948, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2422, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/home/ec2-user/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2218, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) IndexError: Target 1 is out of bounds. ``` When I do instead `mc_labels = torch.tensor([my_answer]).unsqueeze(0)` (note the square brackets around `my_answer`), another error occurs, the error is something like `cannot process multiple answers`. How can I solve this issue? Thank you,
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7746/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7745
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7745/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7745/comments
https://api.github.com/repos/huggingface/transformers/issues/7745/events
https://github.com/huggingface/transformers/issues/7745
719,691,929
MDU6SXNzdWU3MTk2OTE5Mjk=
7,745
Attention masks are ignored when using model.generate() in batch setting for GPT-2
{ "login": "rohit497", "id": 16389162, "node_id": "MDQ6VXNlcjE2Mzg5MTYy", "avatar_url": "https://avatars.githubusercontent.com/u/16389162?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rohit497", "html_url": "https://github.com/rohit497", "followers_url": "https://api.github.com/users/rohit497/followers", "following_url": "https://api.github.com/users/rohit497/following{/other_user}", "gists_url": "https://api.github.com/users/rohit497/gists{/gist_id}", "starred_url": "https://api.github.com/users/rohit497/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohit497/subscriptions", "organizations_url": "https://api.github.com/users/rohit497/orgs", "repos_url": "https://api.github.com/users/rohit497/repos", "events_url": "https://api.github.com/users/rohit497/events{/privacy}", "received_events_url": "https://api.github.com/users/rohit497/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@patrickvonplaten can you help with this?", "Hey @rohit497,\r\n\r\nCould you please take a look at this entry in the forum:\r\nhttps://discuss.huggingface.co/t/batch-generation-with-gpt2/1517\r\n? \r\n\r\nIt also links to a test verifying that batch generation works correctly", "@patrickvonplaten I tried modifying my code to reflect the test (updated in the original issue as well) and updated to the latest version of transformers but it seems like the batch generation still doesn't work. Here are the values of `batch` and `single` that I get.\r\n\r\n```\r\nSINGLE:\r\n\r\ntensor([[10248, 14410, 5338, 318, 510, 351, 262, 4252, 17962, 616,\r\n 3329, 8027, 351, 617, 32856, 290, 616, 10038, 373, 2712,\r\n 16576, 416, 3589, 13, 314, 635, 1392, 616, 1492]],\r\n device='cuda:0')\r\ntensor([[ 2061, 466, 345, 477, 466, 284, 787, 340, 257, 1049,\r\n 1110, 290, 616, 10038, 373, 1972, 38427, 1701, 383, 2368,\r\n 1048, 508, 1965, 502, 326]], device='cuda:0')\r\n\r\nBATCH:\r\n\r\ntensor([[10248, 14410, 5338, 318, 510, 351, 262, 4252, 17962, 616,\r\n 3329, 8027, 351, 617, 32856, 290, 616, 10038, 373, 2712,\r\n 16576, 416, 3589, 13, 314, 635, 1392, 616, 1492],\r\n [50256, 50256, 50256, 50256, 2061, 466, 345, 477, 466, 284,\r\n 787, 340, 257, 1049, 1110, 290, 616, 10038, 373, 257,\r\n 1310, 1180, 30, 50256, 50256, 50256, 50256, 50256, 50256]],\r\n device='cuda:0')\r\n```\r\n\r\nAs you can see, the first sentence (i.e. the longer one) is matched because it needs no padding. However, the second sentence has padding on the left and it seems like it generates the eos token (the pad token) a lot. Am I missing something here?\r\n\r\n", "On further investigation, I found that if `do_sample ` is set to `False`, the batch generation works as expected but it fails with sampling. For my project, I'm trying to get diverse sentences from gpt2 using the same prompt, so sampling is very important. Is there a fix on the way for when `do_sample = True`?", "Hey @rohit497,\r\n\r\nI checked your sample and the code seems to work fine! 
Here to reproduce my results:\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport torch\r\n\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\n\r\nMODEL_CLASSES = {\r\n \"gpt2\": (GPT2LMHeadModel, GPT2Tokenizer),\r\n}\r\n\r\n\r\ndef set_seed():\r\n torch.manual_seed(42)\r\n\r\n\r\ndef generate_sequences_parallel(model, tokenizer, orig_prompt_list):\r\n\r\n set_seed()\r\n inputs = tokenizer(\r\n orig_prompt_list, add_special_tokens=False, return_tensors=\"pt\", padding=True\r\n )\r\n\r\n input_ids = inputs[\"input_ids\"]\r\n attn_masks = inputs[\"attention_mask\"]\r\n\r\n max_len_input_ids = max([len(input_id) for input_id in input_ids])\r\n\r\n output_sequences = model.generate(\r\n input_ids=input_ids,\r\n max_length=10 + max_len_input_ids,\r\n temperature=1.0,\r\n top_k=0,\r\n top_p=0.9,\r\n repetition_penalty=1.0,\r\n do_sample=True,\r\n num_return_sequences=1,\r\n attention_mask=attn_masks,\r\n )\r\n\r\n return output_sequences\r\n\r\n\r\nprompt_list_single = [\r\n [\r\n \"Good Morning Who is up with the sun Starting my morning routine with some Yoga and my mood was\"\r\n ],\r\n [\"What do you all do to make it a great day and my mood was\"],\r\n]\r\nprompt_list_batch = [\r\n \"Good Morning Who is up with the sun Starting my morning routine with some Yoga and my mood was\",\r\n \"What do you all do to make it a great day and my mood was\",\r\n]\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nmodel = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\ntokenizer.padding_side = \"left\"\r\n\r\n# Define PAD Token = EOS Token = 50256\r\ntokenizer.pad_token = tokenizer.eos_token\r\nmodel.config.pad_token_id = model.config.eos_token_id\r\n\r\n\r\nsingle = []\r\nfor elem in prompt_list_single:\r\n single.append(generate_sequences_parallel(model, tokenizer, elem))\r\n\r\nprint(\"BATCH\")\r\nprint()\r\n\r\nbatch = generate_sequences_parallel(model, tokenizer, prompt_list_batch)\r\n\r\nprint(tokenizer.batch_decode(batch, skip_special_tokens=True))\r\n```\r\nThe outputs look good so I think the attention_mask is correctly applied and batch generation works.\r\n\r\nThe reason that you the results are not identical is becasue you sample from two different distributions. When you pass a single example the softmax output has `batch_size=1` while when you use a batch the softmax output has `batch_size=2` dimension. That means that the first time you sample from a `(1, vocab_size)` distribution whereas the second time you sample from a `(2, vocab_size)` distribution. Now while each part of `(2, vocab_size)` is the same as for the single batch passes, the sampled output can differ because `torch.multinomial` does not yield the same results IMO (maybe you can check that actually). I adapted the test slightly for which there was a `torch.manual_seed()` for some reason which might be misleading. The test only checks for argmax as this is deterministic. \r\n\r\nHope this helps." ]
1,602
1,603
1,603
NONE
null
## Environment info - `transformers` version: '3.3.1' and '2.1.0' (Tested on both) - Platform: Linux Azure VM - Python version: 3.6.8 - PyTorch version (GPU?): 1.3.0 (Yes) - Tensorflow version (GPU?): N/A - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @TevenLeScao ## Information Model I am using (Bert, XLNet ...): GPT-2 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python import argparse import logging import os import sys import time sys.path.append('transformers/src') import numpy as np import torch import csv import copy from transformers import ( GPT2LMHeadModel, GPT2Tokenizer ) from multiprocessing import Pool, cpu_count from tqdm import tqdm MODEL_CLASSES = { "gpt2": (GPT2LMHeadModel, GPT2Tokenizer), } def set_seed(): np.random.seed(42) torch.manual_seed(42) torch.cuda.manual_seed_all(42) def generate_sequences_parallel(model, tokenizer, orig_prompt_list): set_seed() proc_cnt = cpu_count() - 2 prompt_list = copy.deepcopy(orig_prompt_list) max_seq_len = 128 requires_preprocessing = False if not requires_preprocessing: # GPT-2 doesn't require prepocess so we don't need to parallelize that inputs = tokenizer(orig_prompt_list, add_special_tokens=False, return_tensors="pt", padding=True) input_ids = inputs["input_ids"] attn_masks = inputs["attention_mask"] max_len_input_ids = max([len(input_id) for input_id in input_ids]) input_ids = input_ids.to('cuda') attn_masks = attn_masks.to('cuda') output_sequences = model.generate( input_ids=input_ids, max_length=10 + max_len_input_ids, temperature=1.0, top_k=0, top_p=0.9, repetition_penalty=1.0, do_sample=True, num_return_sequences=1, attention_mask=attn_masks ) return output_sequences prompt_list_single = [['Good Morning Who is up with the sun Starting my morning routine with some Yoga and my mood was'], ['What do you all do to make it a great day and my mood was']] prompt_list_batch = ['Good Morning Who is up with the sun Starting my morning routine with some Yoga and my mood was', 'What do you all do to make it a great day and my mood was'] tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') model.to('cuda') tokenizer.padding_side = "left" # Define PAD Token = EOS Token = 50256 tokenizer.pad_token = tokenizer.eos_token model.config.pad_token_id = model.config.eos_token_id single = [] for elem in prompt_list_single: single.append(generate_sequences_parallel(model, tokenizer, elem)) print('BATCH') print() batch = generate_sequences_parallel(model, tokenizer, prompt_list_batch) assert(torch.eq(single[0],batch[0])) assert(torch.eq(single[1],batch[1])) ``` ## Expected behavior I expect the results of this script with batch size 1 to be the size as batch size 2 but it just ignores all the generated attention_ masks and position_ids. I've looked at #3021 and #3167 but those don't seem to offer a concrete solution. Is there some way to use GPT-2's batch generation? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7745/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7744
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7744/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7744/comments
https://api.github.com/repos/huggingface/transformers/issues/7744/events
https://github.com/huggingface/transformers/issues/7744
719,608,760
MDU6SXNzdWU3MTk2MDg3NjA=
7,744
cannot load "layoutlm-base-uncased"
{ "login": "rzhao6", "id": 31970475, "node_id": "MDQ6VXNlcjMxOTcwNDc1", "avatar_url": "https://avatars.githubusercontent.com/u/31970475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rzhao6", "html_url": "https://github.com/rzhao6", "followers_url": "https://api.github.com/users/rzhao6/followers", "following_url": "https://api.github.com/users/rzhao6/following{/other_user}", "gists_url": "https://api.github.com/users/rzhao6/gists{/gist_id}", "starred_url": "https://api.github.com/users/rzhao6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rzhao6/subscriptions", "organizations_url": "https://api.github.com/users/rzhao6/orgs", "repos_url": "https://api.github.com/users/rzhao6/repos", "events_url": "https://api.github.com/users/rzhao6/events{/privacy}", "received_events_url": "https://api.github.com/users/rzhao6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nYes the error message is right, you have to add \"microsoft/\" before the name:\r\n```\r\nLayoutLMTokenizer.from_pretrained('microsoft/layoutlm-base-uncased')\r\n```", "ah i see, i directly used the line in the example here https://huggingface.co/transformers/model_doc/layoutlm.html#layoutlmmodel\r\n\r\nthank you for helping!" ]
1,602
1,602
1,602
NONE
null
hi, im trying to do "LayoutLMTokenizer.from_pretrained('layoutlm-base-uncased')", and got an error saying "OSError: Model name 'layoutlm-base-uncased' was not found in tokenizers model name list (microsoft/layoutlm-base-uncased, microsoft/layoutlm-large-uncased). We assumed 'layoutlm-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url." thank you! - `transformers` version: 3.3.1 - Python version: 3.7.7 - PyTorch version (GPU?): 1.6.0, GPU @mfuntowicz
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7744/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7743
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7743/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7743/comments
https://api.github.com/repos/huggingface/transformers/issues/7743/events
https://github.com/huggingface/transformers/issues/7743
719,529,537
MDU6SXNzdWU3MTk1Mjk1Mzc=
7,743
should PegasusTokenizer replace `/n` with `<n>`?
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 2368374212, "node_id": "MDU6TGFiZWwyMzY4Mzc0MjEy", "url": "https://api.github.com/repos/huggingface/transformers/labels/pegasus", "name": "pegasus", "color": "1f76a8", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "Yes, will have to ensure we add a test then. you can assign this to me, @sshleifer ", "Mildly off topic: \r\nI wonder if there is a way to read in a file one \"line\" at a time, where the system for deciding line numbers is the one text-editors use rather than splitting by '\\n'.\r\nHere `vim` allows me to write a line with many newline symbols!\r\n![image](https://user-images.githubusercontent.com/6045025/95786026-d903b200-0ca4-11eb-8106-926d694122aa.png)\r\n\r\n", "Which means they are probable escaped `\\n` chars - check the saved file in `less` and see if it's really a new line or just `\\\\n`? between line 1 and line 2 in your snapshot there lies a real `\\n`.\r\n\r\nI think what you are after is some sort of binary format ala python's `b\"line\\nline2\"` ", "Your first line is composed of the characters `\\` and `n` and not the actual character customarily represented by `\\n` which is `hex(0a)` (ascii code = 10), no?", "a few clarification questions, @sshleifer:\r\n\r\n> On the encode/text -> ids side I'm certain.\r\n\r\nDo we want this for the Reformer (As it's pegasus' super-class) or just pegasus?\r\n\r\n> On the decode/ids -> text side, I'm worried about breaking run_eval.py, which reads the generations from disk before calculating rouge here\r\n\r\nSurely, returning `<n>` in the final results is half-baked. And surely, replacing those will break run_eval\r\n\r\nHow does run_eval currently handle this situation for other multiline generators? If it doesn't, then we should switch it to a different from plain text format to handle this, since sooner or later we will run into this limitation anyway. Switching to a csv format would probably be the simplest \"upgrade\", that will take care of new lines automatically.", "(1) just pegasus\r\n(2) It doesn't handle the situation -- it leaves `<n>` in the output and trusts `calculate_rouge_score` (which calls `add_newline_to_end_of_each_sentence`) to temporarily remove `<n>` and then add `\\n` between sentences, thereby computing `rougeLsum` correctly. This happens after results are saved, and therefore generations still have `<n>`.\r\nhttps://github.com/huggingface/transformers/blob/dc552b9b7025ea9c38717f30ad3d69c2a972049d/examples/seq2seq/sentence_splitter.py#L18\r\n\r\n", "What I'm asking is shouldn't pegasus's `decode` deliver final results devoid of internally used tokens like `<n>`? 
If the input may contain `\\n`, the output should match and also contain `\\n` if the generator intended so.\r\n\r\nIf this is correct then our tools need to work with this requirement and not bend the requirements to their needs.", "OK, I did the override and don't know enough about pegasus to tell whether it does the right thing.\r\n\r\nCurrently: `\"a\\nb\"` gets tokenized as `\"_a\", \"_b\"`.\r\n\r\nIf I add a `_tokenize` override (pegasus inherits it) and add `text = re.sub(r\"\\r?\\n\", \"<n>\", text)`, now the above produces: `\"_a\", \"<n>\", \"b\" ` - notice that b is no longer tokenized in the same way - it's missing the leading \"_\".\r\n\r\nHere is a much longer test:\r\n\r\n```\r\nfrom transformers.tokenization_pegasus import PegasusTokenizer, PegasusTokenizerFast\r\ntokenizer = PegasusTokenizer.from_pretrained(\"google/pegasus-large\")\r\n\r\ns1 = \"This is test.\"\r\ns2 = \"Testing!\"\r\ninputs = [f\"{s1} {s2}\", f\"{s1}\\n{s2}\", f\"{s1}\\r\\n{s2}\", f\"{s1}\\n\\n{s2}\"]\r\ne1 = ['▁This', '▁is', '▁test', '.']\r\ne2 = ['▁Testing', '!']\r\nexpected = [ e1 + e2, e1 + ['<n>'] + e2, e1 + ['<n>'] + e2, e1 + ['<n>', '<n>'] + e2]\r\n\r\nfor i, t in enumerate(inputs):\r\n i\r\n f\"inp: {t}\"\r\n o = tokenizer._tokenize(t)\r\n f\"got: {o}\"\r\n f\"exp: {expected[i]}\"\r\n #assert o == expected[i], \"not matched\"\r\n```\r\n\r\nSo with the new line we get:\r\n```\r\n\"got: ['▁This', '▁is', '▁test', '.', '<n>', 'Testing', '!']\"\r\n\"exp: ['▁This', '▁is', '▁test', '.', '<n>', '▁Testing', '!']\"\r\n```\r\n\r\nThis doesn't look right, correct?", "You can play with the new test and check that it does the right thing, PR https://github.com/huggingface/transformers/pull/7877", "@sshleifer, you probably need to close this one too", "Hi all, I'm back on this thread to possibly re-open the discussion: it's important for my model to learn where the newlines should be placed in the output, and from my understanding, this information is being removed by the Pegasus tokenizer:\r\n\r\nFor example, if my target output is \r\n\r\n```\r\nSECTION HEADING \\n\\nHere is the output for this section, cool!\r\n```\r\n\r\nIf I encode and decode through the tokenizer, it becomes\r\n```\r\nSECTION HEADING Here is the output for this section, cool!\r\n```\r\n\r\nSo I guess my question would be \r\n1. Am I missing something, and is there some toggle I can enable that would allow for the tokenizer to preserve new lines?\r\n2. If there is not a toggle, is there a reason that one shouldn't be added?\r\n\r\nOf course I have the option of pre-processing my text to convert new lines to `<n>` and then post-processing to turn the `<n>` back to `\\n`, but seems a little hacky for my liking 😅 \r\n", "It might help to open a new issue, @njbrake \r\n\r\nAs you can see from https://github.com/huggingface/transformers/pull/7877 why this one was closed.\r\n\r\nI'm not sure who maintains Pegasus these days as Sam has moved on, but surely you will discover in the new Issue.\r\n" ]
1,602
1,677
1,603
CONTRIBUTOR
null
On the `encode`/text -> ids side I'm certain. On the `decode`/ids -> text side, I'm worried about breaking `run_eval.py`, which reads the generations from disk before calculating rouge [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_eval.py#L141) cc @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7743/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7742
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7742/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7742/comments
https://api.github.com/repos/huggingface/transformers/issues/7742/events
https://github.com/huggingface/transformers/pull/7742
719,515,437
MDExOlB1bGxSZXF1ZXN0NTAxNjg5NjE0
7,742
Avoid unnecessary DDP synchronization when gradient_accumulation_steps > 1
{ "login": "noamwies", "id": 3121971, "node_id": "MDQ6VXNlcjMxMjE5NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/3121971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noamwies", "html_url": "https://github.com/noamwies", "followers_url": "https://api.github.com/users/noamwies/followers", "following_url": "https://api.github.com/users/noamwies/following{/other_user}", "gists_url": "https://api.github.com/users/noamwies/gists{/gist_id}", "starred_url": "https://api.github.com/users/noamwies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/noamwies/subscriptions", "organizations_url": "https://api.github.com/users/noamwies/orgs", "repos_url": "https://api.github.com/users/noamwies/repos", "events_url": "https://api.github.com/users/noamwies/events{/privacy}", "received_events_url": "https://api.github.com/users/noamwies/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? This PR avoid unnecessary ```DistributedDataParallel``` synchronization when gradient_accumulation_steps > 1 by using ```DistributedDataParallel.no_sync```. This lead to speedup when training with multiple gpu's for example the ```run_language_modeling.py``` complete wiki-2 epoch in 85 seconds instead of 111 ```bash python run_language_modeling.py --output_dir=runs --model_type=gpt2 --model_name_or_path=gpt2 --per_device_train_batch_size 6 --do_train --train_data_file=$TRAIN_FILE --gradient_accumulation_steps=32 --fp16 --block_size 513 --overwrite_output_dir ``` ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7742/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7742", "html_url": "https://github.com/huggingface/transformers/pull/7742", "diff_url": "https://github.com/huggingface/transformers/pull/7742.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7742.patch", "merged_at": 1602596805000 }
https://api.github.com/repos/huggingface/transformers/issues/7741
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7741/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7741/comments
https://api.github.com/repos/huggingface/transformers/issues/7741/events
https://github.com/huggingface/transformers/pull/7741
719,509,257
MDExOlB1bGxSZXF1ZXN0NTAxNjg0NjA2
7,741
Avoid unnecessary DDP synchronization when gradient_accumulation_steps > 1
{ "login": "noamwies", "id": 3121971, "node_id": "MDQ6VXNlcjMxMjE5NzE=", "avatar_url": "https://avatars.githubusercontent.com/u/3121971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/noamwies", "html_url": "https://github.com/noamwies", "followers_url": "https://api.github.com/users/noamwies/followers", "following_url": "https://api.github.com/users/noamwies/following{/other_user}", "gists_url": "https://api.github.com/users/noamwies/gists{/gist_id}", "starred_url": "https://api.github.com/users/noamwies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/noamwies/subscriptions", "organizations_url": "https://api.github.com/users/noamwies/orgs", "repos_url": "https://api.github.com/users/noamwies/repos", "events_url": "https://api.github.com/users/noamwies/events{/privacy}", "received_events_url": "https://api.github.com/users/noamwies/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? This PR avoids unnecessary ```DistributedDataParallel``` synchronization when gradient_accumulation_steps > 1 by using ```DistributedDataParallel.no_sync```. This leads to a speedup when training with multiple GPUs; for example, ```run_language_modeling.py``` completes a wiki-2 epoch in 85 seconds instead of 111: ```bash python run_language_modeling.py --output_dir=runs --model_type=gpt2 --model_name_or_path=gpt2 --per_device_train_batch_size 6 --do_train --train_data_file=$TRAIN_FILE --gradient_accumulation_steps=32 --fp16 --block_size 513 --overwrite_output_dir ``` ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7741/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7741", "html_url": "https://github.com/huggingface/transformers/pull/7741", "diff_url": "https://github.com/huggingface/transformers/pull/7741.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7741.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7740
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7740/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7740/comments
https://api.github.com/repos/huggingface/transformers/issues/7740/events
https://github.com/huggingface/transformers/issues/7740
719,499,080
MDU6SXNzdWU3MTk0OTkwODA=
7,740
examples/seq2seq/finetune_trainer.py: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`.
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users/forest1988/followers", "following_url": "https://api.github.com/users/forest1988/following{/other_user}", "gists_url": "https://api.github.com/users/forest1988/gists{/gist_id}", "starred_url": "https://api.github.com/users/forest1988/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/forest1988/subscriptions", "organizations_url": "https://api.github.com/users/forest1988/orgs", "repos_url": "https://api.github.com/users/forest1988/repos", "events_url": "https://api.github.com/users/forest1988/events{/privacy}", "received_events_url": "https://api.github.com/users/forest1988/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This happens in every trainer I've ever used and you should just ignore it. It's happening in pytorch so we cant control the type of error.", "cc @sgugger who may have a different opinion.", "This is a warning that appears all the time from PyTofh if you save your learning rate scheduler. You should file an issue on their side if you find it annoying.\r\nNormally, the latest version of transformers should catch that warning from you though.", "@sshleifer @sgugger \r\nThank you for quickly answering my question!\r\n\r\nI apologize that I misunderstood this UserWarning as to be caused by your codes.\r\nThanks to your kind explanations, I now understand that this is caused not by examples/seq2seq and transformers Trainer, but by PyTorch. \r\nI also understand that I will come across the same UserWarning all the time if I save the learning rate scheduler.\r\n\r\nI'm relieved to hear that I can ignore it if I don't find it annoying.\r\n\r\nThanks again!" ]
1,602
1,602
1,602
CONTRIBUTOR
null
## Environment info - `transformers` version: 3.3.1 - I did `pip install -e .` in the repository cloned from https://github.com/huggingface/transformers/tree/ba4bbd92bcb55febbfa06aaa1551738388ec7eb0 - Platform: Linux - Python version: 3.8.3 (anaconda3-2020.07) - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help examples/seq2seq: @sshleifer ## Information Model I am using (Bert, XLNet ...): Bart (facebook/bart-base) The problem arises when using: * [x] the official example scripts: (give details below) The tasks I am working on is: * [x] an official XSUM summarization task ## To reproduce Running examples/seq2seq/finetune_trainer.py as below. ```sh $ CUDA_VISIBLE_DEVICES=3 python finetune_trainer.py \ --learning_rate=3e-5 \ --fp16 \ --do_train --do_eval --do_predict --evaluate_during_training \ --predict_with_generate \ --n_val 1000 \ --model_name_or_path facebook/bart-base \ --data_dir ********/xsum/ \ --output_dir ******** \ 2>&1 | tee test.log ``` Then, I get an UserWarning message: ```sh 0%| | 0/76506 [00:00<?, ?it/s]/********/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:118: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. " ``` ## Expected behavior It's an UserWarning, not an Error. It may be not critical, but I want to know why this is caused. I'm sorry if the "Bug Report" isn't the right choice for this issue. I'm new to Transformers Trainer. I apologize that I can't distinguish whether this UserWarning belongs to examples/seq2seq/ or Trainer itself. Thank you in advance.
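For context on the warning quoted above, a minimal sketch of the call order PyTorch expects is shown below; the model, optimizer, and schedule are placeholders, not taken from ```finetune_trainer.py```.

```python
# Minimal placeholder training loop: call optimizer.step() before
# lr_scheduler.step(), which is the order the warning refers to.
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda step: 1.0)

for _ in range(3):
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()
    optimizer.step()       # update the weights first
    scheduler.step()       # then advance the learning-rate schedule
    optimizer.zero_grad()
```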
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7740/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7740/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7739
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7739/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7739/comments
https://api.github.com/repos/huggingface/transformers/issues/7739/events
https://github.com/huggingface/transformers/issues/7739
719,495,661
MDU6SXNzdWU3MTk0OTU2NjE=
7,739
Cannot load pretrained microsoft's layoutlm
{ "login": "MaxHoefl", "id": 14946739, "node_id": "MDQ6VXNlcjE0OTQ2NzM5", "avatar_url": "https://avatars.githubusercontent.com/u/14946739?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MaxHoefl", "html_url": "https://github.com/MaxHoefl", "followers_url": "https://api.github.com/users/MaxHoefl/followers", "following_url": "https://api.github.com/users/MaxHoefl/following{/other_user}", "gists_url": "https://api.github.com/users/MaxHoefl/gists{/gist_id}", "starred_url": "https://api.github.com/users/MaxHoefl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MaxHoefl/subscriptions", "organizations_url": "https://api.github.com/users/MaxHoefl/orgs", "repos_url": "https://api.github.com/users/MaxHoefl/repos", "events_url": "https://api.github.com/users/MaxHoefl/events{/privacy}", "received_events_url": "https://api.github.com/users/MaxHoefl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not a bug, when updating to torch 1.6 it works" ]
1,602
1,602
1,602
NONE
null
## Environment info - `transformers` version: 3.3.1 - Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @julien-c @sshleifer ## Information Model I am using: LayoutLM The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification import torch tokenizer = LayoutLMTokenizer.from_pretrained('microsoft/layoutlm-base-uncased') model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased') ``` ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) ~/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 926 try: --> 927 state_dict = torch.load(resolved_archive_file, map_location="cpu") 928 except Exception: ~/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args) 526 if _is_zipfile(opened_file): --> 527 with _open_zipfile_reader(f) as opened_zipfile: 528 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) ~/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/serialization.py in __init__(self, name_or_buffer) 223 def __init__(self, name_or_buffer): --> 224 super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) 225 RuntimeError: version_ <= kMaxSupportedFileFormatVersion INTERNAL ASSERT FAILED at /pytorch/caffe2/serialize/inline_container.cc:132, please report a bug to PyTorch. Attempted to read a PyTorch file with version 3, but the maximum supported version for reading is 2. Your PyTorch installation may be too old. 
(init at /pytorch/caffe2/serialize/inline_container.cc:132) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7faec0a9f193 in /home/user/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/lib/libc10.so) frame #1: caffe2::serialize::PyTorchStreamReader::init() + 0x1f5b (0x7faec3c279eb in /home/user/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/lib/libtorch.so) frame #2: caffe2::serialize::PyTorchStreamReader::PyTorchStreamReader(std::string const&) + 0x64 (0x7faec3c28c04 in /home/user/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/lib/libtorch.so) frame #3: <unknown function> + 0x6c1ef6 (0x7faf0bb54ef6 in /home/user/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/lib/libtorch_python.so) frame #4: <unknown function> + 0x295928 (0x7faf0b728928 in /home/user/anaconda3/envs/nlp/lib/python3.8/site-packages/torch/lib/libtorch_python.so) frame #5: PyCFunction_Call + 0x56 (0x5640b0549f76 in /home/user/anaconda3/envs/nlp/bin/python) frame #6: _PyObject_MakeTpCall + 0x22f (0x5640b050785f in /home/user/anaconda3/envs/nlp/bin/python) frame #7: <unknown function> + 0x18bfdc (0x5640b0555fdc in /home/user/anaconda3/envs/nlp/bin/python) frame #8: PyVectorcall_Call + 0x71 (0x5640b0507041 in /home/user/anaconda3/envs/nlp/bin/python) frame #9: <unknown function> + 0x18c92a (0x5640b055692a in /home/user/anaconda3/envs/nlp/bin/python) frame #10: _PyObject_MakeTpCall + 0x1a4 (0x5640b05077d4 in /home/user/anaconda3/envs/nlp/bin/python) frame #11: _PyEval_EvalFrameDefault + 0x4596 (0x5640b058ef56 in /home/user/anaconda3/envs/nlp/bin/python) frame #12: _PyEval_EvalCodeWithName + 0x659 (0x5640b0554e19 in /home/user/anaconda3/envs/nlp/bin/python) frame #13: _PyObject_FastCallDict + 0x20c (0x5640b055648c in /home/user/anaconda3/envs/nlp/bin/python) frame #14: _PyObject_Call_Prepend + 0x63 (0x5640b0556733 in /home/user/anaconda3/envs/nlp/bin/python) frame #15: <unknown function> + 0x18c8ca (0x5640b05568ca in /home/user/anaconda3/envs/nlp/bin/python) frame #16: _PyObject_MakeTpCall + 0x1a4 (0x5640b05077d4 in /home/user/anaconda3/envs/nlp/bin/python) frame #17: _PyEval_EvalFrameDefault + 0x475 (0x5640b058ae35 in /home/user/anaconda3/envs/nlp/bin/python) frame #18: _PyEval_EvalCodeWithName + 0x2d2 (0x5640b0554a92 in /home/user/anaconda3/envs/nlp/bin/python) frame #19: _PyFunction_Vectorcall + 0x1e3 (0x5640b0555943 in /home/user/anaconda3/envs/nlp/bin/python) frame #20: <unknown function> + 0x10011a (0x5640b04ca11a in /home/user/anaconda3/envs/nlp/bin/python) frame #21: _PyEval_EvalCodeWithName + 0x7df (0x5640b0554f9f in /home/user/anaconda3/envs/nlp/bin/python) frame #22: <unknown function> + 0x18bd20 (0x5640b0555d20 in /home/user/anaconda3/envs/nlp/bin/python) frame #23: <unknown function> + 0x10077f (0x5640b04ca77f in /home/user/anaconda3/envs/nlp/bin/python) frame #24: _PyEval_EvalCodeWithName + 0x2d2 (0x5640b0554a92 in /home/user/anaconda3/envs/nlp/bin/python) frame #25: PyEval_EvalCodeEx + 0x44 (0x5640b0555754 in /home/user/anaconda3/envs/nlp/bin/python) frame #26: PyEval_EvalCode + 0x1c (0x5640b05e3edc in /home/user/anaconda3/envs/nlp/bin/python) frame #27: <unknown function> + 0x24f083 (0x5640b0619083 in /home/user/anaconda3/envs/nlp/bin/python) frame #28: <unknown function> + 0x140699 (0x5640b050a699 in /home/user/anaconda3/envs/nlp/bin/python) frame #29: <unknown function> + 0xfeb84 (0x5640b04c8b84 in /home/user/anaconda3/envs/nlp/bin/python) frame #30: _PyGen_Send + 0x149 (0x5640b054edc9 in /home/user/anaconda3/envs/nlp/bin/python) frame #31: 
_PyEval_EvalFrameDefault + 0x49a3 (0x5640b058f363 in /home/user/anaconda3/envs/nlp/bin/python) frame #32: _PyGen_Send + 0x149 (0x5640b054edc9 in /home/user/anaconda3/envs/nlp/bin/python) frame #33: _PyEval_EvalFrameDefault + 0x49a3 (0x5640b058f363 in /home/user/anaconda3/envs/nlp/bin/python) frame #34: _PyGen_Send + 0x149 (0x5640b054edc9 in /home/user/anaconda3/envs/nlp/bin/python) frame #35: <unknown function> + 0x1701cd (0x5640b053a1cd in /home/user/anaconda3/envs/nlp/bin/python) frame #36: <unknown function> + 0x10075e (0x5640b04ca75e in /home/user/anaconda3/envs/nlp/bin/python) frame #37: _PyFunction_Vectorcall + 0x10b (0x5640b055586b in /home/user/anaconda3/envs/nlp/bin/python) frame #38: <unknown function> + 0xfeb84 (0x5640b04c8b84 in /home/user/anaconda3/envs/nlp/bin/python) frame #39: _PyFunction_Vectorcall + 0x10b (0x5640b055586b in /home/user/anaconda3/envs/nlp/bin/python) frame #40: <unknown function> + 0x10075e (0x5640b04ca75e in /home/user/anaconda3/envs/nlp/bin/python) frame #41: _PyEval_EvalCodeWithName + 0x2d2 (0x5640b0554a92 in /home/user/anaconda3/envs/nlp/bin/python) frame #42: _PyFunction_Vectorcall + 0x1e3 (0x5640b0555943 in /home/user/anaconda3/envs/nlp/bin/python) frame #43: <unknown function> + 0x18be79 (0x5640b0555e79 in /home/user/anaconda3/envs/nlp/bin/python) frame #44: PyVectorcall_Call + 0x71 (0x5640b0507041 in /home/user/anaconda3/envs/nlp/bin/python) frame #45: _PyEval_EvalFrameDefault + 0x1fdb (0x5640b058c99b in /home/user/anaconda3/envs/nlp/bin/python) frame #46: _PyEval_EvalCodeWithName + 0x659 (0x5640b0554e19 in /home/user/anaconda3/envs/nlp/bin/python) frame #47: <unknown function> + 0x18bd20 (0x5640b0555d20 in /home/user/anaconda3/envs/nlp/bin/python) frame #48: <unknown function> + 0x10011a (0x5640b04ca11a in /home/user/anaconda3/envs/nlp/bin/python) frame #49: <unknown function> + 0x215056 (0x5640b05df056 in /home/user/anaconda3/envs/nlp/bin/python) frame #50: <unknown function> + 0x1847f3 (0x5640b054e7f3 in /home/user/anaconda3/envs/nlp/bin/python) frame #51: <unknown function> + 0x140699 (0x5640b050a699 in /home/user/anaconda3/envs/nlp/bin/python) frame #52: <unknown function> + 0xfeb84 (0x5640b04c8b84 in /home/user/anaconda3/envs/nlp/bin/python) frame #53: _PyEval_EvalCodeWithName + 0x659 (0x5640b0554e19 in /home/user/anaconda3/envs/nlp/bin/python) frame #54: _PyFunction_Vectorcall + 0x1e3 (0x5640b0555943 in /home/user/anaconda3/envs/nlp/bin/python) frame #55: <unknown function> + 0x10075e (0x5640b04ca75e in /home/user/anaconda3/envs/nlp/bin/python) frame #56: <unknown function> + 0x215056 (0x5640b05df056 in /home/user/anaconda3/envs/nlp/bin/python) frame #57: <unknown function> + 0x1847f3 (0x5640b054e7f3 in /home/user/anaconda3/envs/nlp/bin/python) frame #58: <unknown function> + 0x140699 (0x5640b050a699 in /home/user/anaconda3/envs/nlp/bin/python) frame #59: <unknown function> + 0xfeb84 (0x5640b04c8b84 in /home/user/anaconda3/envs/nlp/bin/python) frame #60: _PyEval_EvalCodeWithName + 0x659 (0x5640b0554e19 in /home/user/anaconda3/envs/nlp/bin/python) frame #61: <unknown function> + 0x18bd20 (0x5640b0555d20 in /home/user/anaconda3/envs/nlp/bin/python) frame #62: <unknown function> + 0xfeb84 (0x5640b04c8b84 in /home/user/anaconda3/envs/nlp/bin/python) frame #63: <unknown function> + 0x215056 (0x5640b05df056 in /home/user/anaconda3/envs/nlp/bin/python) During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-6-29a9c0941587> in <module> 3 4 tokenizer = 
LayoutLMTokenizer.from_pretrained('microsoft/layoutlm-base-uncased') ----> 5 model = LayoutLMForTokenClassification.from_pretrained('microsoft/layoutlm-base-uncased') 6 ~/anaconda3/envs/nlp/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 927 state_dict = torch.load(resolved_archive_file, map_location="cpu") 928 except Exception: --> 929 raise OSError( 930 "Unable to load weights from pytorch checkpoint file. " 931 "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. " OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. ``` ## Expected behavior Load pretrained layoutlm
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7739/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7738
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7738/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7738/comments
https://api.github.com/repos/huggingface/transformers/issues/7738/events
https://github.com/huggingface/transformers/pull/7738
719,460,744
MDExOlB1bGxSZXF1ZXN0NTAxNjQ0NjQ0
7,738
Add license info to nlptown/bert-base-multilingual-uncased-sentiment
{ "login": "alexcombessie", "id": 4739848, "node_id": "MDQ6VXNlcjQ3Mzk4NDg=", "avatar_url": "https://avatars.githubusercontent.com/u/4739848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexcombessie", "html_url": "https://github.com/alexcombessie", "followers_url": "https://api.github.com/users/alexcombessie/followers", "following_url": "https://api.github.com/users/alexcombessie/following{/other_user}", "gists_url": "https://api.github.com/users/alexcombessie/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexcombessie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexcombessie/subscriptions", "organizations_url": "https://api.github.com/users/alexcombessie/orgs", "repos_url": "https://api.github.com/users/alexcombessie/repos", "events_url": "https://api.github.com/users/alexcombessie/events{/privacy}", "received_events_url": "https://api.github.com/users/alexcombessie/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[ "Thanks Alex!" ]
1,602
1,602
1,602
CONTRIBUTOR
null
PR to close this thread: https://discuss.huggingface.co/t/what-is-the-license-of-nlptown-bert-base-multilingual-uncased-sentiment/1445/4
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7738/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7738/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7738", "html_url": "https://github.com/huggingface/transformers/pull/7738", "diff_url": "https://github.com/huggingface/transformers/pull/7738.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7738.patch", "merged_at": 1602518171000 }
https://api.github.com/repos/huggingface/transformers/issues/7737
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7737/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7737/comments
https://api.github.com/repos/huggingface/transformers/issues/7737/events
https://github.com/huggingface/transformers/issues/7737
719,455,744
MDU6SXNzdWU3MTk0NTU3NDQ=
7,737
blenderbot-3B has wrong model card
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false }
[ { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "repos_url": "https://api.github.com/users/sshleifer/repos", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "type": "User", "site_admin": false } ]
[ "fixed lazily. " ]
1,602
1,604
1,604
CONTRIBUTOR
null
bb90 too
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7737/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7736
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7736/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7736/comments
https://api.github.com/repos/huggingface/transformers/issues/7736/events
https://github.com/huggingface/transformers/pull/7736
719,423,862
MDExOlB1bGxSZXF1ZXN0NTAxNjE0MDc5
7,736
Make T5 Supports Gradient Checkpointing
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I've tested your code, it raise an error\r\n`TypeError('CheckpointFunctionBackward.forward: expected Variable (got NoneType) for return value 1') \r\n> /share/home/dwaydwaydway/miniconda3/envs/t5/lib/python3.7/site-packages/torch/utils/checkpoint.py(163)checkpoint()\r\n 162 \r\n--> 163 return CheckpointFunction.apply(function, preserve, *args)\r\n 164`", "@dwaydwaydway yes, it is really annoying issue. I will try to fix it and open a new pull when it is done." ]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? Since the T5 3B and 11B models are otherwise too large to be fine-tuned on a single GPU, gradient checkpointing will allow these models to be fine-tuned on a single GPU, at the cost of more training time. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? T5: @patrickvonplaten
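As a conceptual illustration of what the PR aims to enable (this is not the PR's T5 implementation; the block stack below is a made-up module), gradient checkpointing recomputes a block's activations during the backward pass instead of storing them:

```python
# Conceptual sketch, not the PR's code: wrap each block with
# torch.utils.checkpoint so its activations are recomputed in backward.
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedStack(torch.nn.Module):
    def __init__(self, blocks):
        super().__init__()
        self.blocks = torch.nn.ModuleList(blocks)

    def forward(self, hidden_states):
        for block in self.blocks:
            # Trades extra compute for lower activation memory.
            hidden_states = checkpoint(block, hidden_states)
        return hidden_states
```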
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7736/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7736", "html_url": "https://github.com/huggingface/transformers/pull/7736", "diff_url": "https://github.com/huggingface/transformers/pull/7736.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7736.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7735
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7735/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7735/comments
https://api.github.com/repos/huggingface/transformers/issues/7735/events
https://github.com/huggingface/transformers/issues/7735
719,423,550
MDU6SXNzdWU3MTk0MjM1NTA=
7,735
Tokenizer Fast bug: ValueError: TextInputSequence must be str
{ "login": "mariusjohan", "id": 49961316, "node_id": "MDQ6VXNlcjQ5OTYxMzE2", "avatar_url": "https://avatars.githubusercontent.com/u/49961316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariusjohan", "html_url": "https://github.com/mariusjohan", "followers_url": "https://api.github.com/users/mariusjohan/followers", "following_url": "https://api.github.com/users/mariusjohan/following{/other_user}", "gists_url": "https://api.github.com/users/mariusjohan/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariusjohan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariusjohan/subscriptions", "organizations_url": "https://api.github.com/users/mariusjohan/orgs", "repos_url": "https://api.github.com/users/mariusjohan/repos", "events_url": "https://api.github.com/users/mariusjohan/events{/privacy}", "received_events_url": "https://api.github.com/users/mariusjohan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, thanks for opening such a detailed issue with a notebook!\r\n\r\nUnfortunately, fast tokenizers don’t currently work with the QA pipeline. They will in the second pipeline version which is expected in a few weeks to a few months, but right now please use the slow tokenizers for the QA pipeline. \r\n\r\nThanks!", "I think the issue is still there.", "Please open a new issue with your environment, an example of what the issue is and how you expect it to work. Thank you.", "> Hi, thanks for opening such a detailed issue with a notebook!\r\n> \r\n> Unfortunately, fast tokenizers don’t currently work with the QA pipeline. They will in the second pipeline version which is expected in a few weeks to a few months, but right now please use the slow tokenizers for the QA pipeline.\r\n> \r\n> Thanks!\r\n\r\n\r\n~and how do I do that? I don't understand the difference from slow and fast tokenizers. Do I need to train my tokenizer again, or can I just somehow \"cast\" the fast into the slow version?~\r\n\r\n\r\nI could fix this simply by changing:\r\n\r\n\r\n from transformers import RobertaTokenizerFast \r\n tokenizer = RobertaTokenizerFast\r\n\r\nto:\r\n\r\n from transformers import RobertaTokenizer \r\n tokenizer = RobertaTokenizer", "I also find this problem when using transformers. I check my data and find that if csv file contains much Null data or the length of str is 0, this error will be returned. I filter these data and I can successfully run my code.", "double check the data and make sure there is no nan in your data, this is the problem i encountered" ]
1,602
1,685
1,602
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: In a Colab enviroment aswell as on my local windows version - Python version: 3.7.4 - PyTorch version (GPU?): Yes and No - Tensorflow version (GPU?): I didn't try with tensorflow, but I suspect that it has nothing to do with it - Using GPU in script?: I used the automodeling on a GPU session in Colab - Using distributed or parallel set-up in script?: Nope ### Who can help @mfuntowicz ## Information Model I am using: Initially Electra but I tested it out with BERT, DistilBERT and RoBERTa It's using your scripts, but again, it believe it wouldn't work if I did it myself. The model is trained on SQuAD. #### Error traceback ``` """ Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/usr/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar return list(map(*args)) File "/usr/local/lib/python3.6/dist-packages/transformers/data/processors/squad.py", line 165, in squad_convert_example_to_features return_token_type_ids=True, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2050, in encode_plus **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_fast.py", line 473, in _encode_plus **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_fast.py", line 376, in _batch_encode_plus is_pretokenized=is_split_into_words, File "/usr/local/lib/python3.6/dist-packages/tokenizers/implementations/base_tokenizer.py", line 212, in encode return self._tokenizer.encode(sequence, pair, is_pretokenized, add_special_tokens) ValueError: TextInputSequence must be str """ ``` ## To reproduce Steps to reproduce the behavior: 1. Download model and tokenizer (fast) 2. Test it out with the transformers pipeline for a question answering task I've also made a small notebook to test it out for yourself. [here](https://colab.research.google.com/drive/11_qK3w7OWBTYC_GdspAjFna2XJkBgODU?usp=sharing) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Instead of giving an error, I would expect the tokenizer to work... <!-- A clear and concise description of what you would expect to happen. -->
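Based on the discussion in the comments above (fast tokenizers were not yet supported by the QA pipeline at the time), a hedged workaround sketch is to load a slow tokenizer; the checkpoint name below is only an example, not the reporter's model.

```python
# Workaround sketch from the discussion above: pair the QA pipeline with a
# slow tokenizer (use_fast=False). The checkpoint name is just an example.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

name = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(name, use_fast=False)
model = AutoModelForQuestionAnswering.from_pretrained(name)
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="Where are bugs reported?", context="Bugs are reported on the GitHub issue tracker."))
```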
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7735/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7735/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7734
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7734/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7734/comments
https://api.github.com/repos/huggingface/transformers/issues/7734/events
https://github.com/huggingface/transformers/issues/7734
719,416,327
MDU6SXNzdWU3MTk0MTYzMjc=
7,734
GLUE STS-B on longer sequence lengths doesn't work?
{ "login": "Querela", "id": 1648294, "node_id": "MDQ6VXNlcjE2NDgyOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1648294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Querela", "html_url": "https://github.com/Querela", "followers_url": "https://api.github.com/users/Querela/followers", "following_url": "https://api.github.com/users/Querela/following{/other_user}", "gists_url": "https://api.github.com/users/Querela/gists{/gist_id}", "starred_url": "https://api.github.com/users/Querela/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Querela/subscriptions", "organizations_url": "https://api.github.com/users/Querela/orgs", "repos_url": "https://api.github.com/users/Querela/repos", "events_url": "https://api.github.com/users/Querela/events{/privacy}", "received_events_url": "https://api.github.com/users/Querela/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello! Thanks for opening such a detailed issue! You would have more luck asking such an open ended/research question on the [forums](https://discuss.huggingface.co), however!", "Thank you, I have asked the same question in the forums. I currently only want feedback whether others have the same issue with the standard GLUE script. And then what solutions can be considered...\r\n\r\nLink: https://discuss.huggingface.co/t/finetuning-sequence-pairs-glue-with-higher-sequence-lengths-seems-to-fail/1656\r\n\r\n_I created the issue because GitHub is my first starting point when I have such an question, and it does not seem to fit StackOverflow? Maybe another stackexchange?_", "So, I will rework this answer with more details later on. But the TL;DR for now.\r\n\r\n_Gradient Accumulation for the win._\r\n\r\nI'm still not exactly sure why it will not train with a batch size of 16 and sequence length of 256 (as the results will just skew to either 0 or 1), but using gradient accumulation (e. g. 64 samples, 4 * 16) to virtually augment the batch size before backpropagation seems to work fine and results are as expected (same or better compared to default sequence length of 128).\r\n\r\nReasons seem to vary and are not exactly clear, like different data (not shuffled but completely different topic/structure) or learning rate + optimizer. I will continue to look into this.", "This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.\n\nIf you think this still needs to be addressed please comment on this thread." ]
1,602
1,614
1,614
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I have an issue where I tried to use the [standard GLUE finetuning script](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py) for task STS-B with longer sequence lengths and the results are **bad** (see below). Correlation decreases massively when using longer sequence lengths, and when I instead use binary classification with two classes instead of regression, it is the same situation. For 128 and with some data (e. g. Yelp) 256 works well but longer sequence lengths then simply fail. My _assumption_ was that longer sequence lengths should results in similar or sometimes better results and that for shorter input sequences padding is added but not incorporated into the embedding because of masking (where the input sequence is and where it is not)? Initially, I was using the Yelp Business Review dataset for sentiment prediction (which worked well for sequence lengths of 128, 256 and 512) but pairing same reviews sentiments for the same business should be similar to sequence pair classification (I know the task/data works) but it only gave good results for a sequence length of 128 and 256, but 400 or 512 just predicted zeros (as far as I observed). I then tried to just use this with the GLUE STS-B data with the same issue happening. Background: Before that, I was using GluonNLP (MXNet) and the [BERT demo finetuning script](https://gluon-nlp.mxnet.io/examples/sentence_embedding/bert.html) (also GLUE STS-B like) with the same data and basically same framework/workflow (even hyperparameters) as here in PyTorch but there all sequence lengths worked, and longer sequence length even improved results (even with smaller batch sizes because of GPU RAM and longer training durations). As the input texts were were smaller and longer (about a third of the data, I guess) this was not that surprising. I'm currently trying to switch to `transformers` because of the larger choice and support of models... So, what am **I** doing wrong? I tried using a constant learning rate schedule (using the default learning rate in the code) but it gave no improvements. I tried different datasets also with almost similar end results. (even if input texts were longer than the maximum sequence length) Can others reproduce this? (Just switch to seqlen 512 and batchsize 8 / seqlen 256 and batchsize 16) Do I have to choose another padding strategy? 
--- Results on GeForce RTX 2080 with `transformers` version `3.3.1` and CUDA 10.2: ```bash # my script args (basically just changing the output dir and the sequence length (batch size for GPU memory reasons)) # transformers_copy being the cloned repo root folder export GLUE_DIR=data/glue export TASK_NAME=STS-B python transformers_copy/examples/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --data_dir data/sentiment/yelp-pair-b/ --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/glue_yelp_128_32 CUDA_VISIBLE_DEVICES=1 python transformers_copy/examples/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --data_dir data/glue/STS-B/ --max_seq_length 256 --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/glue_STS-B_256_16 --save_steps 1000 CUDA_VISIBLE_DEVICES=1 python transformers_copy/examples/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --data_dir data/glue/STS-B/ --max_seq_length 512 --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output/glue_STS-B_512_8 --save_steps 2000 ``` ```python # cat glue_STS-B_128_32/eval_results_sts-b.txt # seqlen 128 eval_loss = 0.5857866220474243 eval_pearson = 0.8675888610991327 eval_spearmanr = 0.8641174656753431 eval_corr = 0.865853163387238 epoch = 3.0 total_flos = 1434655122529536 ``` ```python # cat glue_STS-B_256_16/eval_results_sts-b.txt # seqlen 256 # this result should be bad, as far as I would think eval_loss = 2.2562920122146606 eval_pearson = 0.22274851498729242 eval_spearmanr = 0.09065396938535858 eval_corr = 0.1567012421863255 epoch = 3.0 total_flos = 2869310245059072 ``` ```python # cat glue_STS-B_512_8/eval_results_sts-b.txt # seqlen 512 eval_loss = 2.224635926246643 eval_pearson = 0.24041184048438544 eval_spearmanr = 0.08133980923357159 eval_corr = 0.1608758248589785 epoch = 3.0 total_flos = 5738620490118144 ``` Yelp (sentiment, single sequence) with sequence length of 512 ```python # cat yelp-sentiment-b_512_16_1/eval_results_sent-b.txt eval_loss = 0.2301591751359403 eval_acc = 0.92832 eval_f1 = 0.945765994794504 eval_acc_and_f1 = 0.937042997397252 eval_pearson = 0.8404006160382227 eval_spearmanr = 0.8404006160382247 eval_corr = 0.8404006160382237 eval_class_report = {'not same': {'precision': 0.9099418011639767, 'recall': 0.8792393761957215, 'f1-score': 0.8943271612218422, 'support': 17249}, 'same': {'precision': 0.937509375093751, 'recall': 0.954169338340814, 'f1-score': 0.945765994794504, 'support': 32751}, 'accuracy': 0.92832, 'macro avg': {'precision': 0.9237255881288639, 'recall': 0.9167043572682677, 'f1-score': 0.920046578008173, 'support': 50000}, 'weighted avg': {'precision': 0.9279991134394574, 'recall': 0.92832, 'f1-score': 0.928020625988607, 'support': 50000}} epoch = 0.08 total_flos = 26906733281280000 ``` Yelp (sequence pairs) with 128, 256 and 512 (were 512 fails) ```python # cat yelp-pair-b_128_32_3/eval_results_same-b.txt # seqlen 128 eval_loss = 0.4788903475597093 eval_acc = 0.8130612708878027 eval_f1 = 0.8137388152678672 eval_acc_and_f1 = 0.813400043077835 eval_pearson = 0.6262220422479998 eval_spearmanr = 0.6262220422479998 eval_corr = 0.6262220422479998 eval_class_report = {'not same': {'precision': 0.8189660129967221, 
'recall': 0.8058966668552996, 'f1-score': 0.8123787792355962, 'support': 35342}, 'same': {'precision': 0.8072925445249733, 'recall': 0.8202888622481018, 'f1-score': 0.8137388152678672, 'support': 35034}, 'accuracy': 0.8130612708878027, 'macro avg': {'precision': 0.8131292787608477, 'recall': 0.8130927645517008, 'f1-score': 0.8130587972517317, 'support': 70376}, 'weighted avg': {'precision': 0.8131548231814548, 'recall': 0.8130612708878027, 'f1-score': 0.8130558211583339, 'support': 70376}} epoch = 3.0 total_flos = 71009559802626048 ``` ```python # cat yelp-pair-b_256_16_1/eval_results_same-b.txt # seqlen 256 eval_loss = 0.3369856428101318 eval_acc = 0.8494088893941116 eval_f1 = 0.8505977218901545 eval_acc_and_f1 = 0.850003305642133 eval_pearson = 0.6990572001217541 eval_spearmanr = 0.6990572001217481 eval_corr = 0.6990572001217511 eval_class_report = {'not same': {'precision': 0.8588791553054476, 'recall': 0.8377850715862147, 'f1-score': 0.8482009854474619, 'support': 35342}, 'same': {'precision': 0.840315302768648, 'recall': 0.8611348975281156, 'f1-score': 0.8505977218901545, 'support': 35034}, 'accuracy': 0.8494088893941116, 'macro avg': {'precision': 0.8495972290370477, 'recall': 0.8494599845571651, 'f1-score': 0.8493993536688083, 'support': 70376}, 'weighted avg': {'precision': 0.8496378513129752, 'recall': 0.8494088893941116, 'f1-score': 0.8493941090198912, 'support': 70376}} epoch = 1.0 total_flos = 47339706535084032 ``` ```python # cat yelp-pair-b_512_8_3/eval_results_same-b.txt # seqlen 512 # here it basically just predicts zeros all the time (as fas as I saw) eval_loss = 0.6931421184073636 eval_acc = 0.5021882459929522 eval_f1 = 0.0 eval_acc_and_f1 = 0.2510941229964761 eval_pearson = nan eval_spearmanr = nan eval_corr = nan eval_class_report = {'not same': {'precision': 0.5021882459929522, 'recall': 1.0, 'f1-score': 0.6686089407669461, 'support': 35342}, 'same': {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 35034}, 'accuracy': 0.5021882459929522, 'macro avg': {'precision': 0.2510941229964761, 'recall': 0.5, 'f1-score': 0.33430447038347305, 'support': 70376}, 'weighted avg': {'precision': 0.25219303441347785, 'recall': 0.5021882459929522, 'f1-score': 0.3357675512189583, 'support': 70376}} epoch = 3.0 total_flos = 284038239210504192 ``` Side note: I also ran Yelp with regression and it worked for 128 but for 512 the correlation was below 0.3 so it also failed again. And I worked on another (private) dataset with similar results... <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**: n/a
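Following up on the resolution described in the comments above (gradient accumulation to restore a larger effective batch at longer sequence lengths), here is a hedged sketch of the relevant arguments; the values are illustrative, not the poster's exact configuration.

```python
# Sketch of the gradient-accumulation setup mentioned in the comments
# (values are illustrative): effective batch of 64 at max_seq_length 256.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output/glue_STS-B_256_16_accum4",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=4,   # 16 * 4 = 64 samples per optimizer step
    learning_rate=2e-5,
    num_train_epochs=3.0,
)
```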
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7734/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7733
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7733/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7733/comments
https://api.github.com/repos/huggingface/transformers/issues/7733/events
https://github.com/huggingface/transformers/pull/7733
719,360,986
MDExOlB1bGxSZXF1ZXN0NTAxNTYxNzIw
7,733
[Prophetnet] Develop in parallel
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,602
1,651
1,605
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7733/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7733", "html_url": "https://github.com/huggingface/transformers/pull/7733", "diff_url": "https://github.com/huggingface/transformers/pull/7733.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7733.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/7732
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7732/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7732/comments
https://api.github.com/repos/huggingface/transformers/issues/7732/events
https://github.com/huggingface/transformers/pull/7732
719,353,096
MDExOlB1bGxSZXF1ZXN0NTAxNTU1MDk0
7,732
Fix #7731
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
MEMBER
null
Fixes #7731
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7732/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7732", "html_url": "https://github.com/huggingface/transformers/pull/7732", "diff_url": "https://github.com/huggingface/transformers/pull/7732.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7732.patch", "merged_at": 1602508253000 }
https://api.github.com/repos/huggingface/transformers/issues/7731
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7731/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7731/comments
https://api.github.com/repos/huggingface/transformers/issues/7731/events
https://github.com/huggingface/transformers/issues/7731
719,321,902
MDU6SXNzdWU3MTkzMjE5MDI=
7,731
Pytorch 1.6 DataParallel
{ "login": "NorthGuard", "id": 12046820, "node_id": "MDQ6VXNlcjEyMDQ2ODIw", "avatar_url": "https://avatars.githubusercontent.com/u/12046820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NorthGuard", "html_url": "https://github.com/NorthGuard", "followers_url": "https://api.github.com/users/NorthGuard/followers", "following_url": "https://api.github.com/users/NorthGuard/following{/other_user}", "gists_url": "https://api.github.com/users/NorthGuard/gists{/gist_id}", "starred_url": "https://api.github.com/users/NorthGuard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NorthGuard/subscriptions", "organizations_url": "https://api.github.com/users/NorthGuard/orgs", "repos_url": "https://api.github.com/users/NorthGuard/repos", "events_url": "https://api.github.com/users/NorthGuard/events{/privacy}", "received_events_url": "https://api.github.com/users/NorthGuard/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed by #7732!" ]
1,602
1,602
1,602
NONE
null
I get an error similar to that of [#4189](https://github.com/huggingface/transformers/issues/4189) and [#3936](https://github.com/huggingface/transformers/issues/3936), when using DataParallel with GPT2. The issue should be resolved in newer versions of transformers, but I still get an error. ## Environment info - `transformers` version: 3.3.1 - Platform: Linux-3.10.0-1062.18.1.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes CUDA compilation tools, release 10.2, V10.2.89 ### Who can help albert, bert, GPT2, XLM: @LysandreJik TextGeneration: @TevenLeScao ## Information Model I am using: GPT2 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behaviour: 1. Run forward on GPT2 with 2 GPUs ### Small example ```python import torch from torch.nn import DataParallel from transformers import GPT2Tokenizer, GPT2LMHeadModel device = "cuda:0" # Get model tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained("gpt2") model = DataParallel(model, device_ids=list(range(torch.cuda.device_count()))) model.to(device=device) # Run forward inputs = tokenizer(["This is an example"], return_tensors="pt") outputs = model( input_ids=inputs["input_ids"].to(device), attention_mask=inputs["attention_mask"].to(device), labels=inputs["input_ids"].to(device), ) print(f"outputs: {outputs}") print("Success.") ``` ### Output ``` Traceback (most recent call last): File "minimum_example.py", line 15, in <module> outputs = model( File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 155, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 165, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply output.reraise() File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/_utils.py", line 395, in reraise raise self.exc_type(msg) StopIteration: Caught StopIteration in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker output = module(*input, **kwargs) File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/user/.conda/envs/main/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 752, in forward transformer_outputs = self.transformer( File "/home/user/.conda/envs/main/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/user/.conda/envs/main/lib/python3.8/site-packages/transformers/modeling_gpt2.py", line 587, in forward attention_mask = attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility StopIteration ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I simply expect to get an output and no error :) <!-- A clear and concise description of what you would expect to happen. -->
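For context on the traceback above, here is a minimal, hedged sketch of why `next(self.parameters())` raises `StopIteration` inside `torch.nn.DataParallel` replicas on PyTorch 1.5/1.6, and one way a module can avoid it. The module and attribute names below are illustrative only; this is not the actual patch applied to `modeling_gpt2.py`.

```python
# Illustrative sketch only (not the transformers fix). Under nn.DataParallel on
# PyTorch 1.5/1.6, replica modules expose no registered parameters, so iterating
# self.parameters() inside forward() yields nothing and next() raises
# StopIteration. Reading the dtype from a concrete weight tensor still works,
# because replica parameters remain accessible as plain attributes.
import torch
import torch.nn as nn


class FragileHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        dtype = next(self.parameters()).dtype  # StopIteration in DP replicas
        return self.linear(x.to(dtype))


class RobustHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        dtype = self.linear.weight.dtype  # works inside replicas as well
        return self.linear(x.to(dtype))


if __name__ == "__main__":
    x = torch.randn(2, 4)
    print(RobustHead()(x).shape)  # torch.Size([2, 4])
    # Wrapping FragileHead in nn.DataParallel across two or more GPUs
    # reproduces the StopIteration shown in the traceback above.
```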
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7731/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7730
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7730/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7730/comments
https://api.github.com/repos/huggingface/transformers/issues/7730/events
https://github.com/huggingface/transformers/pull/7730
719,228,374
MDExOlB1bGxSZXF1ZXN0NTAxNDUxODUx
7,730
Upgrading in pipelines TFAutoModelWithLMHead to new Causal/Masked/Seq2Seq LM classes
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? Updates code to remove a deprecation warning. No tests as it simply removes current warnings from tests. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfuntowicz <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
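As a rough illustration of the kind of replacement this PR title describes (swapping the deprecated catch-all TF LM head class for the task-specific auto classes), a hedged sketch follows; the checkpoint names are common examples, not ones mandated by the PR.

```python
# Sketch only: replace the deprecated catch-all class with task-specific ones.
from transformers import (
    TFAutoModelForCausalLM,   # e.g. for "gpt2"
    TFAutoModelForMaskedLM,   # e.g. for "bert-base-uncased"
    TFAutoModelForSeq2SeqLM,  # e.g. for "t5-small"
)

# Before (emits a deprecation warning in recent versions):
# model = TFAutoModelWithLMHead.from_pretrained("gpt2")
# After:
model = TFAutoModelForCausalLM.from_pretrained("gpt2")
```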
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7730/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7730/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7730", "html_url": "https://github.com/huggingface/transformers/pull/7730", "diff_url": "https://github.com/huggingface/transformers/pull/7730.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7730.patch", "merged_at": 1602753968000 }
https://api.github.com/repos/huggingface/transformers/issues/7729
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7729/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7729/comments
https://api.github.com/repos/huggingface/transformers/issues/7729/events
https://github.com/huggingface/transformers/pull/7729
719,227,929
MDExOlB1bGxSZXF1ZXN0NTAxNDUxNDk3
7,729
Fix DeBERTa integration tests
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
MEMBER
null
Fixes the two DeBERTa integration tests. - The classification test doesn't actually make sense, since there is no classification head saved in `microsoft/deberta-base`. - Fixes the way the attention was initialized according to the pos type. closes https://github.com/huggingface/transformers/issues/7565 closes https://github.com/huggingface/transformers/pull/7645
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7729/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7729", "html_url": "https://github.com/huggingface/transformers/pull/7729", "diff_url": "https://github.com/huggingface/transformers/pull/7729.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7729.patch", "merged_at": 1602830954000 }
https://api.github.com/repos/huggingface/transformers/issues/7728
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7728/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7728/comments
https://api.github.com/repos/huggingface/transformers/issues/7728/events
https://github.com/huggingface/transformers/pull/7728
719,212,421
MDExOlB1bGxSZXF1ZXN0NTAxNDM4OTQ5
7,728
Improving Pipelines by defaulting to framework='tf' when pytorch seems unavailable.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Actually, This introduced a pretty big bug where\r\n\r\n```python\r\nnlp = pipeline(task) \r\n```\r\n\r\nWould not work anymore. I tried a different solution around that." ]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? When loading a model that is `tf`-only by passing only the model name as a string, without a framework argument, the pipeline would fail with an odd error message: ```python >>> transformers.AutoModel.from_pretrained('Narsil/small') OSError: Can't load weights for 'Narsil/small'. Make sure that: - 'Narsil/small' is a correct model identifier listed on 'https://huggingface.co/models' (It exists and contains tf_model.h5) - or 'Narsil/small' is the correct path to a directory containing a file named one of pytorch_model.bin, tf_model.h5, model.ckpt. ``` This PR corrects the `get_framework` step, which happens very early in the pipeline, to detect the type of model automatically. It does trigger an early download, but that will happen anyway later. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @mfuntowicz <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
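A rough sketch of the kind of framework detection described above; the function name, file names, and fallback order are assumptions for illustration only, not the actual `get_framework` implementation.

```python
# Hypothetical helper: pick "pt" or "tf" depending on which weight files a
# local model directory contains, instead of assuming PyTorch and failing with
# the misleading OSError quoted above.
import os


def guess_framework(model_dir: str) -> str:
    has_pt = os.path.isfile(os.path.join(model_dir, "pytorch_model.bin"))
    has_tf = os.path.isfile(os.path.join(model_dir, "tf_model.h5"))
    if has_pt:
        return "pt"
    if has_tf:
        return "tf"
    raise EnvironmentError(
        f"Neither pytorch_model.bin nor tf_model.h5 found in {model_dir}"
    )
```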
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7728/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7728", "html_url": "https://github.com/huggingface/transformers/pull/7728", "diff_url": "https://github.com/huggingface/transformers/pull/7728.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7728.patch", "merged_at": 1602747728000 }
https://api.github.com/repos/huggingface/transformers/issues/7727
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7727/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7727/comments
https://api.github.com/repos/huggingface/transformers/issues/7727/events
https://github.com/huggingface/transformers/issues/7727
719,211,157
MDU6SXNzdWU3MTkyMTExNTc=
7,727
What is the perplexity of distilbert-base-uncased?
{ "login": "OleNet", "id": 3206718, "node_id": "MDQ6VXNlcjMyMDY3MTg=", "avatar_url": "https://avatars.githubusercontent.com/u/3206718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OleNet", "html_url": "https://github.com/OleNet", "followers_url": "https://api.github.com/users/OleNet/followers", "following_url": "https://api.github.com/users/OleNet/following{/other_user}", "gists_url": "https://api.github.com/users/OleNet/gists{/gist_id}", "starred_url": "https://api.github.com/users/OleNet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OleNet/subscriptions", "organizations_url": "https://api.github.com/users/OleNet/orgs", "repos_url": "https://api.github.com/users/OleNet/repos", "events_url": "https://api.github.com/users/OleNet/events{/privacy}", "received_events_url": "https://api.github.com/users/OleNet/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @OleNet ,\r\n\r\nYou probably have a better chance of getting an anwser by posting your question on the discussion forum here: `http://discuss.huggingface.co/`.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
# ❓ Questions & Help ## Details In the [readme](https://github.com/huggingface/transformers/tree/master/examples/distillation) , it is said that distilbert-base-uncased is pretraind on the same data used to pretrain Bert, so I wonder what is the final perplexity or cross entropy of the pretrain?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7727/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7726
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7726/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7726/comments
https://api.github.com/repos/huggingface/transformers/issues/7726/events
https://github.com/huggingface/transformers/pull/7726
719,209,343
MDExOlB1bGxSZXF1ZXN0NTAxNDM2NDY5
7,726
Do not softmax when num_labels==1
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,602
1,602
MEMBER
null
When `config.num_labels=1`, the text classification pipeline shouldn't softmax over the results as it will always return 1. Instead, when the number of labels is 1, this will run a sigmoid over the result. cf https://github.com/huggingface/transformers/issues/7493#issuecomment-706413792
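A minimal numerical sketch of the point above, assuming logits of shape `(batch, num_labels)`: with a single logit per example, a softmax over that axis is identically 1, so a sigmoid gives a usable score instead. This is an illustration, not the pipeline's actual code.

```python
import numpy as np


def classification_scores(logits: np.ndarray, num_labels: int) -> np.ndarray:
    """Toy post-processing mirroring the behaviour described above."""
    if num_labels == 1:
        # softmax over a single logit would always be 1.0, so use a sigmoid
        return 1.0 / (1.0 + np.exp(-logits))
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)


print(classification_scores(np.array([[2.3]]), num_labels=1))       # ~0.909
print(classification_scores(np.array([[2.3, 0.1]]), num_labels=2))  # sums to 1
```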
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7726/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7726", "html_url": "https://github.com/huggingface/transformers/pull/7726", "diff_url": "https://github.com/huggingface/transformers/pull/7726.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7726.patch", "merged_at": 1602596548000 }
https://api.github.com/repos/huggingface/transformers/issues/7725
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7725/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7725/comments
https://api.github.com/repos/huggingface/transformers/issues/7725/events
https://github.com/huggingface/transformers/issues/7725
719,204,750
MDU6SXNzdWU3MTkyMDQ3NTA=
7,725
How can we replace/swap the Wikipedia data with our custom data for knowledge retrieval in the RAG model, and what is the format of the retrieval data?
{ "login": "Madhuri05thorat", "id": 56575589, "node_id": "MDQ6VXNlcjU2NTc1NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/56575589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Madhuri05thorat", "html_url": "https://github.com/Madhuri05thorat", "followers_url": "https://api.github.com/users/Madhuri05thorat/followers", "following_url": "https://api.github.com/users/Madhuri05thorat/following{/other_user}", "gists_url": "https://api.github.com/users/Madhuri05thorat/gists{/gist_id}", "starred_url": "https://api.github.com/users/Madhuri05thorat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Madhuri05thorat/subscriptions", "organizations_url": "https://api.github.com/users/Madhuri05thorat/orgs", "repos_url": "https://api.github.com/users/Madhuri05thorat/repos", "events_url": "https://api.github.com/users/Madhuri05thorat/events{/privacy}", "received_events_url": "https://api.github.com/users/Madhuri05thorat/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "I am also playing with RAG model and I am trying to understand how to replace the Wikipedia data with my custom data. From transformes/model-cards/facebook/rag-sequence-nq I see that the train dataset is wiki_dpr. So I loaded it with the following\r\n`\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"wiki_dpr\")\r\n`\r\nThe dataset is loaded with arrow into RAM(it's prety big, 75 GB). I was wandering, the custom dataset must have the same format as `wiki_dpr` ? If you can help with a tutorial on how to replace wiki dataset with a custom one, it will be very helpful. Thank you :D", "I think @lhoestq is working on this right now? ", "Yes indeed. I'll create the PR later today to allow users to use their own data. I'll also add code examples", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Hi, Sorry to bump a closed issue but it's the first one in a google search for Rag (or DPR) and huggingface. \r\n\r\nCould we please have links here for the code exemples or informations about custom datas ? \r\n\r\nThanks a lot :smiley: ", "Hi !\r\nSure. You can find more information in [the RAG examples readme](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#use-your-own-knowledge-source). Also feel free to take a look at [the code that shows how to do it step by step](https://github.com/huggingface/transformers/blob/master/examples/research_projects/rag/use_own_knowledge_dataset.py).", "from this link using snippet of code to download the Wikipedia data.(https://huggingface.co/datasets/wiki_dpr/blob/main/wiki_dpr.py)\r\n\r\nthis is the command to download all 50 batches of data : !wget https://dl.fbaipublicfiles.com/dpr/data/wiki_encoded/single/nq/wiki_passages_{1..50}.\r\n\r\nThen based on your system config merge the batches and go further.\r\n\r\n" ]
1,602
1,662
1,608
NONE
null
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7725/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7724
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7724/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7724/comments
https://api.github.com/repos/huggingface/transformers/issues/7724/events
https://github.com/huggingface/transformers/pull/7724
719,192,969
MDExOlB1bGxSZXF1ZXN0NTAxNDIzNTQw
7,724
Fix tf text class
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,602
1,686
1,602
CONTRIBUTOR
null
# What does this PR do? This PR fixes an issue in `run_tf_text_classification.py` where the tokenizer raised an error when the CSV file had 3 columns. Fixes #7706
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7724/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7724", "html_url": "https://github.com/huggingface/transformers/pull/7724", "diff_url": "https://github.com/huggingface/transformers/pull/7724.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7724.patch", "merged_at": 1602506716000 }
https://api.github.com/repos/huggingface/transformers/issues/7723
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7723/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7723/comments
https://api.github.com/repos/huggingface/transformers/issues/7723/events
https://github.com/huggingface/transformers/issues/7723
719,184,340
MDU6SXNzdWU3MTkxODQzNDA=
7,723
T5: Finetuning the language modelling objective on a new dataset
{ "login": "sb1992", "id": 10261100, "node_id": "MDQ6VXNlcjEwMjYxMTAw", "avatar_url": "https://avatars.githubusercontent.com/u/10261100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sb1992", "html_url": "https://github.com/sb1992", "followers_url": "https://api.github.com/users/sb1992/followers", "following_url": "https://api.github.com/users/sb1992/following{/other_user}", "gists_url": "https://api.github.com/users/sb1992/gists{/gist_id}", "starred_url": "https://api.github.com/users/sb1992/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sb1992/subscriptions", "organizations_url": "https://api.github.com/users/sb1992/orgs", "repos_url": "https://api.github.com/users/sb1992/repos", "events_url": "https://api.github.com/users/sb1992/events{/privacy}", "received_events_url": "https://api.github.com/users/sb1992/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "have a look on: https://github.com/huggingface/transformers/tree/master/examples\r\n\r\nfor conditional generation in the folder seq2seq is an example for finetuning", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,602
1,608
1,608
NONE
null
# ❓ Questions & Help I was wondering if there is a way I could fine tune T5 model on my own dataset. There are scripts for fine tuning help for GPT2, BERT and XLNET in language modelling examples, so was thinking if that could be extended to T5 as well? <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7723/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7723/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/7722
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7722/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7722/comments
https://api.github.com/repos/huggingface/transformers/issues/7722/events
https://github.com/huggingface/transformers/pull/7722
719,150,110
MDExOlB1bGxSZXF1ZXN0NTAxMzg5MjYy
7,722
Create Model-card for LIMIT-BERT
{ "login": "cooelf", "id": 7037265, "node_id": "MDQ6VXNlcjcwMzcyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/7037265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cooelf", "html_url": "https://github.com/cooelf", "followers_url": "https://api.github.com/users/cooelf/followers", "following_url": "https://api.github.com/users/cooelf/following{/other_user}", "gists_url": "https://api.github.com/users/cooelf/gists{/gist_id}", "starred_url": "https://api.github.com/users/cooelf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cooelf/subscriptions", "organizations_url": "https://api.github.com/users/cooelf/orgs", "repos_url": "https://api.github.com/users/cooelf/repos", "events_url": "https://api.github.com/users/cooelf/events{/privacy}", "received_events_url": "https://api.github.com/users/cooelf/received_events", "type": "User", "site_admin": false }
[ { "id": 1838412367, "node_id": "MDU6TGFiZWwxODM4NDEyMzY3", "url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card", "name": "model card", "color": "92d5f4", "default": false, "description": "Related to pretrained model cards" } ]
closed
false
null
[]
[]
1,602
1,602
1,602
CONTRIBUTOR
null
# What does this PR do? Adds a model card Readme for the LIMIT-BERT language model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7722/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7722", "html_url": "https://github.com/huggingface/transformers/pull/7722", "diff_url": "https://github.com/huggingface/transformers/pull/7722.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7722.patch", "merged_at": 1602694573000 }