Dataset columns (name: dtype, observed range or class count):
url: stringlengths (62 to 66)
repository_url: stringclasses (1 value)
labels_url: stringlengths (76 to 80)
comments_url: stringlengths (71 to 75)
events_url: stringlengths (69 to 73)
html_url: stringlengths (50 to 56)
id: int64 (377M to 2.15B)
node_id: stringlengths (18 to 32)
number: int64 (1 to 29.2k)
title: stringlengths (1 to 487)
user: dict
labels: list
state: stringclasses (2 values)
locked: bool (2 classes)
assignee: dict
assignees: list
comments: list
created_at: int64 (1.54k to 1.71k)
updated_at: int64 (1.54k to 1.71k)
closed_at: int64 (1.54k to 1.71k)
author_association: stringclasses (4 values)
active_lock_reason: stringclasses (2 values)
body: stringlengths (0 to 234k)
reactions: dict
timeline_url: stringlengths (71 to 75)
state_reason: stringclasses (3 values)
draft: bool (2 classes)
pull_request: dict
https://api.github.com/repos/huggingface/transformers/issues/12645
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12645/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12645/comments
https://api.github.com/repos/huggingface/transformers/issues/12645/events
https://github.com/huggingface/transformers/issues/12645
941,991,374
MDU6SXNzdWU5NDE5OTEzNzQ=
12,645
`TFHubertModelTest.test_model_from_pretrained` is failing
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "Added in [`cbd73bd#huggingface.co`](https://huggingface.co/facebook/hubert-base-ls960/commit/cbd73bd518689ab98a9370c74165f4875ccd97d5)" ]
1,626
1,626
1,626
MEMBER
null
The `TFHubertModelTest.test_model_from_pretrained` test is failing because the TensorFlow checkpoint isn't available under `facebook/hubert-base-ls960`. The same applies to `TFHubertRobustModelTest.test_model_from_pretrained`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12645/timeline
completed
null
null
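Illustrative aside on the record above (not part of the original issue): the resolution was to upload the missing TensorFlow weights to the Hub repo, per the linked commit. In the meantime, a sketch of how the TF model could still be instantiated, assuming the PyTorch checkpoint is present, is the standard `from_pt=True` escape hatch:

```python
# Illustrative workaround only, assuming the PyTorch checkpoint exists on the Hub:
# `from_pt=True` converts the PyTorch weights to TensorFlow at load time, so the
# TF model can be built even while the native TF checkpoint is missing.
from transformers import TFHubertModel

model = TFHubertModel.from_pretrained("facebook/hubert-base-ls960", from_pt=True)
```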
https://api.github.com/repos/huggingface/transformers/issues/12644
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12644/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12644/comments
https://api.github.com/repos/huggingface/transformers/issues/12644/events
https://github.com/huggingface/transformers/pull/12644
941,970,767
MDExOlB1bGxSZXF1ZXN0Njg3NzY1MDU5
12,644
Only test the files impacted by changes in the diff
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What happens if one only changes a file like `src/transformers/models/bert/__init__.py`? The only files that are impacted by this is `src/transformers/__init__` -> does this mean no tests are run? What if someone introduces a typo in this file that doesn't allow anymore to import `BertModel`? ", "Ah great point, I was supposed to add the direct_deps when the file is an init (so that if you change the bert init, it runs the model and tokenizers tests) but forgot! Will make a PR this afternoon!" ]
1,626
1,626
1,626
COLLABORATOR
null
# What does this PR do? This PR adds some utilities to only run the tests that are impacted by the diff in a PR, to have the CI run faster, save on CI costs and avoid hanging tests. For now the first stage of deployment only concerns PRs; the jobs run at each push (either on circle CI or GitHub actions) still run all the tests. To make this work, the new utility `tests_fetcher.py` works in three stages: 1. It analyzes the diff to grab the added/deleted/modified files 2. It builds an internal map that contains, for each module, all the other modules that depend on it (recursively). For instance `trainer` depends on `trainer_utils`, so that map says `trainer_utils` impacts `trainer`. It is recursive, so since `trainer_utils` depends on `file_utils`, the map says `file_utils` impacts `trainer_utils` and `trainer`. 3. It maps all the impacted files to their corresponding test files. Note that some files in the library may not have direct test files (for instance a model configuration file has no direct test), but with the impacted files computed above, a model configuration file impacts the corresponding modeling file, so changing a model configuration will run the tests of that model. The resulting test files are then run for each of the test jobs in circle CI. Note that for some jobs (like text_examples and test_custom_tokenizer) we just check that there are at least some tests to run (so no trivial diff) but still run all the tests like before. In all the jobs, the output of `tests_fetcher.py` is saved as an artifact for future debugging.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12644/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12644", "html_url": "https://github.com/huggingface/transformers/pull/12644", "diff_url": "https://github.com/huggingface/transformers/pull/12644.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12644.patch", "merged_at": 1626274615000 }
https://api.github.com/repos/huggingface/transformers/issues/12643
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12643/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12643/comments
https://api.github.com/repos/huggingface/transformers/issues/12643/events
https://github.com/huggingface/transformers/issues/12643
941,900,305
MDU6SXNzdWU5NDE5MDAzMDU=
12,643
Adding an argument to exclude some states (pretrained weights) from being loaded.
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm in favor of this. For example, when I wanted to fine-tune DETR, I had to use the following hack to make it work:\r\n\r\n```\r\nmodel = DetrForObjectDetection.from_pretrained(\"facebook/detr-resnet-50\")\r\nstate_dict = model.state_dict()\r\n# Remove class weights\r\ndel state_dict[\"class_labels_classifier.weight\"]\r\ndel state_dict[\"class_labels_classifier.bias\"]\r\n# define new model with custom class classifier\r\nconfig = DetrConfig.from_pretrained(\"facebook/detr-resnet-50\", num_labels=10)\r\nmodel = DetrForObjectDetection(config)\r\nmodel.load_state_dict(state_dict, strict=False)\r\n```\r\n\r\nThis is because DETR has a head that has been fine-tuned on COCO, and it has 91 classes. However, when fine-tuning on my custom dataset, let's say it has 10 labels, then the classification head needs to be replaced, which is what I do above.\r\n\r\nIt would be easier if I could just do (with an API similar to what you propose above):\r\n\r\n```\r\nmodel = DetrForObjectDetection.from_pretrained(\"facebook/detr-resnet-50\", num_labels=10, excluded_keys = [\r\n \"class_labels_classifier.weight\",\r\n \"class_labels_classifier.bias\",\r\n ]\r\n)\r\n```\r\n\r\nThis way, you can easily replace the head of an already fine-tuned model with your custom classification head.\r\n\r\ncc @patil-suraj @LysandreJik @patrickvonplaten @sgugger ", "Interesting proposal! I would also be in favor of this to enable @qqaatw's use-case.\r\n\r\nFor your use-case @NielsRogge, while the proposal would also work, I'd favor something much simpler as it's more common to want to drop a head to load in the same architecture but with a randomly initialized layer. With the proposal here, it implies knowing the weight names and manually specifying them.\r\n\r\nIt can also be achieved with\r\n\r\n```\r\nmodel = DetrModel.from_pretrained(\"facebook/detr-resnet-50\")\r\nmodel.save_pretrained(\"directory\")\r\n\r\nmodel = DetrForObjectDetection.from_pretrained(\"directory\")\r\n```\r\nwhich will randomly initialize all layers that are new in `DetrForObjectDetection`.\r\n\r\nFor your use-case in particular Niels, I would be in favor of having an API like the following:\r\n```\r\nmodel = DetrForObjectDetection.from_pretrained(\"facebook/detr-resnet-50\", load_head=False)\r\n```\r\nIt would imply being aware of the head layers, which would probably be achieved by using a `model.get_classification_head` similar to the existing `model.get_output_embeddings` method.", "We have already discussed the second use case internally, and I think we came to the conclusion @NielsRogge code should work with a warning (like the ones we get when the head is different because we load a checkpoint for a task on another task).\r\n\r\nThe other use case presented in this issue is also interesting. Do we really need to add a new argument for it? We could treat it the same way: when trying to load the weights and there is a size mismatch, just ignore the weights and put them in the warning raised by the `from_pretrained` method.", "@LysandreJik I tried your first code block, however, `DetrForObjectDetection` has 2 heads (one for class labels, one for bounding boxes), and one typically only wants to randomly initialize the class labels classifier (and further train the bounding box regressor head). 
However, your code only works if you want to randomly initialize both heads (it prints the following warning):\r\n\r\n```\r\nSome weights of the model checkpoint at facebook/detr-resnet-50 were not used when initializing DetrModel: ['bbox_predictor.layers.1.bias', 'bbox_predictor.layers.0.weight', 'bbox_predictor.layers.0.bias', 'class_labels_classifier.bias', 'bbox_predictor.layers.1.weight', 'class_labels_classifier.weight', 'bbox_predictor.layers.2.weight', 'bbox_predictor.layers.2.bias']\r\n- This IS expected if you are initializing DetrModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing DetrModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of DetrForObjectDetection were not initialized from the model checkpoint at directory and are newly initialized: ['bbox_predictor.layers.1.bias', 'bbox_predictor.layers.0.weight', 'bbox_predictor.layers.0.bias', 'class_labels_classifier.bias', 'bbox_predictor.layers.1.weight', 'class_labels_classifier.weight', 'bbox_predictor.layers.2.weight', 'bbox_predictor.layers.2.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nOf course, I don't think there are many models in the library right now that have multiple heads, but DETR is one such model.\r\n\r\n\r\n", "After discussing offline with @LysandreJik we will add a `ignore_mismatched_size` flag to `from_pretrained`. When activated, weights that don't have the right size will be ignored, which should cover both the use cases in this issue.\r\n\r\nI will work on this today.", "The feature has been implemented in #12664. Thanks @sgugger " ]
1,626
1,626
1,626
CONTRIBUTOR
null
# 🚀 Feature request Adding an argument in `from_pretrained` to exclude some states (pretrained weights) from being loaded. ## Motivation In general, we usually use `from_pretrained` method to load pretrained states, from CDN or local files, into the model. However, In case when I need to adjust the shape of certain layers (submodules), the errors will be raised due to mismatched shapes. For example, in the following snippets, I changed the embedding_size of Electra in order to tie the same embeddings as BERT in the subsequent code, but due to the mismatched shapes, many RuntimeErrors were raised in `module._load_from_state_dict`. ``` from transformers import BertModel, BertConfig, ElectraModel, ElectraConfig bert_config = BertConfig.from_pretrained('bert-base-uncased') bert_model = BertModel.from_pretrained('bert-base-uncased') electra_config = ElectraConfig.from_pretrained( 'google/electra-small-generator', embedding_size=bert_config.hidden_size ) electra_model = ElectraModel.from_pretrained('google/electra-small-generator', config=electra_config) ``` ``` Exception has occurred: RuntimeError Error(s) in loading state_dict for ElectraModel: size mismatch for electra.embeddings.word_embeddings.weight: copying a param with shape torch.Size([30522, 128]) from checkpoint, the shape in current model is torch.Size([30522, 768]). size mismatch for electra.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 128]) from checkpoint, the shape in current model is torch.Size([512, 768]). size mismatch for electra.embeddings.token_type_embeddings.weight: copying a param with shape torch.Size([2, 128]) from checkpoint, the shape in current model is torch.Size([2, 768]). size mismatch for electra.embeddings.LayerNorm.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([768]). size mismatch for electra.embeddings.LayerNorm.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([768]). size mismatch for electra.embeddings_project.weight: copying a param with shape torch.Size([256, 128]) from checkpoint, the shape in current model is torch.Size([256, 768]). ``` Therefore, I think it would be better to add an argument like `excluded_keys` (as the following example) in `from_pretrained` to explicitly prevent certain states from being loaded or add an argument to automatically have the states with mismatched shapes not loaded. I know there are some workarounds such as loading all states first then tying each weight respectively, but that will result in a long and not concise code segment. Example: ``` electra_model = ElectraModel.from_pretrained( 'google/electra-small-generator', config=electra_config, excluded_keys = [ "electra.embeddings.word_embeddings.weight", "electra.embeddings.position_embeddings.weight", "electra.embeddings.token_type_embeddings.weight", "electra.embeddings.LayerNorm.weight", "electra.embeddings.LayerNorm.bias", "electra.embeddings_project.weight", "generator_predictions.LayerNorm.weight", "generator_predictions.LayerNorm.bias", "generator_predictions.dense.weight", "generator_predictions.dense.bias", "generator_lm_head.weight" ] ) ``` ## Your contribution If there is no other concern, and no one is implementing similar features, I would be happy to submit a PR for this. Any thoughts are welcomed :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12643/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12643/timeline
completed
null
null
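Illustrative sketch of the resolution mentioned in the last comment of the record above: a flag for ignoring size-mismatched weights was added in #12664. The keyword below is the plural spelling the feature shipped under (`ignore_mismatched_sizes`), slightly different from the singular spelling used in the comment:

```python
# Sketch of the use case from the issue body, using the flag that was eventually added:
# checkpoint weights whose shapes no longer match the configured model are skipped
# (and listed in a warning) instead of raising RuntimeError in _load_from_state_dict.
from transformers import BertConfig, ElectraConfig, ElectraModel

bert_config = BertConfig.from_pretrained("bert-base-uncased")
electra_config = ElectraConfig.from_pretrained(
    "google/electra-small-generator", embedding_size=bert_config.hidden_size
)
electra_model = ElectraModel.from_pretrained(
    "google/electra-small-generator",
    config=electra_config,
    ignore_mismatched_sizes=True,  # mismatched embedding/projection weights stay randomly initialized
)
```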
https://api.github.com/repos/huggingface/transformers/issues/12642
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12642/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12642/comments
https://api.github.com/repos/huggingface/transformers/issues/12642/events
https://github.com/huggingface/transformers/issues/12642
941,878,787
MDU6SXNzdWU5NDE4Nzg3ODc=
12,642
"token_type_ids" is discarded when using GenerationMixin in ”generation_utils.py“
{ "login": "nanzhao", "id": 4546867, "node_id": "MDQ6VXNlcjQ1NDY4Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/4546867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nanzhao", "html_url": "https://github.com/nanzhao", "followers_url": "https://api.github.com/users/nanzhao/followers", "following_url": "https://api.github.com/users/nanzhao/following{/other_user}", "gists_url": "https://api.github.com/users/nanzhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/nanzhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nanzhao/subscriptions", "organizations_url": "https://api.github.com/users/nanzhao/orgs", "repos_url": "https://api.github.com/users/nanzhao/repos", "events_url": "https://api.github.com/users/nanzhao/events{/privacy}", "received_events_url": "https://api.github.com/users/nanzhao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your issue @nanzhao! In order make `OpenAIGPTLMHeadModel` work with `token_type_ids` we should add this line to `prepare_inputs_for_generation`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/790f1c9545f4a83b97bf75640be82b2112c3efe7/src/transformers/models/gpt2/modeling_gpt2.py#L884", "Would you like to open a PR and give it a try? :-)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,630
1,630
NONE
null
## Environment info - `transformers` version: 4.8.2 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.6.10 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help @patrickvonplaten @yjernite ## Information Model I am using OpenAIGPTLMHeadModel: I try to use class GenerationMixin in ”generation_utils.py“ to generate words for my pre-trained openai-gpt model, but I find a model performance degradation. My generation code segment is like this and I need put "input_ids" and "token_type_ids" for my gpt model: ` input_ids = torch.tensor(instance["input_ids"], dtype=torch.long, device=args.device).unsqueeze(0)` ` token_type_ids = torch.tensor(instance["token_type_ids"], dtype=torch.long, device=args.device).unsqueeze(0)` ` paras = {"token_type_ids": token_type_ids}` ` bos, eos, pad, speaker1, speaker2 = tokenizer.convert_tokens_to_ids(SPECIAL_TOKENS)` ` chat_history_ids = model.generate( input_ids, **paras, max_length=128, min_length=5, num_beams=1, pad_token_id=pad, use_cache=True, eos_token_id=eos, temperature=0.7, bos_token_id=bos, top_p=0.9, top_k=30, do_sample=True, repetition_penalty=1.03).cpu() ` But I find when it call forward in class OpenAIGPTLMHeadModel(OpenAIGPTPreTrainedModel), token_type_ids in forward is None. Although I already put "token_type_ids" in model.generate() with **paras. ## Expected behavior it seems the bug is here. This function in generation_utils.py discards my token_type_ids here: `def prepare_inputs_for_generation(self, input_ids: torch.LongTensor, **kwargs) -> Dict[str, Any]:` `return {"input_ids": input_ids}`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12642/timeline
completed
null
null
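Illustrative sketch of the workaround implied by the first comment in the record above: override `prepare_inputs_for_generation` so that `token_type_ids` passed to `generate()` are actually forwarded at each decoding step. This subclass is an example only, not the change that was merged upstream:

```python
import torch
from transformers import OpenAIGPTLMHeadModel

class OpenAIGPTLMHeadModelWithSegments(OpenAIGPTLMHeadModel):
    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        token_type_ids = kwargs.get("token_type_ids")
        if token_type_ids is not None and token_type_ids.shape[1] < input_ids.shape[1]:
            # Extend the segment ids for freshly generated tokens by repeating the last segment id.
            pad = token_type_ids[:, -1:].expand(-1, input_ids.shape[1] - token_type_ids.shape[1])
            token_type_ids = torch.cat([token_type_ids, pad], dim=-1)
        return {"input_ids": input_ids, "token_type_ids": token_type_ids}
```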
https://api.github.com/repos/huggingface/transformers/issues/12641
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12641/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12641/comments
https://api.github.com/repos/huggingface/transformers/issues/12641/events
https://github.com/huggingface/transformers/issues/12641
941,786,331
MDU6SXNzdWU5NDE3ODYzMzE=
12,641
USE_TORCH while import transformers forever true
{ "login": "Talisberg", "id": 35142612, "node_id": "MDQ6VXNlcjM1MTQyNjEy", "avatar_url": "https://avatars.githubusercontent.com/u/35142612?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Talisberg", "html_url": "https://github.com/Talisberg", "followers_url": "https://api.github.com/users/Talisberg/followers", "following_url": "https://api.github.com/users/Talisberg/following{/other_user}", "gists_url": "https://api.github.com/users/Talisberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/Talisberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Talisberg/subscriptions", "organizations_url": "https://api.github.com/users/Talisberg/orgs", "repos_url": "https://api.github.com/users/Talisberg/repos", "events_url": "https://api.github.com/users/Talisberg/events{/privacy}", "received_events_url": "https://api.github.com/users/Talisberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am unsure of what the bug is and the reproducer you are suggesting. You are not supposed to change the source code of the library to activate `USE_TORCH`, you should set it as an environment variable.", "Please close issue,\nSomething wrong with my env\n\nבתאריך יום ב׳, 12 ביולי 2021 ב-17:33 מאת Sylvain Gugger <\n***@***.***>:\n\n> I am unsure of what the bug is and the reproducer you are suggesting. You\n> are not supposed to change the source code of the library to activate\n> USE_TORCH, you should set it as an environment variable.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12641#issuecomment-878330631>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIMDXVAVVDRGGLYXT6BEBV3TXL4L5ANCNFSM5AGJ4YIQ>\n> .\n>\n" ]
1,626
1,626
1,626
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.0 - Platform: linux - Python version: 3.6 - PyTorch version (GPU?):1.8.1(No) - Tensorflow version (GPU?):2.0.1(No) - Using GPU in script?:No - Using distributed or parallel set-up in script?:No ### Who can help @sgugger @Rocketknight1 <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...):AutoTokenizer(Bert) The problem arises when using: * Importing transformers while Using TensorFlow backend error arises : ```transformers/file_utils.py``` ```if _torch_available: 280 torch_version = version.parse(importlib_metadata.version("torch")) --> 281 _torch_fx_available = (torch_version.major, torch_version.minor) == ( 282 TORCH_FX_REQUIRED_VERSION.major, 283 TORCH_FX_REQUIRED_VERSION.minor, AttributeError: 'Version' object has no attribute 'major' ``` The tasks I am working on is: * [ ] trying to use ```train_new_from_iterator``` on top of ```bert-base-cased``` * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. https://github.com/huggingface/transformers/pull/9441 2. row 67, forever True (```USE_TORCH```) So cannot reach (```USE_TF```) , row 80 <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior allow transformers import, when using TF <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12641/timeline
completed
null
null
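Illustrative sketch of the maintainer's point in the record above: `USE_TF` / `USE_TORCH` are environment variables read when transformers is imported, so they should be set before the import rather than by editing `file_utils.py` (the values shown are examples):

```python
import os

# Must be set before transformers is imported; truthy values such as "1" enable a backend.
os.environ["USE_TF"] = "1"     # prefer the TensorFlow code paths
os.environ["USE_TORCH"] = "0"  # skip the PyTorch-specific initialization

import transformers  # noqa: E402  (deliberately imported after setting the flags)
```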
https://api.github.com/repos/huggingface/transformers/issues/12640
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12640/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12640/comments
https://api.github.com/repos/huggingface/transformers/issues/12640/events
https://github.com/huggingface/transformers/pull/12640
941,763,741
MDExOlB1bGxSZXF1ZXN0Njg3NTg5MTU4
12,640
fix typo in modeling_t5.py docstring
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
CONTRIBUTOR
null
fixes a small typo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12640/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12640", "html_url": "https://github.com/huggingface/transformers/pull/12640", "diff_url": "https://github.com/huggingface/transformers/pull/12640.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12640.patch", "merged_at": 1626107072000 }
https://api.github.com/repos/huggingface/transformers/issues/12639
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12639/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12639/comments
https://api.github.com/repos/huggingface/transformers/issues/12639/events
https://github.com/huggingface/transformers/pull/12639
941,752,316
MDExOlB1bGxSZXF1ZXN0Njg3NTc5NDE5
12,639
Refactored code to improve performance/employ best practices.
{ "login": "AllStars101-sudo", "id": 53670363, "node_id": "MDQ6VXNlcjUzNjcwMzYz", "avatar_url": "https://avatars.githubusercontent.com/u/53670363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AllStars101-sudo", "html_url": "https://github.com/AllStars101-sudo", "followers_url": "https://api.github.com/users/AllStars101-sudo/followers", "following_url": "https://api.github.com/users/AllStars101-sudo/following{/other_user}", "gists_url": "https://api.github.com/users/AllStars101-sudo/gists{/gist_id}", "starred_url": "https://api.github.com/users/AllStars101-sudo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AllStars101-sudo/subscriptions", "organizations_url": "https://api.github.com/users/AllStars101-sudo/orgs", "repos_url": "https://api.github.com/users/AllStars101-sudo/repos", "events_url": "https://api.github.com/users/AllStars101-sudo/events{/privacy}", "received_events_url": "https://api.github.com/users/AllStars101-sudo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, and thank you for your contribution! It seems you were using a very outdated fork in order to open your PR (see the +5k -56k diff), as several files end up deleted and a lot of library improvements would be reverted were we to merge this PR.\r\n\r\nIn case these changes were intentional, let me point you to the following documentation: [**philosophy**](https://huggingface.co/transformers/philosophy.html).\r\n\r\nThe main idea is that all model/tokenizer code should be independent from other models/tokenizers. Manually editing, removing, or adding to a model or tokenizer should not impact any other whatsoever. This further allows to reduce the number of abstractions and gives access to the near-raw PyTorch/TensorFlow/Flax code of the models.\r\n\r\nFinally, we have built [tools](https://github.com/huggingface/transformers/tree/master/utils) so that our maintenance isn't elevated by the high amount of duplicated code and so that our code coverage remains complete.\r\n\r\nThank you for your effort!", "> Hello, and thank you for your contribution! It seems you were using a very outdated fork in order to open your PR (see the +5k -56k diff), as several files end up deleted and a lot of library improvements would be reverted were we to merge this PR.\r\n> \r\n> In case these changes were intentional, let me point you to the following documentation: [**philosophy**](https://huggingface.co/transformers/philosophy.html).\r\n> \r\n> The main idea is that all model/tokenizer code should be independent from other models/tokenizers. Manually editing, removing, or adding to a model or tokenizer should not impact any other whatsoever. This further allows to reduce the number of abstractions and gives access to the near-raw PyTorch/TensorFlow/Flax code of the models.\r\n> \r\n> Finally, we have built [tools](https://github.com/huggingface/transformers/tree/master/utils) so that our maintenance isn't elevated by the high amount of duplicated code and so that our code coverage remains complete.\r\n> \r\n> Thank you for your effort!\r\n\r\nRight, got it. I'll close this PR and create another one keeping the changes you suggested in mind. Thanks!" ]
1,626
1,626
1,626
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Refactors several segments of code in the ```scripts```,```src```,```tests```,```utils``` and ```setup.py``` and increases performance by a bit, using compression methods and newer practices.<br> No new functions or methods/models added, therefore no documentation changes were required. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12639/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12639", "html_url": "https://github.com/huggingface/transformers/pull/12639", "diff_url": "https://github.com/huggingface/transformers/pull/12639.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12639.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12638
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12638/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12638/comments
https://api.github.com/repos/huggingface/transformers/issues/12638/events
https://github.com/huggingface/transformers/pull/12638
941,732,228
MDExOlB1bGxSZXF1ZXN0Njg3NTYyMjA0
12,638
[flax]fix jax array type check
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,626
1,626
1,626
MEMBER
null
# What does this PR do? Fixes #12584, #12578 On colab the `ModelOutput` class is returning empty tuples for jax arrays. This is because on colab TPU the type of jax array is `jax.interpreters.xla._DeviceArray` and the `is_tensor` function here https://github.com/huggingface/transformers/blob/2dd9440d0835782e41ae415a68e71fd15051c428/src/transformers/file_utils.py#L1796-L1798 expects `jaxlib.xla_extension.DeviceArray` or `jax.core.Tracer`. If the first argument is an array, the `is_tensor` returns `None` in which case the `ModelOutput` class expects the first argument to be a key-value container which is not the case here. So at the end, everything becomes `None` and the `ModelOutput` returns an empty tuple. Instead the `jnp.ndarray` type check works for jax array types `jaxlib.xla_extension.DeviceArray`, `jax.interpreters.xla._DeviceArray` and also the `ShardedDeviceArray`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12638/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12638/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12638", "html_url": "https://github.com/huggingface/transformers/pull/12638", "diff_url": "https://github.com/huggingface/transformers/pull/12638.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12638.patch", "merged_at": 1626083323000 }
https://api.github.com/repos/huggingface/transformers/issues/12637
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12637/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12637/comments
https://api.github.com/repos/huggingface/transformers/issues/12637/events
https://github.com/huggingface/transformers/issues/12637
941,664,852
MDU6SXNzdWU5NDE2NjQ4NTI=
12,637
Slower training speed under DeepSpeed
{ "login": "Gforky", "id": 4157614, "node_id": "MDQ6VXNlcjQxNTc2MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gforky", "html_url": "https://github.com/Gforky", "followers_url": "https://api.github.com/users/Gforky/followers", "following_url": "https://api.github.com/users/Gforky/following{/other_user}", "gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gforky/subscriptions", "organizations_url": "https://api.github.com/users/Gforky/orgs", "repos_url": "https://api.github.com/users/Gforky/repos", "events_url": "https://api.github.com/users/Gforky/events{/privacy}", "received_events_url": "https://api.github.com/users/Gforky/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
null
[]
[ "Deepspeed is a project that has many different at times totally unrelated features.\r\n\r\nTherefore when you read in a blog that it made something 10x faster, you need to pay close attention to what was the task, and what was the model size, and how many hundreds of gpus, and optimizer, etc., etc.\r\n\r\nThe main goal of deepspeed is to enable training huge models which is not possible using bare pytorch. In particular when you can't fit your model onto a single GPU. Which means a lot more overhead. Therefore if you're going to compare a straightforward bare-bones pytorch to any other complex solution that enables scalabilty the former will almost always be faster or on par.\r\n\r\nThen as I started this comment Deepspeed has other tools, like faster optimizers, like 1-bit adam as posted here: https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html, which you haven't been using in your test. \r\n\r\nI hope this gave you a little bit of clarity of what to expect when.\r\n\r\nWe have only the main functionality integrated and lots of features are still pending as you can see here https://github.com/huggingface/transformers/issues/9606 - some of them probably require no integration but need to be tested, we just haven't had the time to work on those yet. And I think that list is far from being complete, since the Deepspeed team adds new features all the time.\r\n\r\nIf you're interesting in particular in a specific feature please first try and see if it already works with transformers/HF Trainer, if not, let's discuss the feasibility of its integration.", "> Deepspeed is a project that has many different at times totally unrelated features.\r\n> \r\n> Therefore when you read in a blog that it made something 10x faster, you need to pay close attention to what was the task, and what was the model size, and how many hudreds of gpus, and optimizer, etc., etc.\r\n> \r\n> The main goal of deepspeed is to enable training huge models which is not possible using bare pytorch. In particular when you can't fit your model onto a single GPU. Which means a lot more overhead. Therefore if you're going to compare a straightforward barebones pytorch to any other complex solution that enables scalabilty the former will almost always be faster or on par.\r\n> \r\n> Then as I started this comment Deepspeed has other tools, like faster optimizers, like 1-bit adam as posted here: https://www.deepspeed.ai/news/2020/09/08/onebit-adam-blog-post.html, which you haven't been using in your test.\r\n> \r\n> I hope this gave you a little bit of clarity of what to expect when.\r\n> \r\n> We have only the main functionality integrated and lots of features are still pending as you can see here #9606 - some of them probably require no integration but need to be tested, we just haven't had the time to work on those yet. And I think that list is far from being complete, since the Deepspeed team adds new features all the time.\r\n> \r\n> If you're interesting in particular in a specific feature please first try and see if it already works with transformers/HF Trainer, if not, let's discuss the feasibility of its integration.\r\n\r\nThank you for your reply Stas, I've been more clarified after read your detailed comment. I'll checkout more features and give you feedback if any progress.", "This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,626
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: nvcr.io/nvidia/pytorch:21.02-py3 container on CentOS7 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.0 with GPU - Tensorflow version (GPU?): not used - Using GPU in script?: YES, Tesla P40 - Using distributed or parallel set-up in script?: YES ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: examples/pytorch/translation/run_translation.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: wwm16 * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create container using `sudo docker run -d -it --runtime=nvidia --net=host --ipc=host -v /home/user/:/workspace nvcr.io/nvidia/pytorch:21.02-py3 bash` on a Linux server with CentOS7 and Tesla P40 GPUs. 2. Install python dependencies mentioned above. 3. Run run_translation.py with different parameters listed below: 1. DDP with fp16 open CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro 2. DDP without fp16 CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro ``` 3. 
DeepSpeed ZeRO2 with fp16 open deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero2.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro 4. DeepSpeed ZeRO2 without fp16 deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero2.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro 5. DeepSpeed ZeRO3 with fp16 open deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 500 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro 4. Training metrics are listed according to the above experiments order: 1. ***** train metrics ***** epoch = 1.0 train_loss = 1.5905 train_runtime = 0:00:20.29 train_samples = 500 train_samples_per_second = 24.632 train_steps_per_second = 6.158 2. ***** train metrics ***** epoch = 1.0 train_loss = 1.482 train_runtime = 0:00:17.57 train_samples = 500 train_samples_per_second = 28.448 train_steps_per_second = 7.112 3. ***** train metrics ***** epoch = 1.0 train_loss = 1.6752 train_runtime = 0:00:32.45 train_samples = 500 train_samples_per_second = 15.406 train_steps_per_second = 3.851 4. ***** train metrics ***** epoch = 1.0 train_loss = 1.523 train_runtime = 0:00:20.15 train_samples = 500 train_samples_per_second = 24.813 train_steps_per_second = 6.203 5. ***** train metrics ***** epoch = 1.0 train_loss = 1.523 train_runtime = 0:00:20.15 train_samples = 500 train_samples_per_second = 24.813 train_steps_per_second = 6.203 <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior According to DeepSpeed's official document, its training speedup could up to 10 times faster. But in my experiments I could not get that much speedup. Since translation script is the tutorial task mentioned in the Transformers's "DeepSpeed Integration" document, so my expectation is a faster training speed. Is there any environment's limitations in my experiment? Or the speedup is not guaranteed? Thank you guys in advance for help me out. <!-- A clear and concise description of what you would expect to happen. --> @stas00
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12637/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12636
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12636/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12636/comments
https://api.github.com/repos/huggingface/transformers/issues/12636/events
https://github.com/huggingface/transformers/issues/12636
941,658,875
MDU6SXNzdWU5NDE2NTg4NzU=
12,636
Error on training XLNet. RuntimeError: CUDA error: device-side assert triggered
{ "login": "darwinharianto", "id": 44696192, "node_id": "MDQ6VXNlcjQ0Njk2MTky", "avatar_url": "https://avatars.githubusercontent.com/u/44696192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/darwinharianto", "html_url": "https://github.com/darwinharianto", "followers_url": "https://api.github.com/users/darwinharianto/followers", "following_url": "https://api.github.com/users/darwinharianto/following{/other_user}", "gists_url": "https://api.github.com/users/darwinharianto/gists{/gist_id}", "starred_url": "https://api.github.com/users/darwinharianto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darwinharianto/subscriptions", "organizations_url": "https://api.github.com/users/darwinharianto/orgs", "repos_url": "https://api.github.com/users/darwinharianto/repos", "events_url": "https://api.github.com/users/darwinharianto/events{/privacy}", "received_events_url": "https://api.github.com/users/darwinharianto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I found out that changing vocab_size to 32000 fixes this error.\r\n\r\nHow do I change this number to other than 32000?\r\n\r\nI made a custom tokenizer \r\n```\r\ntokenizer.train(files=paths, vocab_size=16000, special_tokens=special_tokens)\r\ntokenizer.save('unigram.json', pretty=True)\r\n```\r\n\r\nloaded it\r\n```\r\ntokenizer = Tokenizer.from_file('unigram.json')\r\ntokenizer = XLNetTokenizerFast(tokenizer_object=tokenizer)\r\n```\r\n\r\nusing this with vocab_size 16000 causing an error.\r\nHow do I load this custom tokenizer with XLNet?", "It seems to me that the issue here is that you're loading a specific tokenizer, `xlnet-base-cased`, with a dictionary of 32k tokens:\r\n```py\r\n>>> from transformers import XLNetTokenizerFast\r\n>>> tokenizer = XLNetTokenizerFast.from_pretrained(\"xlnet-base-cased\")\r\n>>> len(tokenizer)\r\n32000\r\n```\r\n\r\nBut you're then using a randomly initialized model that you initialized at 16k tokens. So if your model receives a token that has an ID superior to 15999, it will crash with the error above. " ]
1,626
1,626
1,626
NONE
null
Currently trying to make pretraining using XLNet architecture. script to reproduce: ``` def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) from transformers import XLNetTokenizerFast from tokenizers import Tokenizer from transformers import XLNetLMHeadModel from transformers import XLNetConfig from transformers import DataCollatorForLanguageModeling tokenizer = XLNetTokenizerFast.from_pretrained("xlnet-base-cased") config=XLNetConfig(vocab_size=16000, ) model = XLNetLMHeadModel(config=config) from datasets import load_dataset raw_datasets = load_dataset("imdb") tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) training_args = TrainingArguments( output_dir="./models/custom_pasona", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=1, save_steps=1000, save_total_limit=2, prediction_loss_only=True, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=small_train_dataset, ) trainer.train() ``` This would results on an error ``` ING=1 python pretrain.py Reusing dataset imdb (~/.cache/huggingface/datasets/imdb/plain_text/1.0.0/e3c66f1788a67a89c7058d97ff62b6c30531e05b549de56d3ab91891f0561f9a) DatasetDict({ train: Dataset({ features: ['label', 'text'], num_rows: 25000 }) test: Dataset({ features: ['label', 'text'], num_rows: 25000 }) unsupervised: Dataset({ features: ['label', 'text'], num_rows: 50000 }) }) 0%| | 0/25 [00:00<?, ?ba/s]Asking to pad to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no padding. Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:17<00:00, 1.43ba/s] 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:17<00:00, 1.46ba/s] 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:35<00:00, 1.42ba/s] The following columns in the training set don't have a corresponding argument in `XLNetLMHeadModel.forward` and have been ignored: text. ***** Running training ***** Num examples = 1000 Num Epochs = 1 Instantaneous batch size per device = 1 Total train batch size (w. parallel, distributed & accumulation) = 1 Gradient Accumulation steps = 1 Total optimization steps = 1000 0%| | 0/1000 [00:00<?, ?it/s]/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex: block: [32,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
[... the same `/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: indexSelectLargeIndex ... Assertion `srcIndex < srcSelectDimSize` failed` message repeats here for the remaining threads [0,0,0] through [127,0,0] of block [32,0,0] ...]
Traceback (most recent call last): File "~/Desktop/workspace/recommendation/pretrain.py", line 74, in <module> trainer.train() File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/trainer.py", line 1269, in train tr_loss += self.training_step(model, inputs) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/trainer.py", line 1762, in training_step loss = self.compute_loss(model, inputs) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/trainer.py", line 1794, in compute_loss outputs = model(**inputs) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1432, in forward transformer_outputs = self.transformer( File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/transformers/models/xlnet/modeling_xlnet.py", line 1189, in forward output_h = self.dropout(word_emb_k) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/modules/dropout.py", line 58, in forward return F.dropout(input, self.p, self.training, self.inplace) File "~/anaconda3/envs/deltalake/lib/python3.9/site-packages/torch/nn/functional.py", line 983, in dropout else _VF.dropout(input, p, training)) RuntimeError: CUDA error: device-side assert triggered ``` I could not find out what is the source of the problem. Why is this happening? Is there any walkthrough on training with XLNet?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12636/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12635
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12635/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12635/comments
https://api.github.com/repos/huggingface/transformers/issues/12635/events
https://github.com/huggingface/transformers/issues/12635
941,593,275
MDU6SXNzdWU5NDE1OTMyNzU=
12,635
Long-Short Transformer
{ "login": "schmidek", "id": 442328, "node_id": "MDQ6VXNlcjQ0MjMyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/442328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/schmidek", "html_url": "https://github.com/schmidek", "followers_url": "https://api.github.com/users/schmidek/followers", "following_url": "https://api.github.com/users/schmidek/following{/other_user}", "gists_url": "https://api.github.com/users/schmidek/gists{/gist_id}", "starred_url": "https://api.github.com/users/schmidek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/schmidek/subscriptions", "organizations_url": "https://api.github.com/users/schmidek/orgs", "repos_url": "https://api.github.com/users/schmidek/repos", "events_url": "https://api.github.com/users/schmidek/events{/privacy}", "received_events_url": "https://api.github.com/users/schmidek/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "A PyTorch implementation: https://github.com/lucidrains/long-short-transformer", "Cool work! However, models have a low chance of being added if there are no pre-trained weights available.", "Thanks for your interest in our work! We have released the code for ImageNet and LRA at [https://github.com/NVIDIA/transformer-ls](url). Pretrained weights for ImageNet are also available. We will release the character-level LM soon. ", "Hi @zhuchen03! - Since I would like to add your model to the HuggingFace I am wondering if the pretrained weights are also available for character-level LM?", "Hi @NielsRogge @zhuchen03 - I would like to implement these models. I will start with the ImageNet classification one." ]
1,626
1,672
null
CONTRIBUTOR
null
# 🌟 New model addition ## Model description https://arxiv.org/abs/2107.02192 In this paper, they propose Long-Short Transformer, an efficient self-attention mechanism for modeling long sequences with linear complexity for both language and vision tasks. It aggregates a novel long-range attention with dynamic projection to model distant correlations and a short-term attention to capture fine-grained local correlations. Transformer-LS can be applied to both autoregressive and bidirectional models without additional complexity. ## Open source status * [x] the model implementation is available: https://github.com/NVIDIA/transformer-ls * [x] the model weights are available: https://github.com/NVIDIA/transformer-ls * [x] who are the authors: Chen Zhu (@zhuchen03) and Wei Ping and Chaowei Xiao and Mohammad Shoeybi and Tom Goldstein and Anima Anandkumar and Bryan Catanzaro (NVIDIA, University of Maryland)
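As a rough illustration of the mechanism described above (a dynamically projected long-range memory combined with windowed short-term attention under a single softmax), here is a deliberately simplified, single-head PyTorch sketch. It is not the authors' implementation, which lives in the linked NVIDIA/transformer-ls repository; the dense band mask makes this toy version quadratic rather than linear, and it only shows how the two key/value sets are mixed.

```python
# Highly simplified, single-head sketch of the long/short attention idea.
# NOT the authors' implementation (see https://github.com/NVIDIA/transformer-ls).
# The band mask below is dense, so this toy version is O(L^2); it is meant to show
# how a dynamically projected global memory and a local window share one softmax.
import math
import torch
import torch.nn.functional as F


def long_short_attention(q, k, v, w_p, window=4):
    # q, k, v: (L, d) for one head; w_p: (d, r) dynamic-projection weights.
    L, d = q.shape

    # Long-range path: dynamic projection of keys/values down to r "landmarks".
    p = F.softmax(k @ w_p, dim=0)            # (L, r), mixing weights over positions
    k_bar, v_bar = p.t() @ k, p.t() @ v      # (r, d) projected keys / values

    # Short-term path: plain attention restricted to a local window via a mask.
    band = (torch.arange(L)[:, None] - torch.arange(L)[None, :]).abs() <= window
    local_scores = q @ k.t() / math.sqrt(d)
    local_scores = local_scores.masked_fill(~band, float("-inf"))

    # One softmax over [landmark keys ; windowed keys], then mix the values.
    global_scores = q @ k_bar.t() / math.sqrt(d)          # (L, r)
    weights = F.softmax(torch.cat([global_scores, local_scores], dim=-1), dim=-1)
    return weights[:, :k_bar.size(0)] @ v_bar + weights[:, k_bar.size(0):] @ v


q = k = v = torch.randn(16, 32)
out = long_short_attention(q, k, v, w_p=torch.randn(32, 8))
print(out.shape)  # torch.Size([16, 32])
```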
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12635/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12635/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/12634
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12634/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12634/comments
https://api.github.com/repos/huggingface/transformers/issues/12634/events
https://github.com/huggingface/transformers/pull/12634
941,537,942
MDExOlB1bGxSZXF1ZXN0Njg3Mzk5MjIx
12,634
Add ByT5 option to example run_t5_mlm_flax.py
{ "login": "mapmeld", "id": 643918, "node_id": "MDQ6VXNlcjY0MzkxOA==", "avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mapmeld", "html_url": "https://github.com/mapmeld", "followers_url": "https://api.github.com/users/mapmeld/followers", "following_url": "https://api.github.com/users/mapmeld/following{/other_user}", "gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}", "starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions", "organizations_url": "https://api.github.com/users/mapmeld/orgs", "repos_url": "https://api.github.com/users/mapmeld/repos", "events_url": "https://api.github.com/users/mapmeld/events{/privacy}", "received_events_url": "https://api.github.com/users/mapmeld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I was trying this on TPU, but it ended with the following error message:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"run_t5_mlm_flax.py\", line 728, in <module>\r\n model_inputs = data_collator(samples)\r\n File \"run_t5_mlm_flax.py\", line 276, in __call__\r\n batch[\"decoder_input_ids\"] = shift_tokens_right(\r\n File \"/home/stefan/transformers/src/transformers/models/t5/modeling_flax_t5.py\", line 55, in shift_tokens_right\r\n shifted_input_ids = jax.ops.index_update(shifted_input_ids, (..., 0), decoder_start_token_id)\r\n File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py\", line 352, in index_update\r\n return _scatter_update(\r\n File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py\", line 64, in _scatter_update\r\n y = jnp.asarray(y)\r\n File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py\", line 3082, in asarray\r\n return array(a, dtype=dtype, copy=False, order=order)\r\n File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py\", line 3042, in array\r\n lax._check_user_dtype_supported(_inferred_dtype, \"array\")\r\n File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/lax/lax.py\", line 6963, in _check_user_dtype_supported\r\n raise TypeError(msg)\r\nTypeError: JAX only supports number and bool dtypes, got dtype object in array\r\n```\r\n\r\nI was using this config:\r\n\r\n```\r\nhttps://huggingface.co/google/byt5-base/raw/main/config.json\r\n```\r\n\r\nwith the following parameters:\r\n\r\n```bash\r\npython run_t5_mlm_flax.py --output_dir=\"${MODEL_DIR}\" --model_type=\"t5\" --config_name=\"${MODEL_DIR}\" --tokenizer_name=\"google/byt5-base\" --max_seq_length=\"512\" --per_device_train_batch_size=\"16\" --per_device_eval_batch_size=\"16\" --learning_rate=\"1e-3\" --weight_decay=\"0.001\" --warmup_steps=\"5000\" --overwrite_output_dir --num_train_epochs=\"10\" --logging_steps=\"500\" --save_steps=\"2500\" --eval_steps=\"2500\" --train_file /mnt/datasets/train.txt --validation_file /mnt/datasets/validation.txt\r\n```\r\n\r\n\r\n@patrickvonplaten do you have any hint how to fix this :thinking: ", "> I was trying this on TPU, but it ended with the following error message:\r\n> \r\n> ```shell\r\n> Traceback (most recent call last):\r\n> File \"run_t5_mlm_flax.py\", line 728, in <module>\r\n> model_inputs = data_collator(samples)\r\n> File \"run_t5_mlm_flax.py\", line 276, in __call__\r\n> batch[\"decoder_input_ids\"] = shift_tokens_right(\r\n> File \"/home/stefan/transformers/src/transformers/models/t5/modeling_flax_t5.py\", line 55, in shift_tokens_right\r\n> shifted_input_ids = jax.ops.index_update(shifted_input_ids, (..., 0), decoder_start_token_id)\r\n> File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py\", line 352, in index_update\r\n> return _scatter_update(\r\n> File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/ops/scatter.py\", line 64, in _scatter_update\r\n> y = jnp.asarray(y)\r\n> File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py\", line 3082, in asarray\r\n> return array(a, dtype=dtype, copy=False, order=order)\r\n> File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py\", line 3042, in array\r\n> lax._check_user_dtype_supported(_inferred_dtype, \"array\")\r\n> File \"/home/stefan/dev/lib/python3.8/site-packages/jax/_src/lax/lax.py\", line 6963, in _check_user_dtype_supported\r\n> raise TypeError(msg)\r\n> TypeError: JAX only supports number and bool 
dtypes, got dtype object in array\r\n> ```\r\n> \r\n> I was using this config:\r\n> \r\n> ```\r\n> https://huggingface.co/google/byt5-base/raw/main/config.json\r\n> ```\r\n> \r\n> with the following parameters:\r\n> \r\n> ```shell\r\n> python run_t5_mlm_flax.py --output_dir=\"${MODEL_DIR}\" --model_type=\"t5\" --config_name=\"${MODEL_DIR}\" --tokenizer_name=\"google/byt5-base\" --max_seq_length=\"512\" --per_device_train_batch_size=\"16\" --per_device_eval_batch_size=\"16\" --learning_rate=\"1e-3\" --weight_decay=\"0.001\" --warmup_steps=\"5000\" --overwrite_output_dir --num_train_epochs=\"10\" --logging_steps=\"500\" --save_steps=\"2500\" --eval_steps=\"2500\" --train_file /mnt/datasets/train.txt --validation_file /mnt/datasets/validation.txt\r\n> ```\r\n> \r\n> @patrickvonplaten do you have any hint how to fix this\r\n\r\nLet me try!", "@stefan-it I cannot reproduce the error. Can you try running the following:\r\n\r\n```bash\r\n./run_t5_mlm_flax.py \\\r\n --output_dir=\"${MODEL_DIR}\" \\\r\n --model_type=\"t5\" \\\r\n --config_name=\"${MODEL_DIR}\" \\\r\n --tokenizer_name=\"google/byt5-base\" \\\r\n --max_seq_length=\"128\" \\\r\n --per_device_train_batch_size=\"1\" \\\r\n --per_device_eval_batch_size=\"1\" \\\r\n --learning_rate=\"1e-3\" \\\r\n --weight_decay=\"0.001\" \\\r\n --warmup_steps=\"5000\" \\\r\n --overwrite_output_dir \\\r\n --num_train_epochs=\"10\" \\\r\n --logging_steps=\"500\" \\\r\n --save_steps=\"2500\" \\\r\n --eval_steps=\"2500\" \\\r\n --dataset_name=\"oscar\" \\\r\n --dataset_config_name=\"unshuffled_deduplicated_als\"\r\n```\r\n\r\nusing \r\n`https://huggingface.co/google/byt5-base/raw/main/config.json` as your config?\r\n\r\nThis uses a very small oscar dataset just to check that the script is correct. As you can see the script should run just fine.", "Hi @patrickvonplaten , your command is working - even with a sequence length of 512 and a batch size of 16. I'll check my dataset now 😅 Maybe some lines are too short...", "That's really interesting, I just filtered lines that contain less than five tokens and training is working. Thanks for your help :hugs: " ]
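A small sketch of the workaround mentioned at the end of this thread: dropping (near-)empty lines before they reach the data collator. The file paths are taken from the command above, the `text` column name is what `load_dataset("text", ...)` produces by default, and the five-token threshold mirrors the comment.

```python
from datasets import load_dataset

raw = load_dataset("text", data_files={"train": "/mnt/datasets/train.txt",
                                       "validation": "/mnt/datasets/validation.txt"})
# Keep only lines with at least five whitespace-separated tokens, as in the comment above.
filtered = raw.filter(lambda example: len(example["text"].split()) >= 5)
print({split: len(ds) for split, ds in filtered.items()})
```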
1,626
1,626
1,626
CONTRIBUTOR
null
Small change adding ByT5 option to the Flax T5 training example. When model_type is `byt5`, use ByT5Tokenizer in place of T5TokenizerFast. Example: https://colab.research.google.com/drive/1WcDRPYyvuMZDbWuhsS3hTaVyXxjqryPz?usp=sharing
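For readers skimming the PR description, a sketch of the selection logic it describes (an illustration, not the actual diff): pick the byte-level tokenizer when the model type is `byt5`, otherwise keep the fast SentencePiece tokenizer used for vanilla T5.

```python
# Hedged sketch of the tokenizer selection the PR describes; the helper name
# load_tokenizer is hypothetical and not part of the script.
from transformers import ByT5Tokenizer, T5TokenizerFast


def load_tokenizer(model_type: str, name_or_path: str):
    if model_type == "byt5":
        return ByT5Tokenizer.from_pretrained(name_or_path)
    return T5TokenizerFast.from_pretrained(name_or_path)


tokenizer = load_tokenizer("byt5", "google/byt5-base")
print(tokenizer("hello")["input_ids"])
```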
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12634/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12634", "html_url": "https://github.com/huggingface/transformers/pull/12634", "diff_url": "https://github.com/huggingface/transformers/pull/12634.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12634.patch", "merged_at": 1626179997000 }
https://api.github.com/repos/huggingface/transformers/issues/12633
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12633/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12633/comments
https://api.github.com/repos/huggingface/transformers/issues/12633/events
https://github.com/huggingface/transformers/issues/12633
941,531,820
MDU6SXNzdWU5NDE1MzE4MjA=
12,633
Error pushing GPT2 flax training model to hub
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "The training script `run_stream_trainer.py` is not an official training script no? Where can I find `run_stream_trainer.py` ? The error also does not seem to be related to pushing to the hub but rather with the line `keep=training_args.save_total_limit`.", "I noticed some strange git behaviour including the creation of another repo where the files were uploaded.\r\nhttps://huggingface.co/birgermoell/ckpt-10/tree/main\r\n\r\nThis is likely related to git and not related to transformers so I'm closing the issue and I'm hoping to resolve it.\r\n\r\nThank you so much for the help debugging.\r\n", "> The training script `run_stream_trainer.py` is not an official training script no? Where can I find `run_stream_trainer.py` ? The error also does not seem to be related to pushing to the hub but rather with the line `keep=training_args.save_total_limit`.\r\n\r\nIt's true that it's not an official script. Uploading the script manually now." ]
1,626
1,626
1,626
NONE
null
While training a GPT-2 model using the following scripts the model crashes while pushing to hub. I made the saving step 10 since I suspected was related so saving. ``` #!/usr/bin/env bash python3 swedish-gpt2-oscar/run_stream_trainer.py \ --output_dir="${MODEL_DIR}" \ --model_type="gpt2" \ --config_name="${MODEL_DIR}" \ --tokenizer_name="${MODEL_DIR}" \ --dataset_name="mc4" \ --dataset_config_name="sv" \ --do_train --do_eval \ --block_size="512" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="64" \ --learning_rate="5e-3" --warmup_steps="1000" \ --adam_beta1="0.9" --adam_beta2="0.98" --weight_decay="0.01" \ --overwrite_output_dir \ --max_steps="100000" \ --decay_steps="100000" \ --logging_steps="500" \ --save_steps="10" \ --eval_steps="2500" \ --push_to_hub ``` The files that would be commited. https://huggingface.co/birgermoell/ckpt-10/commit/d256a3e1fc7dd9da4833c98a21ea689d3caede18 Stacktrace ``` Model weights saved in /home/bmoell/swedish-gpt2-oscar/ckpt-10/flax_model.msgpack 07/11/2021 20:56:44 - INFO - huggingface_hub.repository - Uploading LFS objects: 100% (1/1), 498 MB | 33 MB/s, done. Model pushed to the hub in this commit: https://huggingface.co/birgermoell/ckpt-10/commit/d256a3e1fc7dd9da4833c98a21ea689d3caede18 07/11/2021 20:56:45 - INFO - __main__ - checkpoint saved 07/11/2021 20:56:45 - INFO - absl - Saving checkpoint at step: 10 tcmalloc: large alloc 1373577216 bytes == 0x241732000 @ 0x7ff956432680 0x7ff956452bdd 0x7ff67595f20d 0x7ff67596d340 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff675968bd3 0x7ff6759691fe 0x504d56 0x56acb6 0x568d9a 0x5f5b33 0x56bc9b 0x5f5956 0x56aadf 0x5f5956 0x56fb87 0x568d9a 0x5f5b33 0x56bc9b 0x568d9a 0x5f5b33 0x56aadf tcmalloc: large alloc 2986590208 bytes == 0x293524000 @ 0x7ff956432680 0x7ff956452bdd 0x7ff67595f20d 0x7ff67596d340 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff67596ce87 0x7ff675968bd3 0x7ff6759691fe 0x504d56 0x56acb6 0x568d9a 0x5f5b33 0x56bc9b 0x5f5956 0x56aadf 0x5f5956 0x56fb87 0x568d9a 0x5f5b33 0x56bc9b 0x568d9a 0x5f5b33 0x56aadf 0x568d9a 0x68cdc7 0x67e161 tcmalloc: large alloc 1493295104 bytes == 0x1eff42000 @ 0x7ff956432680 0x7ff956453824 0x5f7b11 0x7ff675968c6f 0x7ff6759691fe 0x504d56 0x56acb6 0x568d9a 0x5f5b33 0x56bc9b 0x5f5956 0x56aadf 0x5f5956 0x56fb87 0x568d9a 0x5f5b33 0x56bc9b 0x568d9a 0x5f5b33 0x56aadf 0x568d9a 0x68cdc7 0x67e161 0x67e1df 0x67e281 0x67e627 0x6b6e62 0x6b71ed 0x7ff9562490b3 0x5f96de 07/11/2021 20:56:53 - INFO - absl - Saved checkpoint at swedish-gpt2-oscar/checkpoint_10 Traceback (most recent call last): File "swedish-gpt2-oscar/run_stream_trainer.py", line 818, in <module> main() File "swedish-gpt2-oscar/run_stream_trainer.py", line 805, in main save_checkpoint(training_args.output_dir, jax_utils.unreplicate(state), cur_step, keep=training_args.save_total_limit, overwrite=False) File "/home/bmoell/gpt2/lib/python3.8/site-packages/flax/training/checkpoints.py", line 139, in save_checkpoint if len(checkpoint_files) > keep: TypeError: '>' not supported between instances of 'int' and 'NoneType' https://symbolize.stripped_domain/r/?trace=7ff9562123f4,7ff95626820f,7f&map= *** SIGTERM received by PID 61911 (TID 61911) on cpu 29 from PID 60845; stack trace: *** PC: @ 0x7ff9562123f4 (unknown) do_futex_wait.constprop.0 @ 0x7ff94d15e800 976 (unknown) @ 0x7ff956268210 348884112 (unknown) @ 0x80 (unknown) (unknown) 
https://symbolize.stripped_domain/r/?trace=7ff9562123f4,7ff94d15e7ff,7ff95626820f,7f&map=2a762cd764e70bc90ae4c7f9747c08d7:7ff94021c000-7ff94d49d280 E0711 20:56:53.386296 61911 coredump_hook.cc:250] RAW: Remote crash gathering disabled for SIGTERM. E0711 20:56:54.372857 61911 process_state.cc:771] RAW: Raising signal 15 with default behavior 0%| | 11/100000 [01:37<245:42:13, 8.85s/it] ```
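The traceback points at `keep=training_args.save_total_limit`: `flax.training.checkpoints.save_checkpoint` compares the number of existing checkpoint files against `keep`, and `save_total_limit` defaults to `None` in `TrainingArguments`, which produces the int-vs-NoneType comparison error. A hedged sketch of a guard follows; the argument names mirror the traceback and the fallback of 1 is an assumption, not part of the original script.

```python
# Sketch of a None-safe wrapper around flax's checkpointing call.
from flax.training import checkpoints


def save_flax_checkpoint(output_dir, state, step, save_total_limit=None):
    # save_total_limit may be None (the TrainingArguments default); flax needs an int.
    keep = save_total_limit if save_total_limit is not None else 1
    checkpoints.save_checkpoint(output_dir, state, step, keep=keep)
```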
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12633/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12632
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12632/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12632/comments
https://api.github.com/repos/huggingface/transformers/issues/12632/events
https://github.com/huggingface/transformers/issues/12632
941,515,171
MDU6SXNzdWU5NDE1MTUxNzE=
12,632
Vocab Size does not change when adding new tokens
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think you should use `print(len(tokenizer))` instead of `print(tokenizer.vocab_size)` (as the `vocab_size` is a fixed attribute, referring to the base vocabulary without any additional tokens). Refer to [this](https://github.com/huggingface/transformers/issues/1413#issuecomment-538083512) and [this](https://github.com/huggingface/transformers/blob/2dd9440d0835782e41ae415a68e71fd15051c428/src/transformers/tokenization_utils.py#L161).", "ah okay, didn't realize this was expected behavior. Thanks!" ]
1,626
1,626
1,626
CONTRIBUTOR
null
Env: - `transformers` version: 4.8.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 When adding new tokens to an existing tokenizer, the tokenizer's vocab size variable doesn't change. I believe it should be updated every time the tokens change. Here is a google colab to reproduce: https://colab.research.google.com/drive/1mC_eSmHOgA_F5fPX7AsUt86jAbC7iSSw?usp=sharing Specifics: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("gpt2") current_size = tokenizer.vocab_size tokenizer.add_tokens(["new_token"]) tokenizer.vocab_size, current_size, len(tokenizer.vocab) ``` Outputs: (50257, 50257, 50258) The same happens when I do the following as well `tokenizer = AutoTokenizer.from_pretrained("gpt2", additional_special_tokens=["new_token"])`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12632/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12631
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12631/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12631/comments
https://api.github.com/repos/huggingface/transformers/issues/12631/events
https://github.com/huggingface/transformers/issues/12631
941,431,850
MDU6SXNzdWU5NDE0MzE4NTA=
12,631
TypeError: forward() got an unexpected keyword argument 'label' in main tutorial
{ "login": "monk1337", "id": 17107749, "node_id": "MDQ6VXNlcjE3MTA3NzQ5", "avatar_url": "https://avatars.githubusercontent.com/u/17107749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monk1337", "html_url": "https://github.com/monk1337", "followers_url": "https://api.github.com/users/monk1337/followers", "following_url": "https://api.github.com/users/monk1337/following{/other_user}", "gists_url": "https://api.github.com/users/monk1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/monk1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monk1337/subscriptions", "organizations_url": "https://api.github.com/users/monk1337/orgs", "repos_url": "https://api.github.com/users/monk1337/repos", "events_url": "https://api.github.com/users/monk1337/events{/privacy}", "received_events_url": "https://api.github.com/users/monk1337/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You missed a line in the tutorial:\r\n\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\"]) # this you have\r\ntokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\") # MISSED\r\ntokenized_datasets.set_format(\"torch\") # this you have\r\n```\r\n\r\nModel expects a column called `labels` not `label` so that is why it complains.", "I had to restart everything from scratch and it worked. Before that, I tried renaming the label to labels but got this error :\r\n\r\n\r\n\r\n```\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-5-c7230e7411f6> in <module>()\r\n 27 tokenized_datasets.set_format(\"torch\") # this you have\r\n 28 \r\n---> 29 train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(1000))\r\n 30 eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).select(range(1000))\r\n 31 train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=8)\r\n\r\n6 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/info.py in __post_init__(self)\r\n 177 for idx, template in enumerate(self.task_templates):\r\n 178 if isinstance(template, TextClassification):\r\n--> 179 labels = self.features[template.label_column].names\r\n 180 self.task_templates[idx] = TextClassification(\r\n 181 text_column=template.text_column, label_column=template.label_column, labels=labels\r\n\r\nKeyError: 'label'\r\n```" ]
1,626
1,626
1,626
NONE
null
I am following the instructions provided in https://huggingface.co/transformers/training.html and trying to use PyTorch API for Fine-tuning, here is the error I am getting ``` from datasets import load_dataset from transformers import AutoTokenizer from torch.utils.data import DataLoader from transformers import AutoModelForSequenceClassification from transformers import get_scheduler from transformers import AdamW import torch from tqdm.auto import tqdm raw_datasets = load_dataset("imdb") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets = tokenized_datasets.remove_columns(["text"]) tokenized_datasets.set_format("torch") small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000)) small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000)) train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8) eval_dataloader = DataLoader(small_eval_dataset, batch_size=8) model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2) optimizer = AdamW(model.parameters(), lr=5e-5) num_epochs = 3 num_training_steps = num_epochs * len(train_dataloader) lr_scheduler = get_scheduler( "linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps ) device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu") model.to(device) progress_bar = tqdm(range(num_training_steps)) model.train() for epoch in range(num_epochs): for batch in train_dataloader: batch = {k: v.to(device) for k, v in batch.items()} outputs = model(**batch) loss = outputs.loss loss.backward() optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) ``` Error ``` TypeError Traceback (most recent call last) <ipython-input-74-79930d537f14> in <module>() 54 for batch in train_dataloader: 55 batch = {k: v.to(device) for k, v in batch.items()} ---> 56 outputs = model(**batch) 57 loss = outputs.loss 58 loss.backward() /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1050 or _global_forward_hooks or _global_forward_pre_hooks): -> 1051 return forward_call(*input, **kwargs) 1052 # Do not call functions when jit is used 1053 full_backward_hooks, non_full_backward_hooks = [], [] TypeError: forward() got an unexpected keyword argument 'label' ```
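Pulling the fix from the first reply together with the surrounding tutorial code, the preprocessing block becomes the following sketch. It assumes `raw_datasets` and `tokenize_function` are defined exactly as in the snippet above; the key point is that the column must be named `labels` before the DataLoader is built, because the loop passes batches straight into `model(**batch)`.

```python
# Sketch of the corrected preprocessing order from the first reply (assumes
# raw_datasets and tokenize_function from the snippet above).
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
tokenized_datasets = tokenized_datasets.remove_columns(["text"])
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
tokenized_datasets.set_format("torch")

small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
```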
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12631/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12631/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12630
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12630/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12630/comments
https://api.github.com/repos/huggingface/transformers/issues/12630/events
https://github.com/huggingface/transformers/pull/12630
941,377,247
MDExOlB1bGxSZXF1ZXN0Njg3Mjc3ODcz
12,630
[Examples][Flax] added test file in summarization example
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? Fixes #12527 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @sgugger, @patil-suraj
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12630/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12630", "html_url": "https://github.com/huggingface/transformers/pull/12630", "diff_url": "https://github.com/huggingface/transformers/pull/12630.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12630.patch", "merged_at": 1626072314000 }
https://api.github.com/repos/huggingface/transformers/issues/12629
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12629/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12629/comments
https://api.github.com/repos/huggingface/transformers/issues/12629/events
https://github.com/huggingface/transformers/issues/12629
941,325,291
MDU6SXNzdWU5NDEzMjUyOTE=
12,629
How much of an improvement is DistilGPT-2 over an equivalent model trained without distillation?
{ "login": "offendo", "id": 29783125, "node_id": "MDQ6VXNlcjI5NzgzMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/29783125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/offendo", "html_url": "https://github.com/offendo", "followers_url": "https://api.github.com/users/offendo/followers", "following_url": "https://api.github.com/users/offendo/following{/other_user}", "gists_url": "https://api.github.com/users/offendo/gists{/gist_id}", "starred_url": "https://api.github.com/users/offendo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/offendo/subscriptions", "organizations_url": "https://api.github.com/users/offendo/orgs", "repos_url": "https://api.github.com/users/offendo/repos", "events_url": "https://api.github.com/users/offendo/events{/privacy}", "received_events_url": "https://api.github.com/users/offendo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
Hi! I'm working on a distillation project right now, and I was wondering if this information is available anywhere. I saw that [this page](https://github.com/huggingface/transformers/tree/9ee66adadb2a8d6e04e8b18a1c9ea0b57c80642e/examples/research_projects/distillation) provides a comparison of `DistilGPT-2` vs `GPT-2`, but I don't see anything about the improvement of `DistilGPT-2` over an equivalent model (same parameters, etc.) trained in a traditional fashion. Any help would be greatly appreciated. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12629/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12629/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12628
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12628/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12628/comments
https://api.github.com/repos/huggingface/transformers/issues/12628/events
https://github.com/huggingface/transformers/issues/12628
941,304,668
MDU6SXNzdWU5NDEzMDQ2Njg=
12,628
GPTNeo Error Attempting to Generate Text
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "Hey @ncoop57,\r\n\r\nThe reason for this error is that `input_ids.shape[1]` (the length of the input length) is larger then `max_length`. By default `max_length` of generate is 20 and in your case `input_ids.shape[1]` is > 20 which will error out. `max_length` define the number of total output tokens (not just the number of generated tokens). So if number of input tokens (`input_ids.shape[1]` is already > `max_length`) the model is told to not generate anything and will error out (we should put better error messages here - which is why I'm leaving this issue open).\r\n\r\nIn short to solve your problem, simply pass a higher `max_length` parameter:\r\n\r\n```\r\noutput_seq = model.generate(input_ids=inputs.input_ids, max_length=100)\r\n```", "@patrickvonplaten we should probably raise an error if cur_length is greater than `max_length`, otherwise, it seems it's hard to figure out." ]
1,625
1,631
null
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no - Using TPU: Yes ### Who can help @patil-suraj and @patrickvonplaten Models: GPTNeo Library: - flax transformers ## Information Model I am using (Bert, XLNet ...): GPTNeo ## To reproduce Steps to reproduce the behavior: Here is a Google Colab for reproducing: https://colab.research.google.com/drive/1tba52h5t-BP3g13FMdPXVjKqpoLTlGvP?usp=sharing For convenience, here is the error message: ``` TypeError: dynamic_update_slice update shape must be smaller than operand shape, got update shape (1, 45) for operand shape (1, 20). ``` I was originally getting the same error as #12081. However, when I attempted to implement the same fix as in that issue, I got the above error. The error might be because I am using the "ForCausalLM" version of GPTNeo; however, there is no LMHead version of GPTNeo. ## Expected behavior Generate the output sequence.
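For reference, here is a minimal sketch of the call I'm making; passing an explicit `max_length` larger than the number of prompt tokens avoids the error. The checkpoint name and prompt below are placeholders I picked for illustration, not my actual setup:

```python
from transformers import AutoTokenizer, FlaxGPTNeoForCausalLM

checkpoint = "EleutherAI/gpt-neo-125M"  # placeholder checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = FlaxGPTNeoForCausalLM.from_pretrained(checkpoint)

prompt = "def fibonacci(n):"  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="np")

# `max_length` counts the prompt tokens as well, so it has to be larger than
# `inputs.input_ids.shape[1]`; leaving it at the default of 20 triggers the error above.
output_seq = model.generate(input_ids=inputs.input_ids, max_length=100)
print(tokenizer.decode(output_seq.sequences[0], skip_special_tokens=True))
```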
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12628/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12628/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/12627
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12627/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12627/comments
https://api.github.com/repos/huggingface/transformers/issues/12627/events
https://github.com/huggingface/transformers/issues/12627
941,301,047
MDU6SXNzdWU5NDEzMDEwNDc=
12,627
Add Flax Models to Pipelines
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[]
1,625
1,626
null
CONTRIBUTOR
null
# 🚀 Feature request Hi y'all, I am trying out a GPTNeo Flax model and want to use it in the text generation pipeline. However, it is currently not supported. From looking at the current implementation of Flax models and the text generation pipeline, it should be a relatively easy (famous last words) addition. ## Motivation HF is heavily integrating Flax models (which I think is awesome!) into the library and has ported many of the already existing parts of the transformers library to Flax models, similar to what was done for TF models. Adding support for Flax models in the pipeline API will help those who are working with pure Flax models, especially for applications that will use the model to accomplish some task. ## Your contribution I would be willing to open a PR if one is not currently underway (I looked for one and didn't find any). However, I am new to Flax, so if the task is more difficult than I expect, I probably will not be able to complete it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12627/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/12626
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12626/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12626/comments
https://api.github.com/repos/huggingface/transformers/issues/12626/events
https://github.com/huggingface/transformers/issues/12626
941,242,173
MDU6SXNzdWU5NDEyNDIxNzM=
12,626
can't load flax weights in PyTorch if flax model is saved with dtype `bfloat16`
{ "login": "iliemihai", "id": 2815308, "node_id": "MDQ6VXNlcjI4MTUzMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliemihai", "html_url": "https://github.com/iliemihai", "followers_url": "https://api.github.com/users/iliemihai/followers", "following_url": "https://api.github.com/users/iliemihai/following{/other_user}", "gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions", "organizations_url": "https://api.github.com/users/iliemihai/orgs", "repos_url": "https://api.github.com/users/iliemihai/repos", "events_url": "https://api.github.com/users/iliemihai/events{/privacy}", "received_events_url": "https://api.github.com/users/iliemihai/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[ { "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: linux - Python version: 3.8 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help - research_projects/r./run_clm_flax.py : @patrickvonplaten @patil-suraj ## Information Model I am using (gpt2): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. run the conversion script: `path = "./romanian-gpt2_80000/ckpt-80000"; config = AutoConfig.from_pretrained(path); model = AutoModelForCausalLM.from_config(config); load_flax_checkpoint_in_pytorch_model(model, path + "/flax_model.msgpack"); model.save_pretrained("./romanian-gpt2-large_80000")` 2. receive the error. Converting the Flax model to PyTorch gives the following error: `TypeError: can't convert np.ndarray of type bfloat16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Convert Flax to PyTorch. <!-- A clear and concise description of what you would expect to happen. --> Adding the argument --dtype="bfloat16" to run_clm_flax.py converts some of the parameters to bfloat16, but it gives an error when trying to convert the saved Flax model to PyTorch. A workaround is to convert all parameters of the Flax model to fp32 first and then convert the Flax model to PyTorch: `def to_f32(t): return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)` followed by `model.params = to_f32(model.params)`. A runnable version of this workaround is sketched below.
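For completeness, here is the workaround written out as a runnable sketch. The paths are placeholders for my local checkpoint, and the import location of `load_flax_checkpoint_in_pytorch_model` is my assumption of where it lives in the library:

```python
import jax
import jax.numpy as jnp
from transformers import AutoConfig, AutoModelForCausalLM, FlaxGPT2LMHeadModel
from transformers.modeling_flax_pytorch_utils import load_flax_checkpoint_in_pytorch_model

flax_path = "./romanian-gpt2_80000/ckpt-80000"  # placeholder: local Flax checkpoint dir

# 1. Load the Flax model and cast every bfloat16 parameter to float32.
flax_model = FlaxGPT2LMHeadModel.from_pretrained(flax_path)
flax_model.params = jax.tree_map(
    lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x,
    flax_model.params,
)
flax_model.save_pretrained(flax_path)  # rewrites flax_model.msgpack with fp32 weights

# 2. The checkpoint can now be loaded into a PyTorch model without the bfloat16 error.
config = AutoConfig.from_pretrained(flax_path)
pt_model = AutoModelForCausalLM.from_config(config)
load_flax_checkpoint_in_pytorch_model(pt_model, flax_path + "/flax_model.msgpack")
pt_model.save_pretrained("./romanian-gpt2-large_80000")  # placeholder output dir
```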
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12626/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12625
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12625/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12625/comments
https://api.github.com/repos/huggingface/transformers/issues/12625/events
https://github.com/huggingface/transformers/pull/12625
941,219,248
MDExOlB1bGxSZXF1ZXN0Njg3MTYwNjA1
12,625
Flax Wav2Vec2 - Add venv section and fix training script
{ "login": "mariagrandury", "id": 57645283, "node_id": "MDQ6VXNlcjU3NjQ1Mjgz", "avatar_url": "https://avatars.githubusercontent.com/u/57645283?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariagrandury", "html_url": "https://github.com/mariagrandury", "followers_url": "https://api.github.com/users/mariagrandury/followers", "following_url": "https://api.github.com/users/mariagrandury/following{/other_user}", "gists_url": "https://api.github.com/users/mariagrandury/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariagrandury/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariagrandury/subscriptions", "organizations_url": "https://api.github.com/users/mariagrandury/orgs", "repos_url": "https://api.github.com/users/mariagrandury/repos", "events_url": "https://api.github.com/users/mariagrandury/events{/privacy}", "received_events_url": "https://api.github.com/users/mariagrandury/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,631
1,631
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Training Facebook's Wav2Vec2 as explained in this README, I encountered several issues related to: - libraries that should be installed to support audio data and - the proposed training script. The purpose of this PR is to explain which libraries could be helpful to work with audio data and to update the current training script so it can be used out-of-the-box to train a Wav2Vec2 model. The error messages that motivated each of the changes in this PR are listed [here](https://github.com/nlp-en-es/wav2vec2-spanish/blob/main/differences_from_original.md). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. I think @patrickvonplaten would be the right person to review this PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12625/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12625", "html_url": "https://github.com/huggingface/transformers/pull/12625", "diff_url": "https://github.com/huggingface/transformers/pull/12625.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12625.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12624
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12624/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12624/comments
https://api.github.com/repos/huggingface/transformers/issues/12624/events
https://github.com/huggingface/transformers/pull/12624
941,212,616
MDExOlB1bGxSZXF1ZXN0Njg3MTU1NzMy
12,624
Add tokenizer_file parameter to PreTrainedTokenizerFast docstring
{ "login": "lewisbails", "id": 32473550, "node_id": "MDQ6VXNlcjMyNDczNTUw", "avatar_url": "https://avatars.githubusercontent.com/u/32473550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewisbails", "html_url": "https://github.com/lewisbails", "followers_url": "https://api.github.com/users/lewisbails/followers", "following_url": "https://api.github.com/users/lewisbails/following{/other_user}", "gists_url": "https://api.github.com/users/lewisbails/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewisbails/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewisbails/subscriptions", "organizations_url": "https://api.github.com/users/lewisbails/orgs", "repos_url": "https://api.github.com/users/lewisbails/repos", "events_url": "https://api.github.com/users/lewisbails/events{/privacy}", "received_events_url": "https://api.github.com/users/lewisbails/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,626
1,626
CONTRIBUTOR
null
## What does this PR do? - Add tokenizer_file parameter to PreTrainedTokenizerFast docstring - References [this](https://github.com/huggingface/transformers/issues/12583#issuecomment-876613898) comment from @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12624/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12624", "html_url": "https://github.com/huggingface/transformers/pull/12624", "diff_url": "https://github.com/huggingface/transformers/pull/12624.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12624.patch", "merged_at": 1626090719000 }
https://api.github.com/repos/huggingface/transformers/issues/12623
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12623/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12623/comments
https://api.github.com/repos/huggingface/transformers/issues/12623/events
https://github.com/huggingface/transformers/issues/12623
941,202,030
MDU6SXNzdWU5NDEyMDIwMzA=
12,623
Inconsistent shapes between value and initializer for parameter: FlaxGPT2LMHeadModel
{ "login": "thisis-nkul", "id": 20254312, "node_id": "MDQ6VXNlcjIwMjU0MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/20254312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thisis-nkul", "html_url": "https://github.com/thisis-nkul", "followers_url": "https://api.github.com/users/thisis-nkul/followers", "following_url": "https://api.github.com/users/thisis-nkul/following{/other_user}", "gists_url": "https://api.github.com/users/thisis-nkul/gists{/gist_id}", "starred_url": "https://api.github.com/users/thisis-nkul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thisis-nkul/subscriptions", "organizations_url": "https://api.github.com/users/thisis-nkul/orgs", "repos_url": "https://api.github.com/users/thisis-nkul/repos", "events_url": "https://api.github.com/users/thisis-nkul/events{/privacy}", "received_events_url": "https://api.github.com/users/thisis-nkul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't exactly know how but this issue went away on its own after appearing unexpectedly out of nowhere. Closing the issue. ", "> I don't exactly know how but this issue went away on its own after appearing unexpectedly out of nowhere. Closing the issue.\r\n\r\nDear thisis-nkul, have you sovled this problem? How? " ]
1,625
1,664
1,625
NONE
null
Hello! I was trying to fine-tune the GPT-2 medium model through Flax on a custom (tokenized) dataset and I encountered this error: `Inconsistent shapes between value and initializer for parameter "scale" in "/transformer/ln_f": (1024,), (0,)` Edit: The whole traceback is quite long and is reported [here](https://pastebin.com/UMa3BbxP). A very short version is mentioned at the end here. I'm using a PyTorch Dataset (with 1024 tokens per batch) and DataLoader (`batch_size=64`) with a `numpy_collate` function as mentioned at https://jax.readthedocs.io/en/latest/notebooks/Neural_Network_and_Data_Loading.html and then I'm yielding a "superbatch" of shape (8, 64, 1024) for multi-TPU using a custom function. I'm using the pre-trained GPT-2 tokenizer along with FlaxGPT2LMHeadModel. Here is the code for the training loop: ``` for epoch in tqdm(range(1, num_epochs + 1), desc=f"Epoch ...", position=0, leave=True): rng, input_rng = jax.random.split(rng) # -- Train -- train_loader = make_superbatch() with tqdm(total=len(script_dataset), desc="Training...", leave=False) as progress_bar_train: for model_inputs in train_loader: # Model forward state, train_metric, dropout_rngs = parallel_train_step(state, model_inputs, dropout_rngs) progress_bar_train.update(1) progress_bar_train.write( f"Train... ({epoch}/{num_epochs} | Loss: {round(train_metric['loss'].mean(), 3)}, Learning Rate: {round(train_metric['learning_rate'].mean(), 6)})" ) ``` Here is the error that I'm encountering and the whole traceback: ``` Epoch ...: 0%| | 0/10 [00:00<?, ?it/s] Training...: 0%| | 0/1470930 [00:00<?, ?it/s] --------------------------------------------------------------------------- UnfilteredStackTrace Traceback (most recent call last) <ipython-input-29-5c831c772fc6> in <module>() 8 # Model forward ----> 9 state, train_metric, dropout_rngs = parallel_train_step(state, model_inputs, dropout_rngs) 10 47 frames UnfilteredStackTrace: flax.errors.ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter "scale" in "/transformer/ln_f": (1024,), (0,). (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamShapeError) The stack trace below excludes JAX-internal frames. The preceding is the original exception that occurred, unmodified. The above exception was the direct cause of the following exception: ScopeParamShapeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/flax/core/scope.py in param(self, name, init_fn, *init_args) 618 if jnp.shape(val) != jnp.shape(abs_val): 619 raise errors.ScopeParamShapeError(name, self.path_text, --> 620 jnp.shape(val), jnp.shape(abs_val)) 621 else: 622 if not self.is_mutable_collection('params'): ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter "scale" in "/transformer/ln_f": (1024,), (0,). (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamShapeError) ``` I'm guessing this `1024` comes from the number of tokens per batch. How do I resolve this error? Any help would be much appreciated. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12623/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12622
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12622/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12622/comments
https://api.github.com/repos/huggingface/transformers/issues/12622/events
https://github.com/huggingface/transformers/issues/12622
941,172,092
MDU6SXNzdWU5NDExNzIwOTI=
12,622
unclear `prepare_seq2seq_batch` deprecation
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Why is `__call__` hard to understand? It's the regular Python method for when the tokenizer is called directly on inputs. How would you formulate that better?\r\n\r\nFor the `with_target_tokenizer` it's a typo indeed, it should be `as_target_tokenizer`.\r\n\r\nAs for an example, this is what is used in every example script, see for instance the [run_translation](https://github.com/huggingface/transformers/blob/9adff7a0f49f88a6cc718a1d30088988dc78bb6a/examples/pytorch/translation/run_translation.py#L414) script. \r\n\r\nI'm curious, where did you still find a reference to this method? It's been removed from all examples and documentation normally (and has been deprecated five months ago).", "It's totally obvious once I see an example that I was now able to find since you gave the correct context manager name, it is so not obvious from the warning message. Moreover, none of the tokenizers document that as suggested by the warning's message. They do document the specifics of its usage.\r\n\r\nI made an attempt at another version here: https://github.com/huggingface/transformers/pull/12669", "> I'm curious, where did you still find a reference to this method? It's been removed from all examples and documentation normally (and has been deprecated five months ago).\r\n\r\nIn several of the scripts I used in the past to make tiny models.\r\n\r\nI'm curious in turn why was this wrapper deprecated? To make things more explicit? Looks like a lot more code to write instead of the wrapper.\r\n", "I stumbled upon this issue when googling the warning. For the translation task this\r\n`tokenized_text = tokenizer.prepare_seq2seq_batch([text], return_tensors='pt')`\r\nhas to be replaced by this:\r\n```\r\nwith tokenizer.as_target_tokenizer():\r\n tokenized_text = tokenizer(text, return_tensors='pt')\r\n```\r\nWhich is much clearer than using `prepare_seq2seq_batch`, but for anyone coming from other languages but python, the concept of `__call__` might not be transparent in first place :)", "I'm getting the same text, not the translated one when I change from `prepare_seq2seq_batch` to `as_target_tokenizer`" ]
1,625
1,637
1,626
CONTRIBUTOR
null
When using `prepare_seq2seq_batch` the user now gets: > transformers-master/src/transformers/tokenization_utils_base.py:3277: FutureWarning: `prepare_seq2seq_batch` is deprecated and will be removed in version 5 of 🤗 Transformers. Use the regular `__call__` method to prepare your inputs and the tokenizer under the `with_target_tokenizer` context manager to prepare your targets. See the documentation of your specific tokenizer for more details. It's very hard to act on, as I'm not sure what "regular `__call__` method" refers to, and I couldn't find any tokenizer documentation that ever mentions `with_target_tokenizer`. Perhaps this is an unintended typo? Was it meant to be `with target_tokenizer`? `with FooTokenizer`? Please kindly suggest a more user-friendly deprecation and at least one example or a link to such. Thank you. @sgugger, @LysandreJik
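For anyone else landing here from the same warning, the replacement pattern (as I eventually pieced it together from the replies) appears to be the following; the Marian checkpoint and sentences are only for illustration:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")  # illustrative checkpoint

src_texts = ["Studies have shown that owning a dog is good for you."]
tgt_texts = ["Studien haben gezeigt, dass es gut ist, einen Hund zu besitzen."]

# Old, deprecated:
# batch = tokenizer.prepare_seq2seq_batch(src_texts, tgt_texts, return_tensors="pt")

# New: call the tokenizer directly ("the regular __call__ method") for the inputs ...
model_inputs = tokenizer(src_texts, padding=True, return_tensors="pt")

# ... and tokenize the targets under the target-tokenizer context manager
# (apparently `as_target_tokenizer`, not `with_target_tokenizer`).
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_texts, padding=True, return_tensors="pt")

model_inputs["labels"] = labels["input_ids"]
```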
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12622/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12621
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12621/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12621/comments
https://api.github.com/repos/huggingface/transformers/issues/12621/events
https://github.com/huggingface/transformers/issues/12621
941,149,654
MDU6SXNzdWU5NDExNDk2NTQ=
12,621
can't pickle <class 'types.AutoModelForCausalLM'>
{ "login": "lancekung", "id": 19167336, "node_id": "MDQ6VXNlcjE5MTY3MzM2", "avatar_url": "https://avatars.githubusercontent.com/u/19167336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lancekung", "html_url": "https://github.com/lancekung", "followers_url": "https://api.github.com/users/lancekung/followers", "following_url": "https://api.github.com/users/lancekung/following{/other_user}", "gists_url": "https://api.github.com/users/lancekung/gists{/gist_id}", "starred_url": "https://api.github.com/users/lancekung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lancekung/subscriptions", "organizations_url": "https://api.github.com/users/lancekung/orgs", "repos_url": "https://api.github.com/users/lancekung/repos", "events_url": "https://api.github.com/users/lancekung/events{/privacy}", "received_events_url": "https://api.github.com/users/lancekung/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! Could you provide a code example that yields this error? Thank you!", "```\r\nimport pickle\r\nfrom transformers import AutoModelForCausalLM\r\n\r\npickle.dumps(AutoModelForCausalLM)\r\n```\r\nI think it's comes from the fact those are autogenerated.", "> ```\r\n> import pickle\r\n> from transformers import AutoModelForCausalLM\r\n> \r\n> pickle.dumps(AutoModelForCausalLM)\r\n> ```\r\n> \r\n> I think it's comes from the fact those are autogenerated.\r\n\r\nthanks for your help, but I tested based on your modification in #12654, a new problem arises:\r\n\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py\", line 509, in init_process\r\n fn(rank, size)\r\n File \"/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py\", line 456, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py\", line 1275, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py\", line 1778, in training_step\r\n self.scaler.scale(loss).backward()\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/tensor.py\", line 245, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/__init__.py\", line 145, in backward\r\n Variable._execution_engine.run_backward(\r\nSystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7f06bfae6b30> returned NULL without setting an error\r\n\r\n" ]
1,625
1,626
1,626
NONE
null
Hi, a new problem has arisen we can pickle "LazyModule" now, but can't pickle <class 'types.AutoModelForCausalLM'> @stas00 @patrickvonplaten, @LysandreJik Traceback (most recent call last): File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 509, in init_process fn(rank, size) File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 367, in main tokenized_datasets = raw_datasets.map( File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 471, in map { File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 472, in <dictcomp> k: dataset.map( File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in map transformed_shards = [r.get() for r in results] File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in <listcomp> transformed_shards = [r.get() for r in results] File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get raise self._value File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks put(task) File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 498, in dump StockPickler.dump(self, obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 487, in dump self.save(obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple save(element) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems save(v) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 1493, in save_function pickler.save_reduce(_create_function, (obj.__code__, File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 692, in save_reduce save(args) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple save(element) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File 
"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems save(v) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 1439, in save_type StockPickler.save_global(pickler, obj, name=name) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 1070, in save_global raise PicklingError( _pickle.PicklingError: Can't pickle <class 'types.AutoModelForCausalLM'>: it's notfound as types.AutoModelForCausalLM _Originally posted by @lancekung in https://github.com/huggingface/transformers/issues/12549#issuecomment-877537851_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12621/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12620
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12620/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12620/comments
https://api.github.com/repos/huggingface/transformers/issues/12620/events
https://github.com/huggingface/transformers/pull/12620
941,140,042
MDExOlB1bGxSZXF1ZXN0Njg3MTAxOTgw
12,620
[doc] fix anchor
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
CONTRIBUTOR
null
mixed rst with md, fixing the anchor
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12620/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12620", "html_url": "https://github.com/huggingface/transformers/pull/12620", "diff_url": "https://github.com/huggingface/transformers/pull/12620.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12620.patch", "merged_at": 1625881708000 }
https://api.github.com/repos/huggingface/transformers/issues/12619
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12619/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12619/comments
https://api.github.com/repos/huggingface/transformers/issues/12619/events
https://github.com/huggingface/transformers/pull/12619
941,078,495
MDExOlB1bGxSZXF1ZXN0Njg3MDUwMzA5
12,619
Add tokenizers class mismatch detection between `cls` and checkpoint
{ "login": "europeanplaice", "id": 38364983, "node_id": "MDQ6VXNlcjM4MzY0OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/38364983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/europeanplaice", "html_url": "https://github.com/europeanplaice", "followers_url": "https://api.github.com/users/europeanplaice/followers", "following_url": "https://api.github.com/users/europeanplaice/following{/other_user}", "gists_url": "https://api.github.com/users/europeanplaice/gists{/gist_id}", "starred_url": "https://api.github.com/users/europeanplaice/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/europeanplaice/subscriptions", "organizations_url": "https://api.github.com/users/europeanplaice/orgs", "repos_url": "https://api.github.com/users/europeanplaice/repos", "events_url": "https://api.github.com/users/europeanplaice/events{/privacy}", "received_events_url": "https://api.github.com/users/europeanplaice/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I revised the code based on your reviews. ", "I want to ask you to refactor the logic.\r\nThank you for offering!", "@SaulLu could you confirm you're happy with the changes? I think this is good to be merged on my side, thanks for the adjustments @europeanplaice.", "@SaulLu @sgugger\r\nWe made a excellent job! Thank you very much for your help!" ]
1,625
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12416 This PR detects a mismatch between `cls` and a checkpoint a user intends to load. However, It can't find a mismatch when a config doesn't contain the tokenizer's information. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12619/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12619", "html_url": "https://github.com/huggingface/transformers/pull/12619", "diff_url": "https://github.com/huggingface/transformers/pull/12619.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12619.patch", "merged_at": 1626529942000 }
https://api.github.com/repos/huggingface/transformers/issues/12618
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12618/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12618/comments
https://api.github.com/repos/huggingface/transformers/issues/12618/events
https://github.com/huggingface/transformers/issues/12618
941,014,016
MDU6SXNzdWU5NDEwMTQwMTY=
12,618
validation metrics not being logged by Trainer
{ "login": "neel04", "id": 11617870, "node_id": "MDQ6VXNlcjExNjE3ODcw", "avatar_url": "https://avatars.githubusercontent.com/u/11617870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neel04", "html_url": "https://github.com/neel04", "followers_url": "https://api.github.com/users/neel04/followers", "following_url": "https://api.github.com/users/neel04/following{/other_user}", "gists_url": "https://api.github.com/users/neel04/gists{/gist_id}", "starred_url": "https://api.github.com/users/neel04/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neel04/subscriptions", "organizations_url": "https://api.github.com/users/neel04/orgs", "repos_url": "https://api.github.com/users/neel04/repos", "events_url": "https://api.github.com/users/neel04/events{/privacy}", "received_events_url": "https://api.github.com/users/neel04/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You are not providing a reproducer (which would include the data on which to run the script) so we can reproduce your problem. Re-tested the script on TPU and it does run evaluation every eval_steps, as provided. A few things that could be the problem in your case:\r\n- since you set gradient_accumulation_steps = 500, your evaluation will only happen every 500 x eval_steps, so make sure you have enough training samples to get to that point (you do not provide the size of your trainingset)\r\n- you could have an empty dataset (less than one batch), which would make the evaluation phase empty as well.", "```py\r\n!touch dataset.txt\r\nimport random\r\nf = open('./dataset.txt', 'w')\r\n\r\nfor lines in range(50):\r\n f.write(' '.join(m for m in [str(random.randint(0, 40000)) for i in range(16000)]) + '\\n') #16000 words/(numbers) in one line, with random numbers from 0-40000 only.\r\n\r\nf.close()\r\n```\r\nshould create 50 sequences; I am using 22,500 for my training and 2,500 for validation. My batch size is `1` due to the long length of sequences, hence I don't believe that I have <1.\r\n\r\nAttached is my validation [file](https://drive.google.com/file/d/1-6-db2cM-jpN7rpzXxWg_MpLspr6alWb/view?usp=sharing) of (uncompressed) 116MB. My training file is about 1.3GB.\r\n\r\nPerhaps it may be a problem with my validation dataset, but I don't spot it on the surface.", "Additionally, I had put a run for 5 hours ~ (`5 epochs`). In the logs from wandb, it had obviously completed in 5 hours - however, the script wouldn't stop running for some reason which in turn wouldn't trigger `wand.finish()`. From the logs, it seems that the script was running for 3 hours more after training, which seems pretty mysterious. \r\n\r\nI don't understand why I am getting weird behaviour. For reference, this is my cell:-\r\n```py\r\n%%bash\r\npython xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir=\"./results\" \\\r\n --model_type=\"big_bird\" \\\r\n --config_name=\"./config\" \\\r\n --tokenizer_name=\"./tokenizer\" \\\r\n --train_file=\"./dataset.txt\" \\\r\n --validation_file=\"./val.txt\" \\\r\n --line_by_line=\"True\" \\\r\n --max_seq_length=\"16000\" \\\r\n --weight_decay=\"0.01\" \\\r\n --per_device_train_batch_size=\"1\" \\\r\n --per_device_eval_batch_size=\"1\" \\\r\n --learning_rate=\"3e-4\" \\\r\n --tpu_num_cores='8' \\\r\n --warmup_steps=\"1000\" \\\r\n --overwrite_output_dir \\\r\n --pad_to_max_length \\\r\n --num_train_epochs=5 \\\r\n --adam_beta1=0.9 \\\r\n --adam_beta2=0.98 \\\r\n --do_train \\\r\n --do_eval \\\r\n #--logging_steps=200 \\\r\n --evaluation_strategy=\"steps\" \\\r\n --eval_steps=250 \\\r\n --eval_accumulation_steps=200 \\\r\n --report_to=\"all\" \\\r\n --logging_dir='./logs' \\\r\n --skip_memory_metrics='False' \\\r\n --gradient_accumulation_steps=500 \\\r\n --use_fast_tokenizer='True' \\\r\n --logging_first_step='True' \\\r\n #1> >(tee -a ./content/drive/MyDrive/music_dataset/logs/stdout.log) \\\r\n #2> >(tee -a ./content/drive/MyDrive/music_dataset/logs/stderr.log >&2)\r\n```\r\n**EDIT:-** @sgugger Another thing I found, was that despite putting the flag the strategy adjusted by the model is \"no\" [`evaluation_strategy=IntervalStrategy.NO`] which should have been `steps`", "Mmm, if the `evaluation_strategy` is set to no, the problem is that the bash command is badly interpreted. 
It seems you are running it in a notebook, I don't know how that usually works, but the problem is that all the arguments you typed are not properly consumed.\r\nYou should try this in a terminal.", "I didn't think of that :100: but it still doesn't work :disappointed: \r\n\r\nI am writing it to a bash file and running it that way; I also put it all the flags together in one single command but that doesn't seem to work either. Trying in a terminal yields same results. \r\n\r\nThe problem is that it's not logging the loss at all after the initial one, forget the eval loss. \r\n```py\r\npython3 xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir=\"./results\" --model_type=\"big_bird\" --config_name=\"./config\" --tokenizer_name=\"./tokenizer\" --train_file=\"./dataset.txt\" --validation_file=\"./val.txt\" --line_by_line=\"True\" --max_seq_length=\"16000\" --weight_decay=\"0.01\" --per_device_train_batch_size=\"1\" --per_device_eval_batch_size=\"1\" --learning_rate=\"3e-4\" --tpu_num_cores='8' --warmup_steps=\"1000\" --overwrite_output_dir --pad_to_max_length --num_train_epochs=5 --adam_beta1=0.9 --adam_beta2=0.98 --do_train --do_eval --logging_steps=200 --evaluation_strategy=\"steps\" --eval_steps=200 --eval_accumulation_steps=200 --report_to=\"all\" --logging_dir='./logs' --skip_memory_metrics='False' --gradient_accumulation_steps=150 --use_fast_tokenizer='True' --logging_first_step='True' \r\n```\r\nA better view:-\r\n```py\r\n\r\nper_device_eval_batch_size=1,\r\nper_device_train_batch_size=1,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=results,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=None,\r\nremove_unused_columns=True,\r\nreport_to=['tensorboard', 'wandb'],\r\nresume_from_checkpoint=None,\r\nrun_name=./results,\r\nsave_on_each_node=False,\r\nsave_steps=500,\r\nsave_strategy=IntervalStrategy.STEPS,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=False,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=8,\r\nuse_legacy_prediction_loop=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=1000,\r\nweight_decay=0.01,\r\n)\r\n```", "hmm...finally got it to work; not sure what I did but removing `logging_steps`, disabling gradient accumulation and eval accumulation helps a lot - along with using `python3 ....[command]` than `python ...[cmd]` which shouldn't be an issue, but I really don't know why I have to sacrifice accuracy/features for logging to work :thinking: ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
### Who can help

- trainer: @sgugger

## Information

Model I am using (Bert, XLNet ...): BigBird

The problem arises when using:

* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The tasks I am working on is:

* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce

Details here (https://discuss.huggingface.co/t/no-loss-being-logged-when-running-mlm-script-colab/8134)

The validation/eval loss is not being logged at all when using wandb or tensorboard - suffice to say it's not being logged by the Trainer. Tried different settings for the script, none of which yield any results.
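For reference, a minimal, self-contained sketch of the behaviour being reported: when `evaluation_strategy` really is `"steps"` and `eval_steps` is reachable, `eval_loss` shows up in the Trainer's log history. The tiny BERT config, dummy texts and step counts below are made up for illustration only; they are not the issue's BigBird/TPU setup.

```python
from datasets import Dataset
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encodings = tokenizer(["hello world"] * 64, truncation=True, padding="max_length", max_length=16)
dataset = Dataset.from_dict(dict(encodings))

# deliberately tiny model so the sketch runs quickly on CPU
model = BertForMaskedLM(BertConfig(num_hidden_layers=2, hidden_size=64,
                                   num_attention_heads=2, intermediate_size=128))

args = TrainingArguments(
    output_dir="./tmp_mlm_check",
    evaluation_strategy="steps",    # if the CLI flags are not parsed, this silently stays "no"
    eval_steps=4,
    logging_steps=4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,  # with 500 here, reaching step 4 would need 8,000 samples
    num_train_epochs=1,
    report_to=[],
)
trainer = Trainer(model=model, args=args, train_dataset=dataset, eval_dataset=dataset,
                  data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer))
trainer.train()
print([e for e in trainer.state.log_history if "eval_loss" in e])  # non-empty when evaluation ran
```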
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12618/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12617
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12617/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12617/comments
https://api.github.com/repos/huggingface/transformers/issues/12617/events
https://github.com/huggingface/transformers/pull/12617
940,983,937
MDExOlB1bGxSZXF1ZXN0Njg2OTcxMTQ5
12,617
TF summarization example
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,626
1,626
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12617/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12617", "html_url": "https://github.com/huggingface/transformers/pull/12617", "diff_url": "https://github.com/huggingface/transformers/pull/12617.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12617.patch", "merged_at": 1626101918000 }
https://api.github.com/repos/huggingface/transformers/issues/12616
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12616/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12616/comments
https://api.github.com/repos/huggingface/transformers/issues/12616/events
https://github.com/huggingface/transformers/issues/12616
940,948,591
MDU6SXNzdWU5NDA5NDg1OTE=
12,616
Weird outputs by `opus-mt-en-es`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Fixed via #12662 " ]
1,625
1,628
1,628
CONTRIBUTOR
null
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: TPU VM
- Python version: 3.8.10

### Who can help
@patrickvonplaten @patil-suraj

## Information

Model I am using: FlaxMarianMTModel

## To reproduce

I used this code for beam sizes 2 and 4. Funnily, the outputs looked almost the same, just that "Oh" changed to "no" with beam_size=2:

```python
from transformers import MarianTokenizer, FlaxMarianMTModel

model = FlaxMarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-es', from_pt=True)
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es')

text = "Living Room, The Sheridan House! Your Minneapolis Home!"
input_ids = tokenizer(text, max_length=64, return_tensors='jax', truncation=True)
sequences = model.generate(**input_ids, early_stopping=True, max_length=64, num_beams=2).sequences
tokenizer.batch_decode(sequences, skip_special_tokens=True, max_length=64)
```

For num_beams = 2 the output is:

'Sala, ¡La Casa Sheridan, tu hogar de Minneapolis, ¡No, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no'

For num_beams = 4 the output is:

'¡Sala de estar, la Casa Sheridan, tu hogar de Minneapolis, ¡Oh, ¡Oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh,'

## Expected behavior

Shouldn't give 'oh' or 'no' in the outputs.
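One way to narrow this down (a hypothetical diagnostic, not something requested in the issue) would be to run the same sentence through the PyTorch Marian checkpoint and check whether the repetition shows up there too; if it does not, the problem is specific to the Flax port.

```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
pt_model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-es")

text = "Living Room, The Sheridan House! Your Minneapolis Home!"
batch = tokenizer(text, max_length=64, return_tensors="pt", truncation=True)
generated = pt_model.generate(**batch, num_beams=4, max_length=64, early_stopping=True)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```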
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12616/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12616/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12615
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12615/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12615/comments
https://api.github.com/repos/huggingface/transformers/issues/12615/events
https://github.com/huggingface/transformers/pull/12615
940,938,828
MDExOlB1bGxSZXF1ZXN0Njg2OTMzMDg1
12,615
[FLax] Fix marian docs 2
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Follow-up PR from #12614 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12615/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12615", "html_url": "https://github.com/huggingface/transformers/pull/12615", "diff_url": "https://github.com/huggingface/transformers/pull/12615.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12615.patch", "merged_at": 1625851737000 }
https://api.github.com/repos/huggingface/transformers/issues/12614
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12614/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12614/comments
https://api.github.com/repos/huggingface/transformers/issues/12614/events
https://github.com/huggingface/transformers/pull/12614
940,912,336
MDExOlB1bGxSZXF1ZXN0Njg2OTEyMzIy
12,614
[Flax Marian] Add marian flax example
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
This PR adds a better example and leaves a note that `early_stopping=True` should be used for FlaxMarian
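As a rough illustration of the note this PR adds (the checkpoint and sentence below are reused from the related opus-mt-en-es issue, not copied from the PR's diff), generation with FlaxMarian would look like this:

```python
from transformers import FlaxMarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
model = FlaxMarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-es", from_pt=True)

inputs = tokenizer("Living Room, The Sheridan House! Your Minneapolis Home!",
                   max_length=64, return_tensors="jax", truncation=True)
# the note being added: FlaxMarian generation should set early_stopping=True
sequences = model.generate(**inputs, num_beams=4, max_length=64, early_stopping=True).sequences
print(tokenizer.batch_decode(sequences, skip_special_tokens=True))
```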
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12614/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12614", "html_url": "https://github.com/huggingface/transformers/pull/12614", "diff_url": "https://github.com/huggingface/transformers/pull/12614.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12614.patch", "merged_at": 1625850118000 }
https://api.github.com/repos/huggingface/transformers/issues/12613
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12613/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12613/comments
https://api.github.com/repos/huggingface/transformers/issues/12613/events
https://github.com/huggingface/transformers/issues/12613
940,876,556
MDU6SXNzdWU5NDA4NzY1NTY=
12,613
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead.
{ "login": "Sepelbaum", "id": 52469561, "node_id": "MDQ6VXNlcjUyNDY5NTYx", "avatar_url": "https://avatars.githubusercontent.com/u/52469561?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sepelbaum", "html_url": "https://github.com/Sepelbaum", "followers_url": "https://api.github.com/users/Sepelbaum/followers", "following_url": "https://api.github.com/users/Sepelbaum/following{/other_user}", "gists_url": "https://api.github.com/users/Sepelbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sepelbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sepelbaum/subscriptions", "organizations_url": "https://api.github.com/users/Sepelbaum/orgs", "repos_url": "https://api.github.com/users/Sepelbaum/repos", "events_url": "https://api.github.com/users/Sepelbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/Sepelbaum/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hello! Could you provide the information required by the template, please? Especially the code that you used, as it's hard to help without it. Thanks", "I have a similar problem during Finetuning LED for Summarization Task in Colab, with the following error message:\r\n-----------------------------\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!\r\n------------------------------\r\nThe Settings for the Training are as follow:\r\n\r\nTraining Set: 17 Samples, each with less than 4000 tokens.\r\nAs for environment, I ran !pip install -r requirements.txt, where requirements come from the latest master branch of longformer.\r\n----------------------\r\ntransformers @ git+http://github.com/ibeltagy/transformers.git@longformer_encoder_decoder#egg=transformers\r\npytorch-lightning @ git+http://github.com/ibeltagy/[email protected]_fixes#egg=pytorch-lightning\r\ntorch>=1.6.0\r\ntensorboardX\r\ntest-tube==0.7.5\r\nnlp\r\nrouge_score\r\n-----------------------------------\r\n\r\nCUDA for the colab session was: \r\nSun Jul 18 03:58:07 2021 \r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 470.42.01 Driver Version: 460.32.03 CUDA Version: 11.2 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n| | | MIG M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla P100-PCIE... 
Off | 00000000:00:04.0 Off | 0 |\r\n| N/A 44C P0 30W / 250W | 0MiB / 16280MiB | 0% Default |\r\n| | | N/A |\r\n+-------------------------------+----------------------+----------------------+ \r\n+-----------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=============================================================================|\r\n| No running processes found |\r\n+-----------------------------------------------------------------------------+\r\n\r\nOther Training Configurations are as follow:\r\n\r\nloaded from pretrained is the \"allenai/led-base-16384\" with HuggingFace.\r\n\r\nmax_input_length = 4096\r\nmin_output_length = 256\r\nmax_output_length = 512\r\nbatch_size = 2\r\n\r\n# set generate hyperparameters\r\nled.config.encoder_layers=6\r\nled.config.decoder_layers=6\r\nled.config.attention_window=128 # left and right so total 256\r\nled.config.num_beams = 2\r\nled.config.length_penalty = 2.0\r\nled.config.early_stopping = True\r\nled.config.no_repeat_ngram_size = 3\r\n\r\n# adjust output length according to training and val datasets\r\nled.config.max_length = max_output_length # now at 256\r\nled.config.min_length = min_output_length # now at 512\r\n\r\n# enable fp16 apex training\r\ntraining_args = Seq2SeqTrainingArguments(\r\n predict_with_generate=True,\r\n evaluation_strategy=\"epoch\",\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n fp16=True,\r\n output_dir=path_models,\r\n logging_steps=5,\r\n eval_steps=10,\r\n save_steps=10,\r\n save_total_limit=4,\r\n load_best_model_at_end=True,\r\n gradient_accumulation_steps=4, \r\n num_train_epochs=6,\r\n)\r\n\r\ntrainer = Seq2SeqTrainer(\r\n model=led,\r\n tokenizer=tokenizer,\r\n args=training_args,\r\n compute_metrics=compute_metrics,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n)\r\n\r\n\r\nEnabling \"torch.autograd.set_detect_anomaly(True)\", point to the following:\r\n\r\n/led/modeling_led.py\", line 589, in _compute_attn_output_with_global_indices\r\n attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)\r\n\r\nIt seems that the global attention calculation made changes to torch and somehow created some conflicts with gradient computation in terms tracking steps. \r\n\r\nI had successfully trained larger samples (600+ samples) with up to 8192 input tokens, with generate length between 256 and 512 , attention window size = 512 (1024 total from both side), using the led-base checkpoint. So seeing this error message is a bit frustrating. Any help is highly appreciated. Let me know if you need more information. Thank you.\r\n-----------------------------\r\n***** Running training *****\r\n Num examples = 17\r\n Num Epochs = 6\r\n Instantaneous batch size per device = 2\r\n Total train batch size (w. parallel, distributed & accumulation) = 8\r\n Gradient Accumulation steps = 4\r\n Total optimization steps = 12\r\n [ 3/12 00:06 < 00:57, 0.16 it/s, Epoch 0.89/6]\r\nEpoch\tTraining Loss\tValidation Loss\r\n**/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py:149: UserWarning: Error detected in BmmBackward0. 
Traceback of forward call that caused the error:**\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/autograd/function.py\", line 87, in apply\r\n return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/checkpoint.py\", line 122, in backward\r\n outputs = ctx.run_function(*detached_inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 1816, in custom_forward\r\n return module(*inputs, is_global_attn, output_attentions)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 915, in forward\r\n output_attentions=output_attentions,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 726, in forward\r\n output_attentions=output_attentions,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 282, in forward\r\n is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 589, in _compute_attn_output_with_global_indices\r\n attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)\r\n (Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:104.)\r\n allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag\r\n/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py:149: UserWarning: \r\n\r\nPrevious calculation was induced by CheckpointFunctionBackward. 
Traceback of forward call that induced the previous calculation:\r\n File \"/usr/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py\", line 16, in <module>\r\n app.launch_new_instance()\r\n File \"/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py\", line 845, in launch_instance\r\n app.start()\r\n File \"/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py\", line 499, in start\r\n self.io_loop.start()\r\n File \"/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py\", line 132, in start\r\n self.asyncio_loop.run_forever()\r\n File \"/usr/lib/python3.7/asyncio/base_events.py\", line 541, in run_forever\r\n self._run_once()\r\n File \"/usr/lib/python3.7/asyncio/base_events.py\", line 1786, in _run_once\r\n handle._run()\r\n File \"/usr/lib/python3.7/asyncio/events.py\", line 88, in _run\r\n self._context.run(self._callback, *self._args)\r\n File \"/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py\", line 122, in _handle_events\r\n handler_func(fileobj, events)\r\n File \"/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py\", line 300, in null_wrapper\r\n return fn(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py\", line 451, in _handle_events\r\n self._handle_recv()\r\n File \"/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py\", line 480, in _handle_recv\r\n self._run_callback(callback, msg)\r\n File \"/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py\", line 434, in _run_callback\r\n callback(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py\", line 300, in null_wrapper\r\n return fn(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py\", line 283, in dispatcher\r\n return self.dispatch_shell(stream, msg)\r\n File \"/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py\", line 233, in dispatch_shell\r\n handler(stream, idents, msg)\r\n File \"/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py\", line 399, in execute_request\r\n user_expressions, allow_stdin)\r\n File \"/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py\", line 208, in do_execute\r\n res = shell.run_cell(code, store_history=store_history, silent=silent)\r\n File \"/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py\", line 537, in run_cell\r\n return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py\", line 2718, in run_cell\r\n interactivity=interactivity, compiler=compiler, result=result)\r\n File \"/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py\", line 2828, in run_ast_nodes\r\n if self.run_code(code, result):\r\n File \"/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py\", line 2882, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-74-3b02fb48d903>\", line 1, in <module>\r\n trainer.train()\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1269, in train\r\n tr_loss += self.training_step(model, inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1762, in training_step\r\n loss = self.compute_loss(model, 
inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1794, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 2362, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 2206, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py\", line 1826, in forward\r\n is_index_global_attn,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/utils/checkpoint.py\", line 211, in checkpoint\r\n return CheckpointFunction.apply(function, preserve, *args)\r\n (Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:109.)\r\n allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n<ipython-input-74-3b02fb48d903> in <module>()\r\n----> 1 trainer.train()\r\n 2 #resume_from_checkpoint=True\r\n\r\n6 frames\r\n/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)\r\n 147 Variable._execution_engine.run_backward(\r\n 148 tensors, grad_tensors_, retain_graph, create_graph, inputs,\r\n--> 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag\r\n 150 \r\n 151 \r\n\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!", "> /led/modeling_led.py\", line 589, in _compute_attn_output_with_global_indices\r\n> attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)\r\n\r\nI seem to fix the problem by changing the following, by detaching the torch before transpose operation:\r\nfrom\r\n\r\n/led/modeling_led.py\", line 589, in _compute_attn_output_with_global_indices\r\nattn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)\r\n\r\nto \r\n/led/modeling_led.py\", line 589, in _compute_attn_output_with_global_indices\r\nattn_probs_only_global.detach().transpose(1, 2), value_vectors_only_global.detach().transpose(1, 2)", "I'm getting exactly the same issue and it works fine if i don't specify a global attention mask, which leads me to believe its in the merge function in forward.", "@Herais Detach would remove the tensors from the computation graph, wouldn't it be preferable to use .clone() instead? 
", "> @Herais Detach would remove the tensors from the computation graph, wouldn't it be preferable to use .clone() instead?\r\n\r\nI think you are right, I was wondering about what detach does to the computational map, especially with the gradient accumulation set to True. Using clone() also solves the versioning problem, I would like to see how it does to predictions, will update. Thank you=)\r\n\r\nI was testing global attention at the beginning of the document and the global attention at the beginning of each paragraph..", "Hi, I also encountered this exact same bug when using the longformer for sequence classification. I had successfully trained this model previously before oversampling as well as a LED for summarization so I was thrown off at first when I got it. I realized that the model kept throwing an error at the last batch and when comparing the length of my data to my total batch size (batch_size=2 and gradient_accumulation=4) I realized that my last batch was a batch size of 1. I dropped a single row and then I was able to train the model successfully. I recently turned on gradient_checkpointing and ran it again (batch_size=7 and gradient_accumulation=4) and the error was triggered again when my last batch was 22/28 if you count gradient accumulation, so once again the batch size of 1 created the error.", "Hi - is there a preferred fix for this? I'm blocked on it right now. I can just clone the offending tensor but want to make sure that's the preferred behavior.", "Sorry I'm a bit lost on this issue. Could someone add a **minimum** reproducible code snippet that allows us to reproduce the error?", "I think most people here are running into issues on the backward pass of the Longformer E-D.\r\n\r\nI will share my code in a bit but I'm curious if the provided colab works. If I were to reproduce my bug, it would be similar to the colab.\r\n\r\nhttps://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v", "I tried cloning the offending tensor but it didn't seem to resolve it . Here's my stack trace\r\n\r\n`(fresh) griadams@ip-172-31-19-18:~/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer$ pythons main.py -debug\r\nUsing GPUS --> 4...\r\nNum GPUs --> 1\r\nGPU available: True, used: True\r\nTPU available: False, using: 0 TPU cores\r\nUsing native 16bit precision.\r\nStarting training...\r\nLOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\r\nwandb: W&B syncing is set to `offline` in this directory. Run `wandb online` or set WANDB_MODE=online to enable cloud syncing.\r\n\r\n | Name | Type | Params\r\n------------------------------------------------------\r\n0 | model | LEDForConditionalGeneration | 161 M \r\n------------------------------------------------------\r\n161 M Trainable params\r\n0 Non-trainable params\r\n161 M Total params\r\n647.378 Total estimated model params size (MB)\r\nValidation sanity check: 0it [00:00, ?it/s]/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py:102: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\r\n rank_zero_warn(\r\n/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py:102: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. 
Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\r\n rank_zero_warn(\r\nEpoch 0: 0%| | 0/16512 [00:00<?, ?it/s]/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py:147: UserWarning: Error detected in BmmBackward0. Traceback of forward call that caused the error:\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/function.py\", line 87, in apply\r\n return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py\", line 122, in backward\r\n outputs = ctx.run_function(*detached_inputs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 1816, in custom_forward\r\n return module(*inputs, is_global_attn, output_attentions)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 908, in forward\r\n attn_outputs = self.self_attn(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 719, in forward\r\n self_outputs = self.longformer_self_attn(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 277, in forward\r\n attn_output = self._compute_attn_output_with_global_indices(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 588, in _compute_attn_output_with_global_indices\r\n attn_output_only_global = torch.matmul(\r\n (Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:104.)\r\n Variable._execution_engine.run_backward(\r\n/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py:147: UserWarning: \r\n\r\nPrevious calculation was induced by CheckpointFunctionBackward. 
Traceback of forward call that induced the previous calculation:\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py\", line 137, in <module>\r\n run(args)\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py\", line 101, in run\r\n trainer.fit(model)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 460, in fit\r\n self._run(model)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 758, in _run\r\n self.dispatch()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 799, in dispatch\r\n self.accelerator.start_training(self)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 96, in start_training\r\n self.training_type_plugin.start_training(trainer)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py\", line 144, in start_training\r\n self._results = trainer.run_stage()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 809, in run_stage\r\n return self.run_train()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 871, in run_train\r\n self.train_loop.run_training_epoch()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 499, in run_training_epoch\r\n batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 738, in run_training_batch\r\n self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 434, in optimizer_step\r\n model_ref.optimizer_step(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py\", line 1403, in optimizer_step\r\n optimizer.step(closure=optimizer_closure)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 214, in step\r\n self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 134, in __optimizer_step\r\n trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 325, in optimizer_step\r\n make_optimizer_step = self.precision_plugin.pre_optimizer_step(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py\", line 93, in pre_optimizer_step\r\n result = lambda_closure()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 732, in train_step_and_backward_closure\r\n result = self.training_step_and_backward(\r\n File 
\"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 823, in training_step_and_backward\r\n result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 290, in training_step\r\n training_step_output = self.trainer.accelerator.training_step(args)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 204, in training_step\r\n return self.training_type_plugin.training_step(*args)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py\", line 155, in training_step\r\n return self.lightning_module.training_step(*args, **kwargs)\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/model.py\", line 36, in training_step\r\n output = self.model(**batch, use_cache=False)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 2346, in forward\r\n outputs = self.led(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 2198, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 1820, in forward\r\n layer_outputs = torch.utils.checkpoint.checkpoint(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py\", line 211, in checkpoint\r\n return CheckpointFunction.apply(function, preserve, *args)\r\n (Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:109.)\r\n Variable._execution_engine.run_backward(\r\n[W python_anomaly_mode.cpp:104] Warning: Error detected in CheckpointFunctionBackward. 
Traceback of forward call that caused the error:\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py\", line 137, in <module>\r\n run(args)\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py\", line 101, in run\r\n trainer.fit(model)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 460, in fit\r\n self._run(model)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 758, in _run\r\n self.dispatch()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 799, in dispatch\r\n self.accelerator.start_training(self)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 96, in start_training\r\n self.training_type_plugin.start_training(trainer)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py\", line 144, in start_training\r\n self._results = trainer.run_stage()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 809, in run_stage\r\n return self.run_train()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 871, in run_train\r\n self.train_loop.run_training_epoch()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 499, in run_training_epoch\r\n batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 738, in run_training_batch\r\n self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 434, in optimizer_step\r\n model_ref.optimizer_step(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py\", line 1403, in optimizer_step\r\n optimizer.step(closure=optimizer_closure)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 214, in step\r\n self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 134, in __optimizer_step\r\n trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 325, in optimizer_step\r\n make_optimizer_step = self.precision_plugin.pre_optimizer_step(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py\", line 93, in pre_optimizer_step\r\n result = lambda_closure()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 732, in train_step_and_backward_closure\r\n result = self.training_step_and_backward(\r\n File 
\"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 823, in training_step_and_backward\r\n result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 290, in training_step\r\n training_step_output = self.trainer.accelerator.training_step(args)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 204, in training_step\r\n return self.training_type_plugin.training_step(*args)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py\", line 155, in training_step\r\n return self.lightning_module.training_step(*args, **kwargs)\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/model.py\", line 36, in training_step\r\n output = self.model(**batch, use_cache=False)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 2346, in forward\r\n outputs = self.led(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 2198, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py\", line 1820, in forward\r\n layer_outputs = torch.utils.checkpoint.checkpoint(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py\", line 211, in checkpoint\r\n return CheckpointFunction.apply(function, preserve, *args)\r\n (function _print_stack)\r\nTraceback (most recent call last):\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py\", line 137, in <module>\r\n run(args)\r\n File \"/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py\", line 101, in run\r\n trainer.fit(model)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 460, in fit\r\n self._run(model)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 758, in _run\r\n self.dispatch()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 799, in dispatch\r\n self.accelerator.start_training(self)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 96, in start_training\r\n self.training_type_plugin.start_training(trainer)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py\", line 144, in start_training\r\n self._results = 
trainer.run_stage()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 809, in run_stage\r\n return self.run_train()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py\", line 871, in run_train\r\n self.train_loop.run_training_epoch()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 499, in run_training_epoch\r\n batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 738, in run_training_batch\r\n self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 434, in optimizer_step\r\n model_ref.optimizer_step(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py\", line 1403, in optimizer_step\r\n optimizer.step(closure=optimizer_closure)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 214, in step\r\n self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py\", line 134, in __optimizer_step\r\n trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 325, in optimizer_step\r\n make_optimizer_step = self.precision_plugin.pre_optimizer_step(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py\", line 93, in pre_optimizer_step\r\n result = lambda_closure()\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 732, in train_step_and_backward_closure\r\n result = self.training_step_and_backward(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 836, in training_step_and_backward\r\n self.backward(result, optimizer, opt_idx)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py\", line 869, in backward\r\n result.closure_loss = self.trainer.accelerator.backward(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py\", line 308, in backward\r\n output = self.precision_plugin.backward(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py\", line 62, in backward\r\n closure_loss = super().backward(model, closure_loss, optimizer, opt_idx, should_accumulate, *args, **kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py\", line 79, in backward\r\n model.backward(closure_loss, optimizer, opt_idx)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py\", line 1275, in backward\r\n loss.backward(*args, 
**kwargs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/_tensor.py\", line 255, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py\", line 147, in backward\r\n Variable._execution_engine.run_backward(\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/function.py\", line 87, in apply\r\n return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py\", line 138, in backward\r\n torch.autograd.backward(outputs_with_grad, args_with_grad)\r\n File \"/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py\", line 147, in backward\r\n Variable._execution_engine.run_backward(\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 6144, 1]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!\r\n\r\nwandb: Waiting for W&B process to finish, PID 125448\r\nwandb: Program failed with code 1. \r\nwandb: Find user logs for this run at: /efs/griadams/weights/default/wandb/offline-run-20210809_103548-2aq43v1n/logs/debug.log\r\nwandb: Find internal logs for this run at: /efs/griadams/weights/default/wandb/offline-run-20210809_103548-2aq43v1n/logs/debug-internal.log\r\nwandb: You can sync this run to the cloud by running:\r\nwandb: wandb sync /efs/griadams/weights/default/wandb/offline-run-20210809_103548-2aq43v1n`", "First time I see an error message from PyTorch that says \"Good luck!\" haha. This will be complex then I guess", "Okey, but I still don't have a code example that let's me reproduce this error I'm afraid :D \r\n\r\nThe official colab here: https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing seems to work just fine", "I'm getting this error as well using Longformer. This seems to be happening at the very end of my training. I'm assuming that it might be happening because there is a batch that has fewer number of examples than batch size. Maybe that could be something that should be tried? I'm currently investigating this issue on my end and I'll share more information if I find something.", "Similar problem here. 
It happens at the end of the first epoch in my case, when the batch size is smaller.\r\n\r\n`File \"/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py\", line 1269, in train\r\n tr_loss += self.training_step(model, inputs)\r\n\r\n File \"/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py\", line 1780, in training_step\r\n loss.backward()\r\n\r\n File \"/home/user/.conda/envs/transformers/lib/python3.8/site-packages/torch/_tensor.py\", line 255, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\r\n\r\n File \"/home/user/.conda/envs/transformers/lib/python3.8/site-packages/torch/autograd/__init__.py\", line 147, in backward\r\n Variable._execution_engine.run_backward(\r\n\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).`", "This has to do with is_global_attn=True, else there is no problem.\r\n\r\nEDIT : downgrading to torch 1.7 works for me", "@patrickvonplaten @ibeltagy could you please advise?\r\n\r\nThanks,\r\nAlessandro", "Hi all,\r\n\r\nThe very same issue `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation` occurred for me during a continued pre-training, i.e., warm-start a Lonformer model from the miniLMv2 checkpoint and contiue training the model with an MLM objective. I use the standard HF script, i.e., `run_mlm.py` provided in the examples. I have an ugly temporary solution down the lines, so please read, if interested.\r\n\r\nI personally altered the tokenization pre-processing to provide custom global attention masks in a every separator token `</s>`, which I aim to use as a paragraph separator:\r\n\r\n```python\r\ndef tokenize_function(examples):\r\n # Remove empty lines\r\n examples[text_column_name] = [\r\n line for line in examples[text_column_name] if len(line) > 0 and not line.isspace()\r\n ]\r\n batch = tokenizer(\r\n examples[text_column_name],\r\n padding=padding,\r\n truncation=True,\r\n max_length=max_seq_length,\r\n # We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it\r\n # receives the `special_tokens_mask`.\r\n return_special_tokens_mask=True,\r\n )\r\n # provide custom global attention mask\r\n batch.data['global_attention_mask'] = [[1 if token_id in [tokenizer.cls_token_id, tokenizer.sep_token_id]\r\n else 0 for token_id in seq] for seq in batch.data['input_ids']]\r\n return batch\r\n```\r\n\r\nAfter 1186 training steps, the aforementioned error occurred...\r\n\r\n# Solution\r\n\r\nIn order to be able to train the model -until there is a proper solution- I \"hacked\" the `Trainer` class in the `train` function, wrapping this part of the code in a try-except block:\r\n\r\nhttps://github.com/huggingface/transformers/blob/010965dcde8ce9526f6a7e6e2c3f36276c153708/src/transformers/trainer.py#L1277-L1286\r\n\r\nI copy-pasted the `trainer.py` in a new personal file `mytrainer.py` and did the following minor update, which moves to the next mini batch (step), while it also zero-out the gradients:\r\n\r\n```python\r\ntry:\r\n if (\r\n ((step + 1) % args.gradient_accumulation_steps != 0)\r\n and args.local_rank != -1\r\n and args._no_sync_in_gradient_accumulation\r\n ):\r\n 
# Avoid unnecessary DDP synchronization since there will be no backward pass on this example.\r\n with model.no_sync():\r\n tr_loss += self.training_step(model, inputs)\r\n else:\r\n tr_loss += self.training_step(model, inputs)\r\nexcept:\r\n tr_loss += 0\r\n logger.warning(f'Issue at training step {step} !!! Training continues...')\r\n model.zero_grad()\r\n continue\r\n```\r\n\r\nI re-run the code, which started from the latest checkpoint `checkpoint-1100` and passed the tricky part successfully:\r\n\r\n```\r\n09/11/2021 20:03:34 - WARNING - mytrainer - Issue at training step 1187 !!! Training continues...\r\n```\r\n\r\nSo far there is not further issue and the training loss is keep decreasing 😄 \r\n\r\n```\r\n{'loss': 4.12, 'learning_rate': 9.724264705882353e-06, 'epoch': 2.19}\r\n{'loss': 4.0383, 'learning_rate': 9.632352941176471e-06, 'epoch': 2.36}\r\n{'loss': 3.8487, 'learning_rate': 9.448529411764707e-06, 'epoch': 2.7}\r\n{'eval_loss': 3.653672456741333, 'eval_runtime': 61.6433, 'eval_samples_per_second': 8.111, 'eval_steps_per_second': 1.022, 'epoch': 3.0}\r\n```\r\n", "@iliaschalkidis thanks for the update. Even thought this goes around the issue, it looks like there is something fundamentally wrong with the current implementation? I hope that @patrickvonplaten or @ibeltagy could comment on this 🙏", "@aleSuglia that's absolutely true and that's why I describe my solution as a \"dirty\" hack trying to avoid seg faults by skipping a few param updates when this weird error occur.\r\n\r\nLet's hope for a real solution in the underlying issue.", "@iliaschalkidis actually, now that you have a try/except in place for that issue, why don't you serialise the faulty batch and share it in a Colab so that @patrickvonplaten or @ibeltagy can play around with it? I think that would be terribly useful to debug!", "The problem comes from LongformerSelfAttention for longformer. If this happens for another model, its probably from its SelfAttention module too.", "@iliaschalkidis any chances to get the faulty batch out of your training?", "Not yet, sorry. I'm currently (pre-)training the models. I'll try to add a save functionality in the `except` handling and save a tricky batch later this week. \r\n\r\nFWIW I agree with @benderama3 ; I also have a feeling that this inconsistency is a by-product of the really complicated attention code, i.e., there are multiple `reshape` and `gather` -like computations with dynamically inferred shapes :P ", "Some other edge cases that I've spotted:\r\n\r\n```\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1024, 46]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).\r\n Variable._execution_engine.run_backward(\r\n Variable._execution_engine.run_backward(\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1024, 37]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. 
Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1024, 43]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).\r\n Variable._execution_engine.run_backward(\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1536, 73]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).\r\n```", "@patrickvonplaten , \r\n\r\nHere's the Colab that I got this problem. Finally got a chance o strip down the notebook code. The error comes up 5 to 10 minutes into training.\r\n\r\n[https://colab.research.google.com/drive/1ZoYJaJZmhygKBEAb5gPm2MaySdFOqgbo?usp=sharing](url)\r\n\r\nError message was:\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 16384, 16]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!", "@Herais thanks for sharing your notebook. I've simplified it to make it easier for people to reproduce the bug and dissect the actual model code: https://colab.research.google.com/drive/13rKxs6Ype0kDEBlnywsGynE2zpzv2CR-#scrollTo=h7k8m9OV8xIR", "cool, thank you.", "@patrickvonplaten @ibeltagy I'm happy to send a PR with the fix. There are some in-place operations that require `clone` to work. Let me know if you're interested!", "@aleSuglia and @Herais thanks for diving into this issue! We would happily welcome a PR to see the code changes and what needs to be fixed. \r\n\r\nThank you!" ]
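The failure mode discussed in these comments — an in-place write to a tensor view that autograd already saved for the backward pass — can be reproduced in isolation. The snippet below is a minimal, model-agnostic sketch (it is not the Longformer attention code) showing why autograd raises this error and how the `clone()`-before-mutation remedy mentioned above avoids it.

```python
import torch

# Minimal reproduction of "one of the variables needed for gradient computation
# has been modified by an inplace operation" — same pattern, toy tensors.
x = torch.randn(4, 3, requires_grad=True)
h = x * 2                 # non-leaf intermediate
v = h.view(12)            # a view; (v * v) saves v for its backward
out = (v * v).sum()
v += 1.0                  # in-place op bumps v's version counter
try:
    out.backward()
except RuntimeError as err:
    print("autograd error:", err)

# Remedy discussed in this thread: clone before mutating, so the tensor that
# autograd saved is left untouched.
x = torch.randn(4, 3, requires_grad=True)
h = x * 2
v = h.view(12)
out = (v * v).sum()
w = v.clone()
w += 1.0                  # mutates the copy only
out.backward()            # succeeds
```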
1,625
1,631
1,631
NONE
null
When I run the Trainer to fine-tune a pretrained Longformer for sequence classification I get the following error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. I'm not sure how to debug this as the error points me to internal processes handled by the trainer: Traceback (most recent call last): File "finetune_longformer_3.py", line 126, in <module> trainer.train() File "/......./conda/envs/diss/lib/python3.8/site-packages/transformers/trainer.py", line 1269, in train tr_loss += self.training_step(model, inputs) File "/....../conda/envs/diss/lib/python3.8/site-packages/transformers/trainer.py", line 1772, in training_step self.scaler.scale(loss).backward() File "/......../conda/envs/diss/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/........./conda/envs/diss/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward Variable._execution_engine.run_backward( any help would be much appreciated!
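One generic way to locate the offending operation (the error hint itself suggests it) is to enable autograd anomaly detection before training. This is a sketch rather than the reporter's exact script; it only assumes the `Trainer` is built as before.

```python
import torch

# Anomaly detection makes the backward error include a trace of the forward
# operation whose output was later modified in place. It slows training
# noticeably, so enable it only while debugging.
torch.autograd.set_detect_anomaly(True)

# ... build the model, datasets and Trainer exactly as before, then:
# trainer.train()
```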
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12613/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12613/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12612
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12612/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12612/comments
https://api.github.com/repos/huggingface/transformers/issues/12612/events
https://github.com/huggingface/transformers/pull/12612
940,863,349
MDExOlB1bGxSZXF1ZXN0Njg2ODcxNzEx
12,612
[Flax] Fix mt5 auto
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Correctly loads MT5 in Flax from auto model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12612/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12612", "html_url": "https://github.com/huggingface/transformers/pull/12612", "diff_url": "https://github.com/huggingface/transformers/pull/12612.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12612.patch", "merged_at": 1625848384000 }
https://api.github.com/repos/huggingface/transformers/issues/12611
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12611/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12611/comments
https://api.github.com/repos/huggingface/transformers/issues/12611/events
https://github.com/huggingface/transformers/pull/12611
940,842,577
MDExOlB1bGxSZXF1ZXN0Njg2ODU0MTg2
12,611
Better heuristic for token-classification pipeline.
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[]
1,625
1,627
1,627
CONTRIBUTOR
null
# What does this PR do? Relooking at the problem makes thing actually much simpler, when we look at ids from a tokenizer, we have no way in **general** to recover if some substring is part of a word or not. However, within the pipeline, with offsets we still have access to the original string, so we can simply look if previous character (if it exists) of a token, is actually a space. This will obviously be wrong for tokenizers that contain spaces within tokens, tokenizers where offsets include spaces too (Don't think there are a lot). If will incorrectly fuse any punctuation too ! (" I am a robot!"). But that is already much better than what currently happens. This heuristic hopefully is fully bc and still can handle non-word based tokenizers. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/11887 Fixes https://github.com/huggingface/transformers/issues/12593 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
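A rough sketch of the heuristic described in this PR body — deciding whether a token continues the previous word by checking, via the offsets, whether the character just before it in the original string is a space. The names and sample offsets below are illustrative, not the actual pipeline code.

```python
def is_word_continuation(text: str, start: int) -> bool:
    """A token starting at `start` is treated as part of the previous word
    when the preceding character in the original string is not a space
    (as noted above, this wrongly fuses punctuation)."""
    return start > 0 and not text[start - 1].isspace()

text = "My name is Wolfgang"
# Hypothetical offsets, e.g. "Wolfgang" split into "Wolf" + "gang" by a subword tokenizer
offsets = [(0, 2), (3, 7), (8, 10), (11, 15), (15, 19)]
for start, end in offsets:
    print(text[start:end], "continues previous word:", is_word_continuation(text, start))
```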
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12611/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12611/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12611", "html_url": "https://github.com/huggingface/transformers/pull/12611", "diff_url": "https://github.com/huggingface/transformers/pull/12611.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12611.patch", "merged_at": 1627309286000 }
https://api.github.com/repos/huggingface/transformers/issues/12610
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12610/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12610/comments
https://api.github.com/repos/huggingface/transformers/issues/12610/events
https://github.com/huggingface/transformers/issues/12610
940,794,893
MDU6SXNzdWU5NDA3OTQ4OTM=
12,610
Unable to load mT5 with FlaxAutoModelForSeq2SeqLM
{ "login": "peregilk", "id": 9079808, "node_id": "MDQ6VXNlcjkwNzk4MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/9079808?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peregilk", "html_url": "https://github.com/peregilk", "followers_url": "https://api.github.com/users/peregilk/followers", "following_url": "https://api.github.com/users/peregilk/following{/other_user}", "gists_url": "https://api.github.com/users/peregilk/gists{/gist_id}", "starred_url": "https://api.github.com/users/peregilk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peregilk/subscriptions", "organizations_url": "https://api.github.com/users/peregilk/orgs", "repos_url": "https://api.github.com/users/peregilk/repos", "events_url": "https://api.github.com/users/peregilk/events{/privacy}", "received_events_url": "https://api.github.com/users/peregilk/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[]
1,625
1,625
1,625
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: TPU VM - Python version: 3.8.10 ### Who can help @patrickvonplaten ## Trying to load mT5 ('google/mt5-small') with FlaxAutoModelForSeq2SeqLM leads to the following error: ``` >>> import transformers >>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained('google/mt5-small') ValueError: Unrecognized configuration class <class 'transformers.models.mt5.configuration_mt5.MT5Config'> for this kind of AutoModel: FlaxAutoModelForSeq2SeqLM. Model type should be one of BartConfig, T5Config. ``` Loading the same model with FlaxT5ForConditionalGeneration works fine. @patrickvonplaten suggested in Slack #flax-jax-community-week that the issue might be caused by missing MT5Config.
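Until the Flax auto mapping recognizes `MT5Config`, the workaround noted in the report itself is to load the checkpoint through the concrete T5 class; a minimal sketch:

```python
from transformers import FlaxT5ForConditionalGeneration

# Loading through the concrete class works even though FlaxAutoModelForSeq2SeqLM
# does not yet accept MT5Config (as reported above).
model = FlaxT5ForConditionalGeneration.from_pretrained("google/mt5-small")
```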
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12610/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12609
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12609/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12609/comments
https://api.github.com/repos/huggingface/transformers/issues/12609/events
https://github.com/huggingface/transformers/pull/12609
940,749,976
MDExOlB1bGxSZXF1ZXN0Njg2Nzc1ODcy
12,609
Fix arg count for partial functions
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
COLLABORATOR
null
# What does this PR do? As pointed out in #12605, the count for the number of arguments in the `model_init` was not working for partial functions. This PR fixes that.
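A sketch of how the argument count can be made partial-aware — unwrap `functools.partial`, count the wrapped function's parameters, and subtract those already bound. This illustrates the idea; it is not necessarily the exact code merged in this PR.

```python
import functools
import inspect

def number_of_arguments(func):
    """Count the parameters still expected by `func`, unwrapping partials so
    that arguments already bound by functools.partial are not counted."""
    if isinstance(func, functools.partial):
        total = len(inspect.signature(func.func).parameters)
        return total - len(func.args) - len(func.keywords)
    return len(inspect.signature(func).parameters)

def model_init(model_base_checkpoint):
    ...

assert number_of_arguments(model_init) == 1
assert number_of_arguments(
    functools.partial(model_init, model_base_checkpoint="bert-base-uncased")
) == 0
```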
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12609/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12609", "html_url": "https://github.com/huggingface/transformers/pull/12609", "diff_url": "https://github.com/huggingface/transformers/pull/12609.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12609.patch", "merged_at": 1625837083000 }
https://api.github.com/repos/huggingface/transformers/issues/12608
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12608/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12608/comments
https://api.github.com/repos/huggingface/transformers/issues/12608/events
https://github.com/huggingface/transformers/pull/12608
940,736,816
MDExOlB1bGxSZXF1ZXN0Njg2NzY0ODY4
12,608
[Flax] Fix cur step flax examples
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
Thanks a mille @m3hrdadfi !
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12608/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12608", "html_url": "https://github.com/huggingface/transformers/pull/12608", "diff_url": "https://github.com/huggingface/transformers/pull/12608.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12608.patch", "merged_at": 1625835088000 }
https://api.github.com/repos/huggingface/transformers/issues/12607
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12607/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12607/comments
https://api.github.com/repos/huggingface/transformers/issues/12607/events
https://github.com/huggingface/transformers/pull/12607
940,684,090
MDExOlB1bGxSZXF1ZXN0Njg2NzE5NjE5
12,607
T5 mlm Flax streaming example
{ "login": "gsarti", "id": 16674069, "node_id": "MDQ6VXNlcjE2Njc0MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/16674069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gsarti", "html_url": "https://github.com/gsarti", "followers_url": "https://api.github.com/users/gsarti/followers", "following_url": "https://api.github.com/users/gsarti/following{/other_user}", "gists_url": "https://api.github.com/users/gsarti/gists{/gist_id}", "starred_url": "https://api.github.com/users/gsarti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsarti/subscriptions", "organizations_url": "https://api.github.com/users/gsarti/orgs", "repos_url": "https://api.github.com/users/gsarti/repos", "events_url": "https://api.github.com/users/gsarti/events{/privacy}", "received_events_url": "https://api.github.com/users/gsarti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,632
1,632
CONTRIBUTOR
null
# Added T5 mlm Flax streaming example This PR adds an example script for T5 MLM Pretraining using 🤗 Datasets streaming feature. A new script `run_mlm_t5_flax_stream.py` is added in the `jax-projects/dataset-streaming` folder, and the `README.md` is updated accordingly with a training example for `t5-small` on the `mc4/en` corpus in streaming mode. As mentioned in the Slack channel, I ran some preliminary tests on mc4/it and I get pretty weird results (train loss converges very early, eval metrics remain very low), possibly due to some problem with the adapted collating/tokenization, so this PR would greatly benefit from reviewing before merging. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Who can review? @patrickvonplaten @patil-suraj
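For context, the streaming mode referenced above can be exercised on mc4 in a couple of lines; this is a minimal sketch, and the exact arguments used by the submitted script may differ.

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset that yields examples lazily
# instead of downloading the full corpus up front.
dataset = load_dataset("mc4", "en", split="train", streaming=True)
sample = next(iter(dataset))
print(sample.keys())
```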
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12607/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12607", "html_url": "https://github.com/huggingface/transformers/pull/12607", "diff_url": "https://github.com/huggingface/transformers/pull/12607.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12607.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12606
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12606/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12606/comments
https://api.github.com/repos/huggingface/transformers/issues/12606/events
https://github.com/huggingface/transformers/issues/12606
940,658,370
MDU6SXNzdWU5NDA2NTgzNzA=
12,606
Remote process received SIGTERM on 96 core tpu-vm during group_text map on datasets
{ "login": "yhavinga", "id": 3098618, "node_id": "MDQ6VXNlcjMwOTg2MTg=", "avatar_url": "https://avatars.githubusercontent.com/u/3098618?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yhavinga", "html_url": "https://github.com/yhavinga", "followers_url": "https://api.github.com/users/yhavinga/followers", "following_url": "https://api.github.com/users/yhavinga/following{/other_user}", "gists_url": "https://api.github.com/users/yhavinga/gists{/gist_id}", "starred_url": "https://api.github.com/users/yhavinga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yhavinga/subscriptions", "organizations_url": "https://api.github.com/users/yhavinga/orgs", "repos_url": "https://api.github.com/users/yhavinga/repos", "events_url": "https://api.github.com/users/yhavinga/events{/privacy}", "received_events_url": "https://api.github.com/users/yhavinga/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I added a notebook that shows that the error occurs during the map of the group_texts function on https://huggingface.co/flax-community/t5-base-dutch/blob/main/Load_token_group_dataset.ipynb", "When executing the above notebook with 8 processing threads on my local machine, there are no errors.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: no - Using distributed or parallel set-up in script?: yes ### Who can help t5: @patrickvonplaten I am using the t5 model with sentencepiece tokenizer trained from scratch : https://huggingface.co/flax-community/t5-small-dutch/blob/main/tokenizer.json ## Information Model I am using (T5): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Following training the tokenizer for t5 and running t5_mlm from: https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/README.md The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Pre-training T5 on Dutch oscar deduplicated nl ## To reproduce Steps to reproduce the behavior: (unfortunately this script will download and process the complete oscar deduplicated nl corpus which will take ~1 hour, apologies!) 1. setup transformer env on 96 core machine like in the example flax language-modeling example README linked above on the latest master 2. git clone https://huggingface.co/flax-community/t5-small-dutch 3. cd t5-small-dutch 4. ln -s ~/transformers/examples/flax/language-modeling/run_t5_mlm_flax.py run_t5_mlm_flax.py 5. ln -s ~/transformers/examples/flax/language-modeling/t5_tokenizer_model.py t5_tokenizer_model.py 6. ./run_t5_oscar.sh and capture output. 7. In the output, look for SIGTERM During preprocessing, this text looks like a child process has crashed. ``` https://symbolize.stripped_domain/r/?trace=526cb0,7f98cb33820f,9222bf&map= *** SIGTERM received by PID 326216 (TID 326216) on cpu 51 from PID 323401; stack trace: *** PC: @ 0x526cb0 (unknown) (unknown) @ 0x7f969b419800 976 (unknown) @ 0x7f98cb338210 (unknown) (unknown) @ 0x9222c0 (unknown) (unknown) https://symbolize.stripped_domain/r/?trace=526cb0,7f969b4197ff,7f98cb33820f,9222bf&map=2a762cd764e70bc90ae4c7f9747c08d7:7f968e4d7000-7f969b758280 E0709 10:28:22.655698 326216 coredump_hook.cc:250] RAW: Remote crash gathering disabled for SIGTERM. E0709 10:28:22.689142 326216 process_state.cc:771] RAW: Raising signal 15 with default behavior ``` ## Expected behavior No processes are expected to crash.
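If the SIGTERM turns out to be resource-related (the follow-up comments note the same notebook runs cleanly with 8 processing threads locally), one thing to try is capping the number of worker processes in the `map` call. This is an illustrative sketch, not the reporter's exact configuration; `group_texts` and the dataset variable are the ones from the preprocessing step of the example script.

```python
# Illustrative only: run the grouping step with fewer worker processes than
# the machine's 96 cores to reduce peak memory / process pressure.
tokenized_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    num_proc=8,   # instead of one process per core
)
```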
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12606/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12605
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12605/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12605/comments
https://api.github.com/repos/huggingface/transformers/issues/12605/events
https://github.com/huggingface/transformers/issues/12605
940,657,283
MDU6SXNzdWU5NDA2NTcyODM=
12,605
🐛 `model_init` fails when it's a partially evaluated function.
{ "login": "Guillem96", "id": 21279306, "node_id": "MDQ6VXNlcjIxMjc5MzA2", "avatar_url": "https://avatars.githubusercontent.com/u/21279306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Guillem96", "html_url": "https://github.com/Guillem96", "followers_url": "https://api.github.com/users/Guillem96/followers", "following_url": "https://api.github.com/users/Guillem96/following{/other_user}", "gists_url": "https://api.github.com/users/Guillem96/gists{/gist_id}", "starred_url": "https://api.github.com/users/Guillem96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Guillem96/subscriptions", "organizations_url": "https://api.github.com/users/Guillem96/orgs", "repos_url": "https://api.github.com/users/Guillem96/repos", "events_url": "https://api.github.com/users/Guillem96/events{/privacy}", "received_events_url": "https://api.github.com/users/Guillem96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I can reproduce. I would argue that this is more a bug in Python than the `Trainer` but will try to find a way to fix this. In the meantime, you should avoid using partials for `model_init` :-)", "Thanks for the quick answer! I'll avoid using partials for now 👍🏼", "Should be fixed in the PR above!" ]
1,625
1,626
1,626
CONTRIBUTOR
null
https://github.com/huggingface/transformers/blob/65e27215ba991450e30aac1bf06f7f4e889e77fb/src/transformers/trainer.py#L908 Inspect module usage here does not take into account the case where the user provides a partially evaluated function, causing an exception. Since the issue is related to the `trainer` API I tag you @sgugger Example: ```python import functools import inspect checkpoint = "..." fn = lambda: AutoModel.from_pretrained(checkpoint) print(len(inspect.signature(fn).parameters)) # Outputs: 0, then no trial expected, everything works fine def fn1(model_base_checkpoint): return AutoModel.from_pretrained(model_base_checkpoint) model_init_fn = functools.partial(fn1, model_base_checkpoint=checkpoint) print(len(inspect.signature(model_init_fn).parameters)) # Outputs 1, then the call_model_init tries to pass the ray or optuna trial which results in # model_init() got multiple values for argument 'model_base_checkpoint' exception ```
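Until the fix landed, the workaround suggested in this thread was to avoid handing the Trainer a partial and wrap it in a zero-argument callable instead; a small sketch, with the checkpoint name kept as a placeholder:

```python
from transformers import AutoModel

checkpoint = "..."  # placeholder, as in the example above

def build_model(model_base_checkpoint):
    return AutoModel.from_pretrained(model_base_checkpoint)

# Workaround: instead of functools.partial(build_model, model_base_checkpoint=checkpoint),
# pass a zero-argument closure, so inspect.signature() reports 0 parameters and
# the Trainer does not try to inject a ray/optuna trial:
model_init = lambda: build_model(checkpoint)
# trainer = Trainer(model_init=model_init, ...)  # rest of the Trainer setup unchanged
```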
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12605/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12605/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12604
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12604/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12604/comments
https://api.github.com/repos/huggingface/transformers/issues/12604/events
https://github.com/huggingface/transformers/pull/12604
940,490,978
MDExOlB1bGxSZXF1ZXN0Njg2NTUzOTY1
12,604
Add LayoutLMv2 + LayoutXLM
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks a lot for adding this model!\r\n> For LayoutXLM, I don't think we need a new page if we can use the same architecture and tokenizer without changes. Just mention on the doc page the architecture does both.\r\n> \r\n> Don't forget to add the model to the main README!\r\n@sgugger \r\n\r\nJust want to point out that LayoutLMv2's tokenizer is subclass of `BertTokenizer `, while LayoutXLM's tokenizer is subclass on `XLMRobertaTokenizer` (and this make LayoutLMv2 cross-lingual)\r\n\r\nAs far as I know, this is the only difference between LayoutLMv2 and LayoutXLM's", "@jasonkit thanks for pointing that out, I will create a separate `LayoutXLMTokenizer` which inherits from `XLMRobertaTokenizer`.\r\n", "Note that is the tokenizer is the same as a `XLMRobertaTokenizer`, you don't need to create a new class, you can just the set the right `tokenizer_class` in the config.", "Hmm ok, I see that this wasn't done for [`LayoutLMTokenizer`](https://github.com/huggingface/transformers/blob/c07334c12e95f18a404d448e6c7d1eee05b8a61e/src/transformers/models/layoutlm/tokenization_layoutlm.py#L46), which was created, but is actually just `BertTokenizer`. Can you point to an example where this was done?", "Sure: there is `BigBirdPegasus` for instance that uses the same tokenizer as `BigBird`: [here](https://huggingface.co/google/bigbird-pegasus-large-arxiv/blob/main/config.json) is an example of config file for a checkpoint of `BigBirdPegasus` that sets the tokenizer class.", "Can't wait to test this ;) Thanks for the community effort! ", "@sgugger after internal discussion, I have created a new `LayoutLMv2Processor`. A `Processor` combines a `FeatureExtractor` (which handles the image-related stuff) and a `Tokenizer` (which handles the text-related stuff). So this is ideal for multi-modal models. Processors have previously been defined for Wav2Vec2 and CLIP. \r\n\r\nHowever, there's a difference between the processors defined for Wav2Vec2/CLIP and the one for LayoutLMv2. The former processors can either be a feature extractor or tokenizer at one particular moment (they are just a wrapper around both). The processor for LayoutLMv2 on the other hand applies both in a sequence, since it first uses the feature extractor to apply OCR on the document images to get words + bounding boxes, which are then provided to the tokenizer, which converts them to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. By combining the feature extractor and the tokenizer, the processor really does everything for the user: you just give it a document image as input, and the inputs required for the model come out. Also note that one can initialize the feature extractor with either `apply_ocr` to `True` or `False`, depending on whether the user wants to apply OCR himself on the document images, or whether he wants to use PyTesseract (which the feature extractor uses by default). For now, there are 5 different use cases for the processor, see the integration tests in `test_processor_layoutlmv2.py` to see them all.\r\n\r\nAlso, an additional feature (which I think people will like), is that one can optionally also provide word-level labels to the processor, and these will then automatically be converted to token-level `labels`. 
You could see it a bit as if `tokenize_and_align` function is incorporated into the processor (actually in the tokenizer - but I assume people could just use the processor).\r\n\r\nHappy to get your review :) as you will see, `LayoutLMv2FeatureExtractor` is fairly minimal, it does two things: 1) resize images to 224x224 and optionally, 2) apply OCR to get words + boxes. `LayoutLMv2Tokenizer` is a bit more extensive (it also handles padding/truncation of token-level bounding boxes etc.). Finally, `LayoutLMv2Processor` makes everything more simple by just having one front-facing API.", "@NielsRogge from what I can tell, the fast tokenizer is no longer supported in this PR. When using the existing impl of LayoutLMv2Tokenizer in the context of token classification/sequence labeling, I've been following the original repos arguments:\r\n```python\r\n padding=\"max_length\",\r\n pad_to_multiple_of=8,\r\n max_length=512,\r\n truncation=True,\r\n return_overflowing_tokens=True,\r\n is_split_into_words=True,\r\n ```\r\n as a means of creating multiple sequences from longer input samples. I believe `return_overflowing_tokens` is unsupported by the tokenizer in this PR without a Fast implementation. Is there a different way to achieve multiple sequences per input sample with the new tokenizer?", "Hi @dcyoung,\r\n\r\nI'm currently working on implementing a fast tokenizer, but the slow tokenizer supports the `return_overflowing_tokens` argument. \r\n\r\nThe API of the tokenizer is a bit more extensive for LayoutLMv2. You can pass a list of words and corresponding (normalized) boxes, and the tokenizer will automatically turn everything into token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. It will also pad/truncate boxes if you specify the relevant arguments. Small example:\r\n\r\n\r\n```\r\nfrom transformers import LayoutLMv2Tokenizer\r\n\r\ntokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\r\n\r\nwords = [\"hello\", \"world\"]\r\nboxes = [[1,2,3,4], [5,6,7,8]]\r\n\r\nencoded_inputs = tokenizer(words, boxes=boxes, return_tensors=\"pt\")\r\n```\r\n\r\nCan you try it out? It will also return overflowing token boxes if you want it to. ", "> Can you try it out? It will also return overflowing token boxes if you want it to.\r\n\r\nYup. That works fine for me. Though, I'm wondering about trying to create batches of sequences from a single \"long\" input sample which overflows the 512 token limit. This is for SER tasks where I'd like to consider every token on a document, requiring splitting the original sequence into multiple 512 token sequences. Previously, the `tokenize_and_align_labels` and `DataCollatorForKeyValueExtraction` implementations accomplished this behavior. I'm curious how best to achieve the same behavior using this new setup. 
\r\n \r\n```python\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\r\n \"microsoft/layoutlmv2-base-uncased\",\r\n )\r\n\r\n n = 2000\r\n words = n * [\"hello\"]\r\n boxes = n * [[1, 2, 3, 4]]\r\n\r\n encoded_inputs = tokenizer(\r\n words,\r\n boxes=boxes,\r\n padding=\"max_length\",\r\n pad_to_multiple_of=8,\r\n max_length=512,\r\n truncation=True,\r\n return_overflowing_tokens=True,\r\n is_split_into_words=True,\r\n return_tensors=\"pt\",\r\n )\r\n print(encoded_inputs.keys())\r\n for k, v in encoded_inputs.items():\r\n print(k, v.size())\r\n```\r\n```bash\r\ndict_keys(['overflowing_tokens', 'overflowing_token_boxes', 'num_truncated_tokens', 'input_ids', 'bbox', 'token_type_ids', 'attention_mask'])\r\noverflowing_tokens torch.Size([1, 1490])\r\noverflowing_token_boxes torch.Size([1, 1490, 4])\r\nnum_truncated_tokens torch.Size([1])\r\ninput_ids torch.Size([1, 512])\r\nbbox torch.Size([1, 512, 4])\r\ntoken_type_ids torch.Size([1, 512])\r\nattention_mask torch.Size([1, 512])\r\n```\r\n\r\nI see now from the outputs above, that the tokenizer does return overflow tokens. However, I don't see the `overflow_to_sample_mapping` KVP which was previously used by `tokenize_and_align_labels`. Does the current tokenizer support this behavior atm? If so, what arguments yield this batching behavior? And if not do you have a suggestion on the easiest way of achieving something similar? \r\n\r\nWould this require splitting the `overflowing_tokens` and `overflowing_token_boxes` into new sequences and manually adding the special tokens, as well as pad the last sample < 512 tokens? Or alternatively, tokenizing without truncation... and use a data collator which splits, and pads? ", "@NielsRogge I took a pass at batching the overflow tokens. In the Processor, i added some logic to modify the `encoded_inputs` like so:\r\n\r\n```python\r\nclass LayoutLMv2Processor:\r\n ...\r\n\r\n def prepare_overflow(self, encoded_inputs: BatchEncoding) -> List[BatchEncoding]:\r\n num_truncated_tokens = max(\r\n 0, int(encoded_inputs.get(\"num_truncated_tokens\", [0])[0])\r\n )\r\n max_source_tokens_per_sample = 510\r\n num_extra_samples = ceil(num_truncated_tokens / max_source_tokens_per_sample)\r\n extra_encoded_inputs = []\r\n for i in range(num_extra_samples):\r\n start_idx = i * max_source_tokens_per_sample\r\n tokens = encoded_inputs[\"overflowing_tokens\"][0][\r\n start_idx : start_idx + max_source_tokens_per_sample\r\n ].tolist()\r\n boxes = encoded_inputs[\"overflowing_token_boxes\"][0][\r\n start_idx : start_idx + max_source_tokens_per_sample\r\n ].tolist()\r\n labels = encoded_inputs[\"overflowing_labels\"][0][\r\n start_idx : start_idx + max_source_tokens_per_sample\r\n ].tolist()\r\n seq_len = len(tokens)\r\n\r\n padded = self.tokenizer._pad(\r\n encoded_inputs={\r\n \"input_ids\": [101] + tokens + [102],\r\n \"bbox\": [[0, 0, 0, 0]] + boxes + [[1000, 1000, 1000, 1000]],\r\n \"token_type_ids\": (2 + seq_len) * [0],\r\n \"labels\": [-100] + labels + [-100],\r\n \"attention_mask\": (2 + seq_len) * [1],\r\n },\r\n max_length=512,\r\n padding_strategy=PaddingStrategy.MAX_LENGTH,\r\n pad_to_multiple_of=8,\r\n return_attention_mask=True,\r\n )\r\n extra_encoded_inputs.append(\r\n {\r\n \"image\": torch.clone(encoded_inputs[\"image\"]),\r\n **{k: torch.tensor(v).unsqueeze(0) for k, v in padded.items()},\r\n }\r\n )\r\n\r\n return extra_encoded_inputs\r\n\r\n```\r\n\r\nHowever, this required adding an additional `overflowing_labels` during tokenization similar to the current calculation of `overflowing_token_boxes` 
or `overflowing_tokens`. This is a small change but easier accomplished in the tokenizer source than after the fact. \r\n\r\nUsing this processor, i am able to generate batches of sequences from a long input sequence. While I haven't had a chance to thoroughly test, I am able to run this batch through the model just fine to produce corresponding logits. Ex: \r\n\r\n```python\r\nencoded_inputs= processor(\r\n img,\r\n words,\r\n boxes=bboxes,\r\n word_labels=word_label_ids,\r\n return_tensors=\"pt\",\r\n padding=\"max_length\",\r\n pad_to_multiple_of=8,\r\n max_length=512,\r\n truncation=True,\r\n return_overflowing_tokens=True,\r\n is_split_into_words=True,\r\n batch_overflow=True,\r\n)\r\nextra_encoded_inputs = processor.prepare_overflow(encoded_inputs)\r\nfor model_inputs in [encoded_inputs] + extra_encoded_inputs:\r\n outputs = model(**model_inputs)\r\n print(\"Predicted Logits: \", outputs.logits.size())\r\n```\r\n\r\nDoes this seem like a reasonable approach, and if so... would it be possible to add the `overflow_labels` changes to the tokenizer? Perhaps you can think of a better abstraction for batching process within the tokenizer itself?", "> I see now from the outputs above, that the tokenizer does return overflow tokens. However, I don't see the overflow_to_sample_mapping KVP which was previously used by tokenize_and_align_labels. Does the current tokenizer support this behavior atm? If so, what arguments yield this batching behavior? And if not do you have a suggestion on the easiest way of achieving something similar?\r\n\r\nThe `overflow_to_sample_mapping` is something that is only supported by fast tokenizers. I'm currently working on `LayoutLMv2TokenizerFast`. I'll merge it with this branch once it's ready. Thanks for your feedback!\r\n\r\n> Are you planning to add the LayoutLMv2/XLMForRelationExtraction models that we can find in the original repo?\r\n\r\nYes, but perhaps in a future PR, because it's not clear to me how they use the model at inference time.\r\n\r\nIf you have other questions, can you please post them elsewhere instead of on this thread? Just to keep this PR a bit clean :) perhaps we can set up a Slack channel to discuss this model. If you can give me your email address, I'll set it up.\r\n\r\nThanks!", "> > I see now from the outputs above, that the tokenizer does return overflow tokens. However, I don't see the overflow_to_sample_mapping KVP which was previously used by tokenize_and_align_labels. Does the current tokenizer support this behavior atm? If so, what arguments yield this batching behavior? And if not do you have a suggestion on the easiest way of achieving something similar?\r\n> \r\n> The `overflow_to_sample_mapping` is something that is only supported by fast tokenizers. I'm currently working on `LayoutLMv2TokenizerFast`. I'll merge it with this branch once it's ready. Thanks for your feedback!\r\n> \r\n> > Are you planning to add the LayoutLMv2/XLMForRelationExtraction models that we can find in the original repo?\r\n> \r\n> Yes, but perhaps in a future PR, because it's not clear to me how they use the model at inference time.\r\n> \r\n> If you have other questions, can you please post them elsewhere instead of on this thread? Just to keep this PR a bit clean :) perhaps we can set up a Slack channel to discuss this model. If you can give me your email address, I'll set it up.\r\n> \r\n> Thanks!\r\n\r\nYou're right about redirecting me to a dedicated channel. 
Here is my email: [email protected].\r\n\r\nThank you!", "> Just wondering whether the model can be used in fp16?\r\n\r\nYes, the model can be used in fp16 (just added a [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb) which uses fp16 with HuggingFace's Trainer)." ]
1,625
1,630
1,630
CONTRIBUTOR
null
# What does this PR do? This PR adds Microsoft's [LayoutLMv2](https://arxiv.org/abs/2012.14740) and [LayoutXLM](https://arxiv.org/abs/2104.08836) models, in PyTorch. The latter is a multilingual version of LayoutLMv2. For now, I have not yet added any documentation related to LayoutXLM, I'm not sure whether we need a new model directory + documentation page for that one, since one can load a LayoutXLM model like so: `model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")`. LayoutLMv2 is an improvement of [LayoutLM](https://huggingface.co/transformers/model_doc/layoutlm.html) (improves SOTA across several benchmarks, including new ones), by incorporating visual, text and layout information to understand scanned documents. [Detectron2](https://github.com/facebookresearch/detectron2) is used for its visual backbone (which is a ResNeXt-FPN). The original repo only has `LayoutLMv2Model` and `LayoutLMv2ForTokenClassification`. However, in the paper they also use the model to classify document images (on RVL-CDIP), and perform visual question answering (on DocVQA). Therefore, I've added `LayoutLMv2ForSequenceClassification` and `LayoutLMv2ForQuestionAnswering`. I've modelled them like they were described in the paper, but there's no official implementation to be found. Fixes #11932 #12194 ## Who can review? @LysandreJik @sgugger To do: - [x] fix tests (there's still one test failing, namely `test_initialization`) => Lysandre would be great if you can help me fix that one. It has to do with one of the layers of the backbone. Integration test is also added. - [x] install Detectron2 + pytesseract to run all tests on CircleCI. - [x] perhaps define custom `ModelOutputs,` as the length of the hidden states and attentions is actually `seq_length + config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1]` instead of just `seq_length`-> update: will add a comment to the "Tips" section in the documentation instead. - [x] write documentation about `LayoutLMv2FeatureExtractor`, `LayoutLMv2Tokenizer` and `LayoutLMv2Processor` - [x] make some more demo notebooks. Notes: - [x] I know some variable names could maybe be named better (like for example `rel_pos_bias` in the configuration). However, if we update the names, then people will not longer be able to easily convert models from the original repo to HuggingFace and vice versa. The authors did use HuggingFace for their entire codebase (they used Transformers, the Trainer, Datasets,...). The model is already uploaded by the authors on the [hub](https://huggingface.co/microsoft/layoutlmv2-base-uncased). - [x] There is still some code included in the modeling file for distributed training, namely to convert to SyncBatchNorm instead of BatchNorm when distributed training is available. I guess these are to be removed? UPDATE: moved to separate method.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12604/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12604/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12604", "html_url": "https://github.com/huggingface/transformers/pull/12604", "diff_url": "https://github.com/huggingface/transformers/pull/12604.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12604.patch", "merged_at": 1630319742000 }
https://api.github.com/repos/huggingface/transformers/issues/12603
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12603/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12603/comments
https://api.github.com/repos/huggingface/transformers/issues/12603/events
https://github.com/huggingface/transformers/issues/12603
940,448,660
MDU6SXNzdWU5NDA0NDg2NjA=
12,603
Facing Issue while loading pytorch model as flax model
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "using `FlaxAutoModelForSeq2SeqLM` solved the issue, It was typing mistake" ]
1,625
1,625
1,625
CONTRIBUTOR
null
I am trying to convert a pytorch model to flax so I can train on a downstream task. I wrote a conversion script like this ```python from transformers import AutoConfig, FlaxAutoModelForMaskedLM config = AutoConfig.from_pretrained("./") model = FlaxAutoModelForMaskedLM.from_pretrained("./", from_pt=True, config=config) model.save_pretrained("./") ``` by taking this [reference](https://huggingface.co/transformers/model_doc/auto.html#transformers.FlaxAutoModelForSeq2SeqLM). These were the logs: ``` Traceback (most recent call last): File "convert_to_flax.py", line 3, in <module> model = FlaxAutoModelForMaskedLM.from_pretrained("./", from_pt=True, config=config) File "/home/bhadresh/transformers/src/transformers/models/auto/auto_factory.py", line 387, in from_pretrained raise ValueError( ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: FlaxAutoModelForMaskedLM. Model type should be one of RobertaConfig, BertConfig, BigBirdConfig, BartConfig, ElectraConfig. ```
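A minimal sketch of the fix mentioned in the comments: T5 is a sequence-to-sequence model, so the seq2seq auto class has to be used instead of `FlaxAutoModelForMaskedLM` (this assumes the checkpoint in `./` is a PyTorch T5 model).

```python
# Minimal sketch of the fix from the comments (assumes "./" holds a PyTorch T5 checkpoint).
from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained("./")
# T5 is registered under the seq2seq auto class, not the masked-LM one.
model = FlaxAutoModelForSeq2SeqLM.from_pretrained("./", from_pt=True, config=config)
model.save_pretrained("./")  # writes flax_model.msgpack next to the PyTorch weights
```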
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12603/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12602
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12602/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12602/comments
https://api.github.com/repos/huggingface/transformers/issues/12602/events
https://github.com/huggingface/transformers/issues/12602
940,427,281
MDU6SXNzdWU5NDA0MjcyODE=
12,602
How to transfer fine-tuned model from python to rust?
{ "login": "amiyamandal-dev", "id": 42173775, "node_id": "MDQ6VXNlcjQyMTczNzc1", "avatar_url": "https://avatars.githubusercontent.com/u/42173775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amiyamandal-dev", "html_url": "https://github.com/amiyamandal-dev", "followers_url": "https://api.github.com/users/amiyamandal-dev/followers", "following_url": "https://api.github.com/users/amiyamandal-dev/following{/other_user}", "gists_url": "https://api.github.com/users/amiyamandal-dev/gists{/gist_id}", "starred_url": "https://api.github.com/users/amiyamandal-dev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amiyamandal-dev/subscriptions", "organizations_url": "https://api.github.com/users/amiyamandal-dev/orgs", "repos_url": "https://api.github.com/users/amiyamandal-dev/repos", "events_url": "https://api.github.com/users/amiyamandal-dev/events{/privacy}", "received_events_url": "https://api.github.com/users/amiyamandal-dev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
NONE
null
# 🚀 Feature request Most of the models on huggingface are in pytorch or tf. I have used BART-Large in rust-bert and found that the overall execution time of BART-Large in rust is significantly lower than in python, but I don't know how to transfer a fine-tuned pytorch model to the rust env. Features needed: 1. Make a copy of all the huggingface models available in rust-bert to gain execution performance. 2. Develop a standard way to transfer a huggingface python model to the rust env.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12602/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12601
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12601/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12601/comments
https://api.github.com/repos/huggingface/transformers/issues/12601/events
https://github.com/huggingface/transformers/issues/12601
940,354,836
MDU6SXNzdWU5NDAzNTQ4MzY=
12,601
Cannot load .pt model using Transformers
{ "login": "14H034160212", "id": 23516191, "node_id": "MDQ6VXNlcjIzNTE2MTkx", "avatar_url": "https://avatars.githubusercontent.com/u/23516191?v=4", "gravatar_id": "", "url": "https://api.github.com/users/14H034160212", "html_url": "https://github.com/14H034160212", "followers_url": "https://api.github.com/users/14H034160212/followers", "following_url": "https://api.github.com/users/14H034160212/following{/other_user}", "gists_url": "https://api.github.com/users/14H034160212/gists{/gist_id}", "starred_url": "https://api.github.com/users/14H034160212/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/14H034160212/subscriptions", "organizations_url": "https://api.github.com/users/14H034160212/orgs", "repos_url": "https://api.github.com/users/14H034160212/repos", "events_url": "https://api.github.com/users/14H034160212/events{/privacy}", "received_events_url": "https://api.github.com/users/14H034160212/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What is your .pt model? Where did you obtain it from?", "While we don't have the complete details following is a possible solution:\r\nYou can initialize your model using transformers and simply load the weights using\r\n`model.load_state_dict(torch.load('model.pt'))`\r\n\r\nIn case this is not what you're looking for please add further details.", "Hi @Ap1075, Thanks for your reply. It is working to load the `model.pt` if I define the `model` class, but do you know if I want to load the tokenizer from the `model.pt`. How can I do that? For example, I can load the tokenizer by this way from huggingface `tokenizer = AutoTokenizer.from_pretrained(pretrained_model, do_lower_case=True)`, but I cannot do that if the `pretrained_model='model.pt'`.", "Also, for this command, `model.load_state_dict(torch.load('model.pt'))`, if what is the `model` from `model.load_state_dict()`? How to define the model here? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,630
1,630
NONE
null
Hi, I want to use Transformers to load .pt model. How can I do that? I know how to load .bin using transformers, but I do not know how to load .pt model using transformers. Thanks. config = config_class.from_pretrained( args.config_name if args.config_name else args.model_name_or_path, num_labels=num_labels, finetuning_task=args.task_name, cache_dir=args.cache_dir if args.cache_dir else None, ) tokenizer = tokenizer_class.from_pretrained( args.tokenizer_name if args.tokenizer_name else args.model_name_or_path, do_lower_case=args.do_lower_case, cache_dir=args.cache_dir if args.cache_dir else None, ) model = model_class.from_pretrained( args.model_name_or_path, from_tf=bool(".ckpt" in args.model_name_or_path), config=config, cache_dir=args.cache_dir if args.cache_dir else None, )
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12601/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12600
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12600/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12600/comments
https://api.github.com/repos/huggingface/transformers/issues/12600/events
https://github.com/huggingface/transformers/issues/12600
940,324,452
MDU6SXNzdWU5NDAzMjQ0NTI=
12,600
Custom tokenizer from Tokenizers library
{ "login": "darwinharianto", "id": 44696192, "node_id": "MDQ6VXNlcjQ0Njk2MTky", "avatar_url": "https://avatars.githubusercontent.com/u/44696192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/darwinharianto", "html_url": "https://github.com/darwinharianto", "followers_url": "https://api.github.com/users/darwinharianto/followers", "following_url": "https://api.github.com/users/darwinharianto/following{/other_user}", "gists_url": "https://api.github.com/users/darwinharianto/gists{/gist_id}", "starred_url": "https://api.github.com/users/darwinharianto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darwinharianto/subscriptions", "organizations_url": "https://api.github.com/users/darwinharianto/orgs", "repos_url": "https://api.github.com/users/darwinharianto/repos", "events_url": "https://api.github.com/users/darwinharianto/events{/privacy}", "received_events_url": "https://api.github.com/users/darwinharianto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "https://github.com/huggingface/transformers/issues/11722\r\n\r\nfound solution here" ]
1,625
1,626
1,626
NONE
null
Hi, thank you for the library. I have a few questions regarding training from scratch. I used this as a reference on how to train a new language model: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb I wanted to train XLNet on my corpus. First, I could train a tokenizer with the tokenizers library: ``` tokenizer = SentencePieceUnigramTokenizer() tokenizer._tokenizer.normalizer = Sequence([NFKC(), Replace("\n", "")]) tokenizer.train(files=paths, vocab_size=16000, special_tokens=[ "<s>", "</s>", "<pad>", "<mask>", "<unk>", ]) tokenizer.save_model("./new_tokenizer") ``` Then I have to use the transformers library to train: ``` from transformers import XLNetConfig config = XLNetConfig( vocab_size=16000, ) from transformers import XLNetTokenizerFast tokenizer = XLNetTokenizerFast.from_pretrained("./new_tokenizer", max_len=512) ``` This throws an "Is a directory" error: ``` terminate called after throwing an instance of 'std:: iOS failure' what(): basic filebuf::underflow error reading the file: Is a directory Aborted (core dumped) ``` How do I load my trained tokenizer?
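One possible workaround (an assumption, not necessarily the fix adopted in the linked issue #11722): save the trained tokenizer as a single `tokenizer.json` file and wrap it with `PreTrainedTokenizerFast`, rather than pointing `XLNetTokenizerFast.from_pretrained` at the output directory.

```python
# Hypothetical sketch: "corpus.txt" is a placeholder for the training files.
from tokenizers import SentencePieceUnigramTokenizer
from transformers import PreTrainedTokenizerFast

tokenizer = SentencePieceUnigramTokenizer()
tokenizer.train(
    files=["corpus.txt"],
    vocab_size=16000,
    special_tokens=["<s>", "</s>", "<pad>", "<mask>", "<unk>"],
)
# Serialize the whole tokenizer (model + normalizer + pre-tokenizer) to one JSON file.
tokenizer.save("tokenizer.json")

# Wrap it so it can be used anywhere a transformers tokenizer is expected.
wrapped = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",
    bos_token="<s>",
    eos_token="</s>",
    pad_token="<pad>",
    mask_token="<mask>",
    unk_token="<unk>",
    model_max_length=512,
)
wrapped.save_pretrained("./new_tokenizer")
```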
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12600/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12599
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12599/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12599/comments
https://api.github.com/repos/huggingface/transformers/issues/12599/events
https://github.com/huggingface/transformers/pull/12599
940,314,409
MDExOlB1bGxSZXF1ZXN0Njg2NDA1NTMz
12,599
Point to the right file for hybrid CLIP
{ "login": "edugp", "id": 17855740, "node_id": "MDQ6VXNlcjE3ODU1NzQw", "avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edugp", "html_url": "https://github.com/edugp", "followers_url": "https://api.github.com/users/edugp/followers", "following_url": "https://api.github.com/users/edugp/following{/other_user}", "gists_url": "https://api.github.com/users/edugp/gists{/gist_id}", "starred_url": "https://api.github.com/users/edugp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edugp/subscriptions", "organizations_url": "https://api.github.com/users/edugp/orgs", "repos_url": "https://api.github.com/users/edugp/repos", "events_url": "https://api.github.com/users/edugp/events{/privacy}", "received_events_url": "https://api.github.com/users/edugp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,626
1,626
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12599/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12599", "html_url": "https://github.com/huggingface/transformers/pull/12599", "diff_url": "https://github.com/huggingface/transformers/pull/12599.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12599.patch", "merged_at": 1626072383000 }
https://api.github.com/repos/huggingface/transformers/issues/12598
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12598/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12598/comments
https://api.github.com/repos/huggingface/transformers/issues/12598/events
https://github.com/huggingface/transformers/issues/12598
940,275,140
MDU6SXNzdWU5NDAyNzUxNDA=
12,598
`tokenizer.special_tokens_map` has stringified list for "additional_special_tokens" value.
{ "login": "erip", "id": 2348806, "node_id": "MDQ6VXNlcjIzNDg4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/2348806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/erip", "html_url": "https://github.com/erip", "followers_url": "https://api.github.com/users/erip/followers", "following_url": "https://api.github.com/users/erip/following{/other_user}", "gists_url": "https://api.github.com/users/erip/gists{/gist_id}", "starred_url": "https://api.github.com/users/erip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/erip/subscriptions", "organizations_url": "https://api.github.com/users/erip/orgs", "repos_url": "https://api.github.com/users/erip/repos", "events_url": "https://api.github.com/users/erip/events{/privacy}", "received_events_url": "https://api.github.com/users/erip/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "That seems like an issue indeed! Pinging @SaulLu ", "Thank you for your issue @erip! This seems to be a bug to me as well, I just opened a PR #12759 that should solve this problem. Now the command:\r\n```python\r\nfrom transformers import AutoTokenizer\r\nm = AutoTokenizer.from_pretrained('xlm-roberta-base')\r\nm.add_special_tokens({\"additional_special_tokens\": [\"<space>\"]})\r\nprint(m.special_tokens_map['additional_special_tokens'] == ['<space>'])\r\n```\r\nshould output:\r\n```\r\nTrue\r\n```" ]
1,625
1,626
1,626
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.8.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik maybe? ## Information Model I am using (Bert, XLNet ...): XLMRoberta The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. `>>> from transformers import XLMRobertaTokenizer` 2. `>>> m = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')` 3. `>>> m.add_special_tokens({"additional_special_tokens": ["<space>"]})` 4. `>>> m.special_tokens_map['additional_special_tokens'] == "['<space>']" # True` ## Expected behavior The value should be a list containing the special characters. :-) The work-around is to use the `additional_special_tokens` attribute directly.
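A short sketch of the workaround mentioned above: read the tokens back from the `additional_special_tokens` attribute, which stays a real Python list, instead of `special_tokens_map`, which returns the stringified list on affected versions.

```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["<space>"]})

# Workaround for the bug: this attribute keeps the actual list of strings.
assert tokenizer.additional_special_tokens == ["<space>"]
```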
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12598/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12597
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12597/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12597/comments
https://api.github.com/repos/huggingface/transformers/issues/12597/events
https://github.com/huggingface/transformers/pull/12597
940,208,596
MDExOlB1bGxSZXF1ZXN0Njg2MzE0Njgz
12,597
[doc] fix broken ref
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
CONTRIBUTOR
null
add missing `:` @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12597/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12597", "html_url": "https://github.com/huggingface/transformers/pull/12597", "diff_url": "https://github.com/huggingface/transformers/pull/12597.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12597.patch", "merged_at": 1625778661000 }
https://api.github.com/repos/huggingface/transformers/issues/12596
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12596/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12596/comments
https://api.github.com/repos/huggingface/transformers/issues/12596/events
https://github.com/huggingface/transformers/pull/12596
940,169,244
MDExOlB1bGxSZXF1ZXN0Njg2MjgxMDgy
12,596
Translate README.md to Simplified Chinese
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The Chinese translations looks good to me!", "LGTM.", "@JetRunner I can help with the translation of Traditional Chinese. Would you like me submitting a PR or any kinds of assistance?", "> @JetRunner I can help with the translation of Traditional Chinese. Would you like me submitting a PR or any kinds of assistance?\r\n\r\nSure! Please do so. I recommend you convert the simplified Chinese version to traditional Chinese with a software and then polish it (e.g., replace `软件` with `軟體`) - in this way we can keep the two versions consistent (which is desirable for future maintenance).\r\n\r\nThanks a lot for your help! @qqaatw ", "@JetRunner No problem, I'll work on this soon. Once the PR is opened, I'll ping you for reviewing." ]
1,625
1,626
1,626
CONTRIBUTOR
null
This is part of the Hugging Face document translation project. I would appreciate it if anyone (from Hong Kong / Taiwan) could help verify the traditional Chinese version (which is still WIP).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12596/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12596/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12596", "html_url": "https://github.com/huggingface/transformers/pull/12596", "diff_url": "https://github.com/huggingface/transformers/pull/12596.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12596.patch", "merged_at": 1626110394000 }
https://api.github.com/repos/huggingface/transformers/issues/12595
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12595/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12595/comments
https://api.github.com/repos/huggingface/transformers/issues/12595/events
https://github.com/huggingface/transformers/pull/12595
940,153,470
MDExOlB1bGxSZXF1ZXN0Njg2MjY3Njk2
12,595
[Flax] Add flax marian
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
This PR adds Flax Marian
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12595/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12595/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12595", "html_url": "https://github.com/huggingface/transformers/pull/12595", "diff_url": "https://github.com/huggingface/transformers/pull/12595.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12595.patch", "merged_at": 1625827333000 }
https://api.github.com/repos/huggingface/transformers/issues/12594
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12594/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12594/comments
https://api.github.com/repos/huggingface/transformers/issues/12594/events
https://github.com/huggingface/transformers/issues/12594
940,104,434
MDU6SXNzdWU5NDAxMDQ0MzQ=
12,594
GPT-2 asking for Padding Token
{ "login": "MarcM0", "id": 30278842, "node_id": "MDQ6VXNlcjMwMjc4ODQy", "avatar_url": "https://avatars.githubusercontent.com/u/30278842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarcM0", "html_url": "https://github.com/MarcM0", "followers_url": "https://api.github.com/users/MarcM0/followers", "following_url": "https://api.github.com/users/MarcM0/following{/other_user}", "gists_url": "https://api.github.com/users/MarcM0/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarcM0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarcM0/subscriptions", "organizations_url": "https://api.github.com/users/MarcM0/orgs", "repos_url": "https://api.github.com/users/MarcM0/repos", "events_url": "https://api.github.com/users/MarcM0/events{/privacy}", "received_events_url": "https://api.github.com/users/MarcM0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "```python\r\ntokenizer.pad_token = tokenizer.eos_token\r\n```\r\n\r\nis the recommended way to fix the warning :-) ", "alright thank you, so eos instead of unknown right?", "Upon further research, it seems they default to the same thing anyways https://huggingface.co/transformers/model_doc/gpt2.html" ]
1,625
1,625
1,625
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Windows 10 (Google Collab) - Python version: Python 3.6.9 - PyTorch version (GPU?):1.8.1+cu102 - Tensorflow version (GPU?): NA - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Models: - gpt2: @patrickvonplaten, @LysandreJik Library: - tokenizers: @LysandreJik ## Information Model I am using (gpt2-medium): The problem arises when using: Trainer, DataCollatorForLanguageModeling,GPT2Tokenizer The tasks I am working on is: * Triplets is a series of sequences I want gpt2 to train on The Error: ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`. I put in this line which seems to fix the issue `tokenizer.pad_token = tokenizer.unk_token ` but I'm not sure if it makes sense for gpt-2 ## To reproduce Steps to reproduce the behavior: Make a csv with column title "triplet" then anything below Run the following code in google collab ---------------------------------------------------------------------------------------------------------------- ``` !pip install pandas !pip install transformers !pip install datasets !pip3 install torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html import pandas as pd from transformers import GPT2LMHeadModel, GPT2Tokenizer import numpy as np import random import torch from torch.utils.data import Dataset, DataLoader from transformers import GPT2Tokenizer, GPT2LMHeadModel, AdamW, get_linear_schedule_with_warmup,AutoTokenizer, DataCollatorForLanguageModeling, AutoConfig, Trainer, TrainingArguments,AutoModelForCausalLM from tqdm import tqdm, trange import torch.nn.functional as F import csv from datasets import load_dataset,load_metric import io from google.colab import files print("upload 'train.csv'") uploaded = files.upload() #version of gpt we use model_version = 'gpt2-medium' #create the dataset raw_datasets = load_dataset('csv', data_files=['train.csv']) #raw_datasets["validation"] = (load_dataset('csv', data_files=['validate.csv']))["train"] print(raw_datasets) print(raw_datasets["train"][1]) #initialize tokenizer and model tokenizer = GPT2Tokenizer.from_pretrained(model_version) model = GPT2LMHeadModel.from_pretrained(model_version) #vvv this makes the error go away but it doesn't seem to produce a proper attention task #tokenizer.pad_token = tokenizer.unk_token #prevents error where there is no token. Doesn't matter since I pad properly in the collator? 
#https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16#training-script #helper for tokenizing everything def tokenize_function(examples): return tokenizer(examples["triplet"], truncation=True) #tokenize all our data tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) #gets rid of original string data tokenized_datasets=tokenized_datasets.remove_columns(["triplet"]) print(tokenized_datasets) print(tokenized_datasets["train"]["input_ids"][1]) #collate data data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) #training args (you can control hyperprarameters from here, I just put output directory) training_args = TrainingArguments(("Finetuned")) trainer = Trainer( model, training_args, train_dataset=tokenized_datasets["train"], #eval_dataset=tokenized_datasets["validation"], #compute_metrics=compute_metrics, data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() trainer.save_model() ```
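The fix recommended in the comments, shown in context as a short sketch: GPT-2 ships without a dedicated padding token, so the EOS token is reused for padding before the collator is built.

```python
from transformers import GPT2Tokenizer, DataCollatorForLanguageModeling

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token  # reuse <|endoftext|> as the pad token

# The collator can now pad batches for causal LM training without raising the error.
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```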
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12594/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12593
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12593/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12593/comments
https://api.github.com/repos/huggingface/transformers/issues/12593/events
https://github.com/huggingface/transformers/issues/12593
940,079,275
MDU6SXNzdWU5NDAwNzkyNzU=
12,593
XLM-RoBERTa NER extraction breaks/splitting the words !
{ "login": "dummynov1", "id": 83328005, "node_id": "MDQ6VXNlcjgzMzI4MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/83328005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dummynov1", "html_url": "https://github.com/dummynov1", "followers_url": "https://api.github.com/users/dummynov1/followers", "following_url": "https://api.github.com/users/dummynov1/following{/other_user}", "gists_url": "https://api.github.com/users/dummynov1/gists{/gist_id}", "starred_url": "https://api.github.com/users/dummynov1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dummynov1/subscriptions", "organizations_url": "https://api.github.com/users/dummynov1/orgs", "repos_url": "https://api.github.com/users/dummynov1/repos", "events_url": "https://api.github.com/users/dummynov1/events{/privacy}", "received_events_url": "https://api.github.com/users/dummynov1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Cc @Narsil ", "Hi @dummynov1 ,\r\n\r\nYou are using `grouped_entities` which will only to attempt to fuse *valid* entities (B-PER, I-PER, I-PER). Any break in that structure won't get merged and you might break words up.\r\n\r\nWe recently added other aggregation strategies ( https://huggingface.co/transformers/main_classes/pipelines.html?highlight=aggregation_strategy#transformers.TokenClassificationPipeline ) but they only work for word aware tokenizers (which is not the case of roberta).\r\n\r\nYour issue is not isolated, so I actually looked into it, and I think I figured a better heuristic that you could end up using: https://github.com/huggingface/transformers/pull/12611", "> Hi @dummynov1 ,\r\n> \r\n> You are using `grouped_entities` which will only to attempt to fuse _valid_ entities (B-PER, I-PER, I-PER). Any break in that structure won't get merged and you might break words up.\r\n> \r\n> We recently added other aggregation strategies ( https://huggingface.co/transformers/main_classes/pipelines.html?highlight=aggregation_strategy#transformers.TokenClassificationPipeline ) but they only work for word aware tokenizers (which is not the case of roberta).\r\n> \r\n> Your issue is not isolated, so I actually looked into it, and I think I figured a better heuristic that you could end up using: #12611\r\n\r\nCould you elaborate, what needs to be done to fix this.? Should i use the aggregation strategies, but i'm using transformers 4.6.0 (have to use this version only, due to other dependencies).", "You won't be able to fix it correctly in a super reliable way. Simply because `xlm` doesn't know what a \"word\" is.\r\n**The only real fix you can do is make the model better by more finetuning, with more data probably. (To get correct tags on all your tokens)**\r\n\r\nThat being with the proposed PR you will be able to have a bit of a better heuristic that might be good enough for you:\r\nyou will be able to write:\r\n\r\n```python\r\nfrom transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification\r\n \r\ntokenizer = AutoTokenizer.from_pretrained(\"xlm-roberta-large-finetuned-conll03-english\")\r\nmodel = AutoModelForTokenClassification.from_pretrained(\"xlm-roberta-large-finetuned-conll03-english\")\r\n\r\nner_model = pipeline(\"ner\", model = model, tokenizer = tokenizer, aggregation_strategy = \"max\")\r\n\r\ntext = \"Brennan Nov2018\"\r\nner_model(text)\r\n```\r\n\r\nBecause `xlm` doesn't know words, everything non space will be treated as a word\r\n\r\n- \"Brenn\" \"ann\" will be fused as intended\r\n- \"Some\", \"one\", \",\" (\"Someone,\" ) too unfortunately.\r\n- Any punctuation within string really. Any formatting within `yaml`, `json`, `markdown` etc..", "@Narsil Could you advise if there is a model on HuggingFace hub that is \"word-aware\"? I am not sure if I understand it properly, but in my mind, none of the BERT models are actually \"word-aware\".\r\n\r\nI struggled with this problem earlier last year, and did a lot of search online without a solution. I ended up with an ugly patch downstream to absorb this problem. So thanks for making some improvements to the pipelines.", "Hi @ninjalu,\r\n\r\nDo you mind explaining a little more what your issue is ?\r\nWithout context it's a bit hard to guide you correctly.\r\n\r\nTokenizers \"word-aware\" are the ones with `continuing_subword_prefix` set (`tokenizer.backend_tokenizer.model.continuing_subword_prefix` variable, if it exists). 
But most likely you shouldn't choose a tokenizer based purely on this, but probably first on considerations like what data it was trained on and the leveraging you can use in the underlying model (if you're doing fine-tuning for instance, it's better to pick a good model for your target data/langage than starting the whole model+tokenizer from scratch)" ]
1,625
1,647
1,627
NONE
null
I have been using the huggingface xlm-roberta-large-finetuned-conll03-english model NER pipeline for extracting Names, Location and Organization Entities. But i'm facing an issue now and then with certain entity extraction from short sentences where a word is broken down into sub-word tokens with different entity types. Code used as below ``` from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english") model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english") ner_model = pipeline("ner", model = model, tokenizer = tokenizer, grouped_entities = True) text = "Brennan Nov2018" ner_model(text) ``` output: ` [ { "entity_group": "PER", "score": 0.6225427985191345, "word": "Brenn", "start": 0, "end": 5 }, { "entity_group": "LOC", "score": 0.759472668170929, "word": "an", "start": 5, "end": 7 } ] ` ![image](https://user-images.githubusercontent.com/83328005/124965311-9dcdbe00-e03f-11eb-927f-904ee641d564.png) Even though i'm using `grouped_entities = True` , i'm still getting some words broken down into 2 different entity groups. Is there a way to prevent this to happen and to return only complete words as entity ? - PyTorch Version : 1.7.1 - transformers : 4.6.0 - Python : 3.8.5
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12593/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12593/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12592
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12592/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12592/comments
https://api.github.com/repos/huggingface/transformers/issues/12592/events
https://github.com/huggingface/transformers/pull/12592
940,013,701
MDExOlB1bGxSZXF1ZXN0Njg2MTQ4NDYw
12,592
Add Flax sprint project evaluation section
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
Add a section on the project evaluation. As more jury members get confirmed, we can extend the list.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12592/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12592", "html_url": "https://github.com/huggingface/transformers/pull/12592", "diff_url": "https://github.com/huggingface/transformers/pull/12592.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12592.patch", "merged_at": 1625813550000 }
https://api.github.com/repos/huggingface/transformers/issues/12591
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12591/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12591/comments
https://api.github.com/repos/huggingface/transformers/issues/12591/events
https://github.com/huggingface/transformers/pull/12591
939,950,372
MDExOlB1bGxSZXF1ZXN0Njg2MDkzOTQ3
12,591
Fix MT5 init
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
COLLABORATOR
null
# What does this PR do? This PR fixes the MT5 init to make sure to always have the tokenizer available (even if tokenizers or sentencepiece is not available). Fixes #12588
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12591/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12591", "html_url": "https://github.com/huggingface/transformers/pull/12591", "diff_url": "https://github.com/huggingface/transformers/pull/12591.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12591.patch", "merged_at": 1625757138000 }
https://api.github.com/repos/huggingface/transformers/issues/12590
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12590/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12590/comments
https://api.github.com/repos/huggingface/transformers/issues/12590/events
https://github.com/huggingface/transformers/pull/12590
939,937,714
MDExOlB1bGxSZXF1ZXN0Njg2MDgzMTA1
12,590
flax model parallel training
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is typically done using using various load balancing methods, e.g. deepspeed pipe has:\r\nhttps://www.deepspeed.ai/tutorials/pipeline/#load-balancing-pipeline-modules\r\n\r\npytorch has these too but I can't find any mentions of these in their docs. \r\nHave to go to the source:\r\nhttps://github.com/pytorch/pytorch/blob/58adaaba60441c1ed59f35389598aabf91a772dd/torch/distributed/pipeline/sync/_balance/__init__.py\r\n```\r\ndef balance_cost\r\ndef balance_by_time\r\ndef balance_by_size\r\n```\r\nIs that what you're referring to, @patrickvonplaten \r\n\r\n" ]
1,625
1,626
1,626
MEMBER
null
# What does this PR do? Adds model parallel training example for GPTNeo using jax's [`pjit`](https://jax.readthedocs.io/en/latest/jax.experimental.pjit.html) transformation. (This example probably just works on a single TPU v3-8). This should enable training bigger models like 1.3B GPTNeo on a single TPU V3-8. The `partition.py` file defines the `PyTree` of the `PartitionSpec` file which describes how the model parameters will be sharded. The actual sharding is automatically handled by `pjit`. The key idea is to `pjit` the entire training step function. To do that we - Define the mesh structure. - Define `PartitionSpec` for every input argument and return value of the pjitted function. The axis names that are used here should match the axis names used in `PartitionSpec`. This means we need the spec for our parameter and optimizer state PyTrees - The structure of the `PyTree` of `PartitionSpec` needs to match the structure of the `PyTree` of the actual values. - Call the pijitted fun in a mesh context. Below is a not-so minimal code-snippet that describes the approach ```python # init our model model = FlaxGPTNeoForCausalLM.from_pretrained("gpt-neo-125M") # get the partition spec for model params param_spec = set_partitions(unfreeze(model.params)) # get optimizer optim = optax.adamw(learning_rate=decay_fn) # mesh defination mesh_devices = np.array(jax.devices()).reshape(1, jax.local_device_count()) def get_initial_state(params): state = optim.init(params) return tuple(state), params # init optim in abstract way, this just returns the PyTree of opt_state with shapes # so we can get the PartitionSpec for opt_state using the tree shapes = jax.tree_map(lambda x: x.shape, model.params) state = jax.eval_shape(get_initial_state, shapes) # Get the opt spec def get_opt_spec(x): if isinstance(x, dict): return param_spec return None opt_state_spec, param_spec = jax.tree_map( get_opt_spec, state, is_leaf=lambda x: isinstance(x, (dict, optax.EmptyState)) ) # Now actually initialize the opt state # this also takes care of sharding the opt and param state according to the spec. p_get_initial_state = pjit( get_initial_state, in_axis_resources=None, out_axis_resources=(opt_state_spec, param_spec), ) with mesh(mesh_devices, ("dp", "mp")): opt_state, params = p_get_initial_state(freeze(model.params)) # define out train step def train_step(params, opt_state, dropout_rng, batch): .... return new_params, tuple(new_opt_state), new_dropout_rng, metrics # pjit the train step # in_axis_resources and out_axis_resources expect the PartitionSpec # for every input argument and return values p_train_step = pjit( train_step, in_axis_resources=(param_spec, opt_state_spec, None, None), out_axis_resources=(param_spec, opt_state_spec, None, None), ) # do the training with mesh(mesh_devices, ("dp", "mp")): params, state, loss, rng = p_train_step(params, opt_state, ...) ``` As we can see above, all the sharding logic is outside of the model definition, so ideally we don't need to modify the modeling code. This also means it should be possible to apply this to any other model by defining the right `PyTree` of `PartitionSpec`. A few things to consider for future work. - A convenient way to get the PyTree of model parameters, so we can define the partition spec. - Currently, model weights are initialized when the model class is instantiated. This can cause problems for models that cannot fit on one device. There should be an option to abstractly initialize the model without having to initialize the weights. 
This will also allow a convenient way to get the PyTree. - The `from_pretrained` method also directly puts the weights on the device; we need to consider either sharded loading or initially loading the weights on the CPU and then sharding them onto the devices, to avoid OOM with huge models.
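For illustration, here is a minimal sketch of what a `set_partitions` helper like the one used above could look like. The import paths, the name-matching rule, and the choice to shard only 2-D dense kernels over the `mp` axis are assumptions made for this sketch, not the actual `partition.py` from the PR.

```python
# Hypothetical sketch: build a PyTree of PartitionSpecs that mirrors the params PyTree.
from flax.core.frozen_dict import freeze
from flax.traverse_util import flatten_dict, unflatten_dict
from jax.experimental import PartitionSpec as P


def set_partitions(params):
    """Return a PyTree with the same structure as `params` (an unfrozen dict,
    e.g. unfreeze(model.params)), holding a PartitionSpec per parameter."""
    flat = flatten_dict(params)
    specs = {}
    for path, value in flat.items():
        if path[-1] == "kernel" and value.ndim == 2:
            # shard the output dimension of dense kernels over the "mp" mesh axis
            specs[path] = P(None, "mp")
        else:
            # replicate everything else (biases, layer norms, embeddings, ...)
            specs[path] = None
    return freeze(unflatten_dict(specs))
```

The returned PyTree has the same structure as the parameters, which is exactly what `pjit`'s `in_axis_resources`/`out_axis_resources` arguments expect.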
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12590/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12590", "html_url": "https://github.com/huggingface/transformers/pull/12590", "diff_url": "https://github.com/huggingface/transformers/pull/12590.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12590.patch", "merged_at": 1626283544000 }
https://api.github.com/repos/huggingface/transformers/issues/12589
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12589/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12589/comments
https://api.github.com/repos/huggingface/transformers/issues/12589/events
https://github.com/huggingface/transformers/issues/12589
939,911,642
MDU6SXNzdWU5Mzk5MTE2NDI=
12,589
Git LFS bug when uploading to hub
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You need to explicitly `git lfs track` the files (of file name patterns) that are to be stored in LFS", "Here it might be your tfevent files, so you can `git lfs track \"*tfevents*\"`?", "Solved it by doing the following.\r\n\r\n\r\n1.Tracking the file with LFS \r\n```\r\ngit lfs track filename\r\n```\r\n2 Assuring that it is tracked by \r\n```\r\ngit lfs status\r\n```\r\n3.\r\n```\r\ngit lfs migrate import --include=\"*.v2\"\r\n```" ]
1,625
1,625
1,625
NONE
null
After running a MLM for 69000 steps https://huggingface.co/birgermoell/roberta-swedish-scandi/tree/main the model crashed and now I get an error when trying to upload to the hub. The same error was responsible for stopping the training. ``` Uploading LFS objects: 100% (2/2), 998 MB | 0 B/s, done. Enumerating objects: 13, done. Counting objects: 100% (13/13), done. Delta compression using up to 96 threads Compressing objects: 100% (9/9), done. Writing objects: 100% (9/9), 33.99 KiB | 308.00 KiB/s, done. Total 9 (delta 3), reused 0 (delta 0) remote: ------------------------------------------------------------------------- remote: Your push was rejected because it contains files larger than 10M. remote: Please use https://git-lfs.github.com/ to store larger files. remote: ------------------------------------------------------------------------- remote: Offending files: remote: - events.out.tfevents.1625668537.t1v-n-98937c84-w-0.121638.3.v2 (ref: refs/heads/main) To https://huggingface.co/birgermoell/roberta-swedish-scandi ! [remote rejected] main -> main (pre-receive hook declined) error: failed to push some refs to 'https://huggingface.co/birgermoell/roberta-swedish-scandi' ``` Git lfs is installed in the repository. Perhaps the files stored in git are too large?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12589/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12588
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12588/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12588/comments
https://api.github.com/repos/huggingface/transformers/issues/12588/events
https://github.com/huggingface/transformers/issues/12588
939,903,314
MDU6SXNzdWU5Mzk5MDMzMTQ=
12,588
'MT5Tokenizer' is not defined (on Google colab)
{ "login": "rashmibanthia", "id": 4139133, "node_id": "MDQ6VXNlcjQxMzkxMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/4139133?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rashmibanthia", "html_url": "https://github.com/rashmibanthia", "followers_url": "https://api.github.com/users/rashmibanthia/followers", "following_url": "https://api.github.com/users/rashmibanthia/following{/other_user}", "gists_url": "https://api.github.com/users/rashmibanthia/gists{/gist_id}", "starred_url": "https://api.github.com/users/rashmibanthia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rashmibanthia/subscriptions", "organizations_url": "https://api.github.com/users/rashmibanthia/orgs", "repos_url": "https://api.github.com/users/rashmibanthia/repos", "events_url": "https://api.github.com/users/rashmibanthia/events{/privacy}", "received_events_url": "https://api.github.com/users/rashmibanthia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed it for now - `!pip install git+https://github.com/huggingface/transformers.git@b29c394` ", "Should be fixed now, thanks for reporting!" ]
1,625
1,625
1,625
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: `transformers-4.9.0.dev0` - Platform: Google Colab - Python version: `3.7.10` - PyTorch version (GPU?): `1.9.0+cu102` - Tensorflow version (GPU?): `2.5.0` - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger , @patil-suraj ## Information Trying to run_mlm.py - The problem arises when using: - Just importing packages. ## To reproduce Steps to reproduce the behavior: 1. Run the following on Google colab 2. ``` !pip install git+https://github.com/huggingface/transformers.git from transformers import ( CONFIG_MAPPING, MODEL_FOR_MASKED_LM_MAPPING, AutoConfig, AutoModelForMaskedLM, AutoTokenizer, DataCollatorForLanguageModeling, HfArgumentParser, Trainer, TrainingArguments, set_seed, ) ``` Error message - ``` --------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-1-a06d7b6932ed> in <module>() 2 3 ----> 4 from transformers import ( 5 CONFIG_MAPPING, 6 MODEL_FOR_MASKED_LM_MAPPING, 5 frames /usr/local/lib/python3.7/dist-packages/transformers/models/mt5/__init__.py in <module>() 94 globals()["__file__"], 95 _import_structure, ---> 96 extra_objects={"MT5Tokenizer": MT5Tokenizer, "MT5TokenizerFast": MT5TokenizerFast}, 97 ) NameError: name 'MT5Tokenizer' is not defined ``` ## Expected behavior No error when importing. Thank you for your help!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12588/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12588/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12587
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12587/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12587/comments
https://api.github.com/repos/huggingface/transformers/issues/12587/events
https://github.com/huggingface/transformers/issues/12587
939,899,494
MDU6SXNzdWU5Mzk4OTk0OTQ=
12,587
OOM during saving step
{ "login": "thies1006", "id": 32954413, "node_id": "MDQ6VXNlcjMyOTU0NDEz", "avatar_url": "https://avatars.githubusercontent.com/u/32954413?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thies1006", "html_url": "https://github.com/thies1006", "followers_url": "https://api.github.com/users/thies1006/followers", "following_url": "https://api.github.com/users/thies1006/following{/other_user}", "gists_url": "https://api.github.com/users/thies1006/gists{/gist_id}", "starred_url": "https://api.github.com/users/thies1006/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thies1006/subscriptions", "organizations_url": "https://api.github.com/users/thies1006/orgs", "repos_url": "https://api.github.com/users/thies1006/repos", "events_url": "https://api.github.com/users/thies1006/events{/privacy}", "received_events_url": "https://api.github.com/users/thies1006/received_events", "type": "User", "site_admin": false }
[ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
closed
false
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false } ]
[ "cc @sgugger ", "I think this is more on the DeepSpeed side so cc-ing @stas00 to confirm.", "Thank you for the full log.\r\n\r\nYes, it's on the deepspeed side. \r\n\r\nAs you can see in https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out if you use:\r\n\r\n```\r\n{\r\n \"zero_optimization\": {\r\n \"stage3_gather_fp16_weights_on_model_save\": true\r\n }\r\n}\r\n```\r\n\r\nthen it reconsolidates the whole fp16 model on cpu, while gathering one layer at a time on GPU (and then moving to cpu).\r\n\r\nYou can see the code here: https://github.com/microsoft/DeepSpeed/blob/5652072e5451077da4179e5398b1c0c71c752c34/deepspeed/runtime/engine.py#L1991\r\n\r\nSo to first unblock you disable the above setting in `ds_config.json` by setting it to `false` and then use `zero_to_fp32.py` as explained here https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out if you need to extract the weights - as a bonus you get fp32 weights then.\r\n\r\nMeanwhile let me have a look and see if I can make that code more memory tight - in theory if the training had enough gpu memory it should too - e.g. can iterate over each param, rather full layers. I will experiment and get back to you.", "OK the proper fix is here https://github.com/microsoft/DeepSpeed/pull/1220 if you want to try that branch, but should be merged into deepspeed master shortly and hopefully a new release will be made soon.\r\n", "sorry, looks like more work is needed there. will keep you posted.", "This version should do the right thing as all the tests now pass: https://github.com/microsoft/DeepSpeed/pull/1223\r\n\r\nUnfortunately missed the new deepspeed release, so will enter the next one.\r\n\r\nDo let me know if you encounter any issues with this PR branch.\r\n\r\nThank you.", "Thank you @stas00 ! Here is what I did:\r\n- I tried with your PR, no OOM anymore during save step. So the original problem is solved. \r\n- However when trying to resume from that checkpoint via `--resume_from_checkpoint /tmp/tst-summarization/checkpoint-10` I ran out of cpu ram (512GB in my case). \r\n\r\nJust some further comments:\r\n- Setting the option `\"stage3_gather_fp16_weights_on_model_save\": false` works as well (HF model is simply not saved).\r\n- Exporting the Deepspeed checkpoint offline using the script as you said works and I can also resume training using this exported model via `--model_name_or_path`. ", "Closing as original problem was solved." ]
1,625
1,626
1,626
NONE
null
I'm trying to train the Blenderbot-9B model using the Deepspeed integration on 8 GPUs, each of which has 16GB of VRAM (one node). Script: `deepspeed --hostfile myhostfile \ ${_PATH}/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path hyunwoongko/blenderbot-9B \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /tmp/tst-summarization \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --deepspeed ${_PATH}/tests/deepspeed/ds_config_zero3.json \ --logging_steps 1 \ --fp16 \ --overwrite_output_dir \ --save_steps 10 \ --gradient_accumulation_steps 1 \ --evaluation_strategy="steps" \ --max_train_samples 10024 \ --max_eval_samples 32 \ --max_source_length 128 --max_target_length 128 \ --eval_steps 5 ` Training and evaluation seem to run fine; I see about 10GB of VRAM occupied on each GPU, so there is even free space left on the GPUs. However, afterwards, during the saving step, I get an OOM, which I don't understand. Log: [log.txt](https://github.com/huggingface/transformers/files/6785035/log.txt) Deepspeed: 0.4.3+c9fee82 torch 1.8, cuda 11.1 Transformers: '4.9.0.dev0'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12587/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12586
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12586/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12586/comments
https://api.github.com/repos/huggingface/transformers/issues/12586/events
https://github.com/huggingface/transformers/pull/12586
939,879,449
MDExOlB1bGxSZXF1ZXN0Njg2MDMyNTM0
12,586
Fix caching issue #12536
{ "login": "ManuelFay", "id": 43467008, "node_id": "MDQ6VXNlcjQzNDY3MDA4", "avatar_url": "https://avatars.githubusercontent.com/u/43467008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ManuelFay", "html_url": "https://github.com/ManuelFay", "followers_url": "https://api.github.com/users/ManuelFay/followers", "following_url": "https://api.github.com/users/ManuelFay/following{/other_user}", "gists_url": "https://api.github.com/users/ManuelFay/gists{/gist_id}", "starred_url": "https://api.github.com/users/ManuelFay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ManuelFay/subscriptions", "organizations_url": "https://api.github.com/users/ManuelFay/orgs", "repos_url": "https://api.github.com/users/ManuelFay/repos", "events_url": "https://api.github.com/users/ManuelFay/events{/privacy}", "received_events_url": "https://api.github.com/users/ManuelFay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Note: Tests will have to be changed if you want to go this way, I would imagine it's a bit too general of a fix to be honest.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? This PR is a proposed fix to issue #12536. It does so by simply logging the missing file instead of raising an error that interrupts program execution, in the special case of non-existent optional vocab files, which are handled in the case `local_files_only=True` (`FileNotFoundError`) and the case where `local_files_only=False` and the user is online, but not the case where `local_files_only=False` and the user is offline. This needs to be reviewed to ensure this is the direction to go to fix this issue, and that this will not be a problem in other cases. Fixes #12536 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12586/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12586", "html_url": "https://github.com/huggingface/transformers/pull/12586", "diff_url": "https://github.com/huggingface/transformers/pull/12586.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12586.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12585
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12585/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12585/comments
https://api.github.com/repos/huggingface/transformers/issues/12585/events
https://github.com/huggingface/transformers/issues/12585
939,847,851
MDU6SXNzdWU5Mzk4NDc4NTE=
12,585
Error when running wav2vec2 embeddings
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you copy-paste a reproducible code snippet here (create dummy data if necessary) ? :-) ", "Made a colab that reproduced the error.\r\nhttps://colab.research.google.com/drive/1JpZ33M3tCKJBK6XZeDhj4u30roWKQ63s?usp=sharing", "I can't run the colab as `processor` is commented out", "The main problem is the following: \r\n\r\nWe should not use a tokenizer to process wav files -> we should use the processor for that. So `AutoTokenizer` should be replaced by `Wav2Vec2Processor`. If you settle on using `HubertForCTC`, it's a good idea to first look into the examples of the docs to check how the model should be used. E.g. here we should an example for `HubertForCTC`: https://huggingface.co/transformers/master/model_doc/hubert.html#hubertforctc\r\n\r\n=> so from this example you can see that you should load the wav file yourself and then use the `Wav2Vec2Processor` to process the input. This will return `input_values` that you can pass to the model.\r\n\r\nAlso, just a note on how to write issue for the future ;-):\r\n\r\nIt's always good to aim for a *minimal* reproducible code example. E.g. for this error it should be relatively simple to figure out that the error is produced by the following lines: \r\n\r\n```python\r\ninput_values = processor(wav_file, return_tensors=\"pt\", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore\r\nencoded_states = model(\r\n input_values=input_values[\"input_ids\"], \r\n # attention_mask=input_values[\"attention_mask\"], \r\n output_hidden_states=True\r\n)\r\n```\r\n\r\nin your code. So to make debugging easier it would be good to create a dumy `wav_array` (this can be just a random 1-D np.float32 array) and then post 3,4 lines here that show that there is a bug. E.g.:\r\n\r\n```python\r\nimport numpy as np\r\nfrom transformers import AutoTokenizer, HubertForCTC\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/hubert-large-ls960-ft\")\r\nmodel = HubertForCTC.from_pretrained(\"facebook/hubert-large-ls960-ft\")\r\n\r\nwav_file = np.random.random((1, 1024))\r\n\r\ninput_values = processor(wav_file, return_tensors=\"pt\", padding=True)\r\nencoded_states = model(input_values=input_values[\"input_ids\"])\r\n```\r\n\r\n=> It takes much less time to run these 5 lines then going through the colab (which sadly doesn't even run correctly).\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,628
1,628
NONE
null
While trying to extract wav2vec2 embeddings I get the following errors. e "feature_extractor.py", line 80, in <module> feature_extractor("/home/bmoell/data/media.talkbank.org/dementia/English/Pitt/Control/cookie") File "feature_extractor.py", line 36, in feature_extractor get_wav2vecembeddings_from_audiofile(wav_file) File "feature_extractor.py", line 57, in get_wav2vecembeddings_from_audiofile input_values = processor(resampled, return_tensors="pt", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore File "/home/bmoell/hubert-dementia-screening/dementia/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2284, in __call__ raise ValueError( ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). I'm using the following script to extract wav2vec2 embeddings from .wav files. ```python def get_wav2vecembeddings_from_audiofile(wav_file): print("the file is", wav_file) speech, sample_rate = sf.read(wav_file) if len(speech.shape) > 1: speech = stereo_to_mono(speech) # change sample rate to 16 000 hertz resampled = change_sample_rate(speech, sample_rate, new_sample_rate) print("the speech is", speech) input_values = processor(resampled, return_tensors="pt", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore print("input values", input_values) # import pdb # pdb.set_trace() with torch.no_grad(): encoded_states = model( **input_values, # attention_mask=input_values["attention_mask"], output_hidden_states=True ) last_hidden_state = encoded_states.hidden_states[-1] # The last hidden-state is the first element of the output tuple print("getting wav2vec2 embeddings") print(last_hidden_state) torch.save(last_hidden_state, wav_file + '.wav2vec2.pt') ``` Updated script that takes in the file_path to processor. Now I get a different error. ```python def get_wav2vecembeddings_from_audiofile(wav_file): print("the file is", wav_file) speech, sample_rate = sf.read(wav_file) if len(speech.shape) > 1: speech = stereo_to_mono(speech) # change sample rate to 16 000 hertz resampled = change_sample_rate(speech, sample_rate, new_sample_rate) print("the speech is", speech) input_values = processor(wav_file, return_tensors="pt", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore print("input values", input_values) # import pdb # pdb.set_trace() with torch.no_grad(): encoded_states = model( input_values=input_values["input_ids"], # attention_mask=input_values["attention_mask"], output_hidden_states=True ) last_hidden_state = encoded_states.hidden_states[-1] # The last hidden-state is the first element of the output tuple print("getting wav2vec2 embeddings") print(last_hidden_state) torch.save(last_hidden_state, wav_file + '.wav2vec2.pt') ``` File "/home/bmoell/hubert-dementia-screening/dementia/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 294, in _conv_forward return F.conv1d(input, weight, bias, self.stride, RuntimeError: expected scalar type Long but found Float
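Following the suggestion in the comments, a minimal sketch of pulling hidden states out with the matching processor/model pair — the checkpoint name and the random one-second clip are placeholders for this sketch, not the author's data:

```python
# Sketch: extract hidden states from a Hubert checkpoint with Wav2Vec2Processor.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, HubertModel

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft")

speech = np.random.randn(16000).astype(np.float32)  # dummy 1-second, 16 kHz mono clip
inputs = processor(speech, sampling_rate=16000, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(inputs["input_values"], output_hidden_states=True)

last_hidden_state = outputs.hidden_states[-1]  # shape: (batch, frames, hidden_size)
```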
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12585/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12584
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12584/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12584/comments
https://api.github.com/repos/huggingface/transformers/issues/12584/events
https://github.com/huggingface/transformers/issues/12584
939,846,147
MDU6SXNzdWU5Mzk4NDYxNDc=
12,584
[Flax]Not able to Run Hugging Face GPT2 model for jax on TPU's
{ "login": "vivekvkashyap", "id": 58116635, "node_id": "MDQ6VXNlcjU4MTE2NjM1", "avatar_url": "https://avatars.githubusercontent.com/u/58116635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vivekvkashyap", "html_url": "https://github.com/vivekvkashyap", "followers_url": "https://api.github.com/users/vivekvkashyap/followers", "following_url": "https://api.github.com/users/vivekvkashyap/following{/other_user}", "gists_url": "https://api.github.com/users/vivekvkashyap/gists{/gist_id}", "starred_url": "https://api.github.com/users/vivekvkashyap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vivekvkashyap/subscriptions", "organizations_url": "https://api.github.com/users/vivekvkashyap/orgs", "repos_url": "https://api.github.com/users/vivekvkashyap/repos", "events_url": "https://api.github.com/users/vivekvkashyap/events{/privacy}", "received_events_url": "https://api.github.com/users/vivekvkashyap/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "@patrickvonplaten \r\n", "Hey @vivekvkashyap,\r\ninstead of putting a screenshot here - could you maybe copy paste the link to your google colab instead so that we can reproduce the error?\r\n\r\nThank you!", "@patrickvonplaten i have added the colab notebook in the section To Reproduce\r\n", "@patil-suraj\r\n", "Similar to #12578 - we are working on it :-)\r\n" ]
1,625
1,626
1,626
NONE
null
Hi, I am trying to do FlaxGPT2ForMultipleChoice and to run GPT2 using Hugging Face on TPUs. It is showing a tuple index out of range error, but when I run it on CPU there is no such error. Even when I run a simple model without any code of mine, it behaves the same on TPU. ![image](https://user-images.githubusercontent.com/58116635/124922271-f5a3ff00-e016-11eb-8679-3258fc3727e9.png) The example below is for a basic model: ![image](https://user-images.githubusercontent.com/58116635/124922511-313ec900-e017-11eb-9d2c-7473fe8b21af.png) To Reproduce This is the colab notebook: https://colab.research.google.com/drive/1h8CeTM5NUpHeS1oGHONX1YbwtGHf3nyU?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12584/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12583
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12583/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12583/comments
https://api.github.com/repos/huggingface/transformers/issues/12583/events
https://github.com/huggingface/transformers/issues/12583
939,821,639
MDU6SXNzdWU5Mzk4MjE2Mzk=
12,583
AttributeError for DataCollatorForLanguageModeling with tokenizers.Tokenizer
{ "login": "lewisbails", "id": 32473550, "node_id": "MDQ6VXNlcjMyNDczNTUw", "avatar_url": "https://avatars.githubusercontent.com/u/32473550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewisbails", "html_url": "https://github.com/lewisbails", "followers_url": "https://api.github.com/users/lewisbails/followers", "following_url": "https://api.github.com/users/lewisbails/following{/other_user}", "gists_url": "https://api.github.com/users/lewisbails/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewisbails/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewisbails/subscriptions", "organizations_url": "https://api.github.com/users/lewisbails/orgs", "repos_url": "https://api.github.com/users/lewisbails/repos", "events_url": "https://api.github.com/users/lewisbails/events{/privacy}", "received_events_url": "https://api.github.com/users/lewisbails/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there! I am unsure why you thought you could use a `tokenizers.Tokenizer` object here. The [documentation](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorforlanguagemodeling) clearly states it has to be a `PreTrainedTokenizerBase`, so either a `PreTrainedTokenizer` or a `PreTrainedTokenizerFast`. You can instantiate one with\r\n```\r\nfrom transformers import PreTrainedTokenizerFast\r\n\r\ntokenizer = PreTrainedTokenizerFast(tokenzier_file=path_to_json)\r\n```", "Sorry about that, I get confused what to use where between the two projects sometimes.\r\nI've also found [this](https://huggingface.co/transformers/fast_tokenizers.html#) to help me.\r\nAlthough the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizerFast) for `PreTrainedTokenizerFast` doesn't show `tokenizer_file` as a valid parameter to `__init__`", "Oh very true, it's definitely missing! Do you want to make a PR to fix it?" ]
1,625
1,625
1,625
CONTRIBUTOR
null
## Environment info - `transformers` version: 4.8.2 - Platform: Linux-5.3.0-53-generic-x86_64-with-debian-buster-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - tokenizers: @LysandreJik - trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` tokenizer = Tokenizer.from_file("my-tokenizer.json") config = AutoConfig.from_pretrained("bert-base-cased", vocab_size=tokenizer.get_vocab_size()) model = AutoModelForMaskedLM.from_config(config) tokenizer.enable_truncation(max_length=model.config.max_position_embeddings) dataset = LMDataset(tokenizer, files=['train_1.txt', 'train_2.txt']) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, **cfg.data_collator_kwargs) ``` ``` Traceback (most recent call last): ... File "/home/leb/lang-models/scripts/train_lm.py", line 25, in train_lm data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, **cfg.data_collator_kwargs) File "<string>", line 7, in __init__ File "/home/leb/anaconda3/envs/lang-models/lib/python3.7/site-packages/transformers/data/data_collator.py", line 333, in __post_init__ if self.mlm and self.tokenizer.mask_token is None: AttributeError: 'tokenizers.Tokenizer' object has no attribute 'mask_token' ``` ## Expected behavior Expected to be able to use tokenizers.Tokenizer in the tokenizer parameter to DataCollatorForLanguageModelling.
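As pointed out in the comments, the collator needs a `PreTrainedTokenizerFast`, not a raw `tokenizers.Tokenizer`. A minimal sketch of the wrapping step — the special-token strings below are assumptions and should match whatever the trained tokenizer actually defines:

```python
# Sketch: wrap the raw tokenizer file from the issue in a PreTrainedTokenizerFast.
from transformers import DataCollatorForLanguageModeling, PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="my-tokenizer.json",
    mask_token="[MASK]",  # DataCollatorForLanguageModeling requires a mask token for MLM
    pad_token="[PAD]",
)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```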
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12583/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12582
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12582/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12582/comments
https://api.github.com/repos/huggingface/transformers/issues/12582/events
https://github.com/huggingface/transformers/pull/12582
939,806,077
MDExOlB1bGxSZXF1ZXN0Njg1OTY4OTA1
12,582
Simplify unk token
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
COLLABORATOR
null
# What does this PR do? As seen on [tokenizers#748](https://github.com/huggingface/tokenizers/issues/748) it's possible to avoid the UnigramTrainer forgetting about the unknown token if we set it properly as a kwarg when defining the trainer. This PR does that to avoid messing with the json after.
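A small illustrative sketch of the change described above (the vocab size and token strings are arbitrary choices for this sketch):

```python
# Sketch: give the UnigramTrainer the unk token up front instead of editing the saved JSON.
from tokenizers import Tokenizer, models, trainers

tokenizer = Tokenizer(models.Unigram())
trainer = trainers.UnigramTrainer(
    vocab_size=8000,
    special_tokens=["<unk>"],
    unk_token="<unk>",  # keeps the unknown token wired into the saved model
)
tokenizer.train_from_iterator(["some example text", "another line of text"], trainer=trainer)
```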
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12582/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12582", "html_url": "https://github.com/huggingface/transformers/pull/12582", "diff_url": "https://github.com/huggingface/transformers/pull/12582.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12582.patch", "merged_at": 1625835754000 }
https://api.github.com/repos/huggingface/transformers/issues/12581
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12581/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12581/comments
https://api.github.com/repos/huggingface/transformers/issues/12581/events
https://github.com/huggingface/transformers/issues/12581
939,774,886
MDU6SXNzdWU5Mzk3NzQ4ODY=
12,581
ViT doesn't use a tokenizer, yet one is shown in the example on the transformers website
{ "login": "dmatos2012", "id": 11776554, "node_id": "MDQ6VXNlcjExNzc2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/11776554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dmatos2012", "html_url": "https://github.com/dmatos2012", "followers_url": "https://api.github.com/users/dmatos2012/followers", "following_url": "https://api.github.com/users/dmatos2012/following{/other_user}", "gists_url": "https://api.github.com/users/dmatos2012/gists{/gist_id}", "starred_url": "https://api.github.com/users/dmatos2012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dmatos2012/subscriptions", "organizations_url": "https://api.github.com/users/dmatos2012/orgs", "repos_url": "https://api.github.com/users/dmatos2012/repos", "events_url": "https://api.github.com/users/dmatos2012/events{/privacy}", "received_events_url": "https://api.github.com/users/dmatos2012/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "cc @julien-c @LysandreJik @sgugger @patil-suraj - For Vision and Speech models we probably should create a `AutoProcessor` and adapt the default website widget to not use `AutoTokenizer` for Vision & Speech", "There is the `AutoFeatureExtractor` class already for vision (since there are no processors there).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I am facing same problem for 'facebook/dino-vitb16' on hugging face. \r\nI am trying to do use transformers.onnx to convert model to onnx. \r\nThough, there is no tokenizer for this model, it looks for tokenizer. \r\nAny solution for this ?", "> \r\n\r\nHi @kartikpodugu Could you open a new issue with a description and a code snippet that reproduce the issue? Thank you.", "@kartikpodugu @ydshieh \r\nI also faced same issue in `from transformers import ViTForImageClassification.`\r\nHowever, I resolved this issue by upgrade the transformers version.\r\n\r\n[Old] 4.12.0.dev0\r\n[New] 4.29.2", "@SangbumChoi Could you open a new issue with a description, your full environment information, and a code snippet that reproduce the issue? Thank you.\r\n\r\nTried `from transformers import ViTForImageClassification` and it works fine." ]
1,625
1,686
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): ViT The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Open vit [transformer](https://huggingface.co/google/vit-base-patch16-224) website 2. Click on "</> Use in Transformers" 3. Copy the text and run it <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Error trace: ``` tokenizer = AutoTokenizer.from_pretrained("google/vit-base-patch16-224") File "/home/david/transformers/src/transformers/models/auto/tokenization_auto.py", line 576, in from_pretrained raise ValueError( ValueError: Unrecognized configuration class <class 'transformers.models.vit.configuration_vit.ViTConfig'> to build an AutoTokenizer. 
Model type should be one of RetriBertConfig, RoFormerConfig, T5Config, MT5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, PegasusConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BlenderbotSmallConfig, BlenderbotConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, FunnelConfig, LxmertConfig, LayoutLMConfig, DPRConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig, FSMTConfig, BertGenerationConfig, DebertaConfig, DebertaV2Config, RagConfig, XLMProphetNetConfig, Speech2TextConfig, M2M100Config, ProphetNetConfig, MPNetConfig, TapasConfig, LEDConfig, ConvBertConfig, BigBirdConfig, IBertConfig, Wav2Vec2Config, HubertConfig, GPTNeoConfig, LukeConfig, BigBirdPegasusConfig, CanineConfig. ``` ## Expected behavior Able to load the model correctly. Also @patrickvonplaten already solved it for me saying its ViTFeatureExtractor instead, but still it should be changed in the website. Thank you! <!-- A clear and concise description of what you would expect to happen. -->
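For reference, a short sketch of what the usage snippet could look like once the widget points at the feature extractor instead of a tokenizer (the image URL is just a sample picture):

```python
# Sketch: ViT uses a feature extractor, not a tokenizer.
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_class_idx = logits.argmax(-1).item()
print(model.config.id2label[predicted_class_idx])
```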
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12581/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12580
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12580/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12580/comments
https://api.github.com/repos/huggingface/transformers/issues/12580/events
https://github.com/huggingface/transformers/issues/12580
939,706,654
MDU6SXNzdWU5Mzk3MDY2NTQ=
12,580
Unable to quantize Google's LaBSE model using convert_graph_to_onnx.py
{ "login": "swapnil3597", "id": 30098342, "node_id": "MDQ6VXNlcjMwMDk4MzQy", "avatar_url": "https://avatars.githubusercontent.com/u/30098342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/swapnil3597", "html_url": "https://github.com/swapnil3597", "followers_url": "https://api.github.com/users/swapnil3597/followers", "following_url": "https://api.github.com/users/swapnil3597/following{/other_user}", "gists_url": "https://api.github.com/users/swapnil3597/gists{/gist_id}", "starred_url": "https://api.github.com/users/swapnil3597/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/swapnil3597/subscriptions", "organizations_url": "https://api.github.com/users/swapnil3597/orgs", "repos_url": "https://api.github.com/users/swapnil3597/repos", "events_url": "https://api.github.com/users/swapnil3597/events{/privacy}", "received_events_url": "https://api.github.com/users/swapnil3597/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, this looks like a memory error!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
## Environment Info: Experiment was performed on Google Colab, RAM: 12.69GB Also experiment on machine with ~20GB RAM available. ### Who can help @LysandreJik @sgugger @SilvanK4t1qbit ## Information I'm unable to quantize Google's [`setu4993/LaBSE`](https://huggingface.co/setu4993/LaBSE) model using script [`convert_graph_to_onnx.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py) (Approximate model size LaBSE is **~1.8GB**). The command I used to convert the graph is: ```bash python convert_graph_to_onnx.py --framework pt --model setu4993/LaBSE --quantize saved_models_temp/labse_bert_onnx/labse_bert.onnx --pipeline sentiment-analysis --opset 11 ``` The ONNX `convert` and `optimize` steps are executed, and after that process is killed while running `quantize`. The process is **Killed** without any error. **Terminal Output:** ```bash 2021-07-08 10:11:48.524534: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 ====== Converting model to ONNX ====== ONNX opset version set to: 11 Loading pipeline (model: setu4993/LaBSE, tokenizer: setu4993/LaBSE) Downloading: 100% 560/560 [00:00<00:00, 763kB/s] Downloading: 100% 1.88G/1.88G [00:39<00:00, 47.4MB/s] tcmalloc: large alloc 1539547136 bytes == 0x5654616fc000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb8469f526e 0x7fb8469f69e2 0x7fb88a260b49 0x7fb88a261897 0x7fb88a63dd89 0x7fb88ada2b9a 0x7fb88ad85cbe 0x7fb88a98aa05 0x7fb89d2cc451 0x565457706338 0x56545783a1ba 0x5654578337ad 0x5654577c6c9f 0x565457807d79 0x565457804cc4 0x5654577c5559 0x5654578394f8 0x5654578337ad 0x5654577c6a81 0x565457807d79 0x565457804cc4 0x5654577c5462 0x565457838715 0x5654578337ad 0x5654577c6a81 0x565457807d79 0x565457804cc4 0x5654577c5462 0x565457838715 tcmalloc: large alloc 1539547136 bytes == 0x5654d1b10000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb8469f526e 0x7fb8469f69e2 0x7fb88b1129e9 0x7fb89d47d349 0x565457804c65 0x5654577c5462 0x565457838715 0x5654578337ad 0x5654577c6003 0x5654577c5b09 0x56545790d28d 0x56545787c1db 0x5654577c4bb1 0x5654578b5fed 0x565457838988 0x5654578337ad 0x565457705e2c 0x565457835bb5 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c6c9f 0x5654577c6ea1 0x565457835bb5 0x5654578334ae 0x5654577c6c9f 0x5654577c6ea1 0x565457835bb5 Some weights of BertForSequenceClassification were not initialized from the model checkpoint at setu4993/LaBSE and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Downloading: 100% 239/239 [00:00<00:00, 374kB/s] Downloading: 100% 5.22M/5.22M [00:00<00:00, 53.7MB/s] Downloading: 100% 9.62M/9.62M [00:00<00:00, 45.6MB/s] Downloading: 100% 112/112 [00:00<00:00, 169kB/s] Creating folder path_to_model/saved_models_temp/labse_bert_onnx Using framework PyTorch: 1.9.0+cu102 Found input input_ids with shape: {0: 'batch', 1: 'sequence'} Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'} Found input attention_mask with shape: {0: 'batch', 1: 'sequence'} Found output output_0 with shape: {0: 'batch'} Ensuring inputs are in correct order position_ids is not present in the generated input list. 
Generated inputs order: ['input_ids', 'attention_mask', 'token_type_ids'] tcmalloc: large alloc 1539547136 bytes == 0x5654d1b10000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb8469f526e 0x7fb8469f69e2 0x7fb88b111a73 0x7fb88a647b7b 0x7fb88ada2bef 0x7fb88ad87480 0x7fb88a990454 0x7fb88a648890 0x7fb88ae9c26f 0x7fb88ac5af3e 0x7fb88c530f77 0x7fb88c5313f2 0x7fb88ac5af3e 0x7fb88c7e7fde 0x7fb88c7e8102 0x7fb88b0dd8a6 0x7fb89d163742 0x5654577c4d54 0x5654577c4a50 0x565457839105 0x5654578b6e36 0x5654578abe76 0x565457800484 0x565457804c65 0x5654577c5462 0x565457838715 0x5654578337ad 0x565457705eb1 0x7fb89d6c1275 /usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py:1974: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! input_tensor.shape[chunk_dim] == tensor_shape for input_tensor in input_tensors tcmalloc: large alloc 1539547136 bytes == 0x5654d1b10000 @ 0x7fb8b24aa887 0x7fb8b0da0c29 0x7fb8b0da0d47 0x7fb8b0da27a5 0x7fb88ce699c6 0x7fb88ce6bbf6 0x7fb88ce6d20a 0x7fb88ce6de23 0x7fb89d6564f9 0x7fb89d659004 0x7fb89d5cfc00 0x7fb89d054b88 0x5654577c4cc0 0x5654577c4a50 0x565457838be0 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x5654578387f0 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654577c630a tcmalloc: large alloc 1883791360 bytes == 0x56554dbbe000 @ 0x7fb8b24aa887 0x7fb8b0da0c29 0x7fb8b0da1afb 0x7fb8b0da1bb4 0x7fb8b0da1f9c 0x7fb8987e7bb7 0x7fb8987e8064 0x7fb88ce66a1c 0x7fb89d699aff 0x7fb89d054b88 0x5654577c58a8 0x565457838fd5 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x5654578387f0 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654577c630a 0x5654578343b5 0x5654578334ae 0x5654577c63ea 0x5654578343b5 0x5654578334ae 0x5654578331b3 0x5654578fd182 0x5654578fd4fd 0x5654578fd3a6 ====== Optimizing ONNX model ====== tcmalloc: large alloc 2147483648 bytes == 0x5655a8fba000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb7eb84b34c 0x7fb7eb8482f4 0x7fb7eb8027d1 0x7fb7eb8077b2 0x7fb7eb80a4d6 0x7fb7eb6783a0 0x7fb7ebb3e747 0x7fb7ebb84b53 0x7fb7ebbacb38 0x5654577c58a8 0x565457838fd5 0x5654578334ae 0x5654577c63ea 0x56545783460e 0x5654578334ae 0x5654577c6a81 0x565457807d79 0x565457804cc4 0x5654577c5462 0x565457838715 0x5654577c630a 0x5654578343b5 0x5654578334ae 0x5654578331b3 0x5654578fd182 0x5654578fd4fd 0x5654578fd3a6 0x5654578d4723 0x5654578d43cc 2021-07-08 10:14:05.009614471 [W:onnxruntime:, inference_session.cc:1303 Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED and the NchwcTransformer enabled. The generated model may contain hardware specific optimizations, and should only be used in the same environment the model was optimized in. Optimized model has been written at path_to_model/saved_models_temp/labse_bert_onnx/labse_bert-optimized.onnx: ✔ /!\ Optimized model contains hardware specific operators which might not be portable. /!\ As of onnxruntime 1.4.0, models larger than 2GB will fail to quantize due to protobuf constraint. This limitation will be removed in the next release of onnxruntime. WARNING:root:onnxruntime.quantization.quantize is deprecated. Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization. Warning: Unsupported operator LayerNormalization. 
No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. 
Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedMatMul. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator Gelu. No schema registered for this operator. Warning: Unsupported operator LayerNormalization. No schema registered for this operator. Warning: Unsupported operator FusedGemm. No schema registered for this operator. tcmalloc: large alloc 3079086080 bytes == 0x5656f6cf0000 @ 0x7fb8b24aa001 0x5654577f7b30 0x5654577ce655 0x7fb8ae61c5a9 0x5654577c4c47 0x5654578b5fed 0x565457838988 0x5654578334ae 0x5654577c63ea 0x56545783460e 0x5654578334ae 0x5654577c63ea 0x56545783460e 0x5654578337ad 0x5654577c63ea 0x56545783460e 0x5654577c630a 0x56545783460e 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654577c630a 0x5654578343b5 0x5654578334ae 0x5654578331b3 0x5654578fd182 0x5654578fd4fd 0x5654578fd3a6 0x5654578d4723 0x5654578d43cc 0x7fb8b1292bf7 ^C ``` The process is killed after this without any external interruption. Could this be a memory issue. I also tried this same experiment on machine with over 20GB RAM available, but the results were similar. ## To reproduce **Python package Requirements:** ```text torch transformers onnx onnxruntime onnxruntime-tools ``` **Run command:** ``` python convert_graph_to_onnx.py --framework pt --model setu4993/LaBSE --quantize path_to_model/labse_bert_onnx/labse_bert.onnx --pipeline sentiment-analysis --opset 11 ``` File: [`convert_graph_to_onnx.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12580/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12580/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12579
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12579/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12579/comments
https://api.github.com/repos/huggingface/transformers/issues/12579/events
https://github.com/huggingface/transformers/issues/12579
939,558,818
MDU6SXNzdWU5Mzk1NTg4MTg=
12,579
ImportError: cannot import name 'LineByLineTextDataset' from 'transformers' (unknown location)
{ "login": "ccyccxcl", "id": 24389529, "node_id": "MDQ6VXNlcjI0Mzg5NTI5", "avatar_url": "https://avatars.githubusercontent.com/u/24389529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccyccxcl", "html_url": "https://github.com/ccyccxcl", "followers_url": "https://api.github.com/users/ccyccxcl/followers", "following_url": "https://api.github.com/users/ccyccxcl/following{/other_user}", "gists_url": "https://api.github.com/users/ccyccxcl/gists{/gist_id}", "starred_url": "https://api.github.com/users/ccyccxcl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ccyccxcl/subscriptions", "organizations_url": "https://api.github.com/users/ccyccxcl/orgs", "repos_url": "https://api.github.com/users/ccyccxcl/repos", "events_url": "https://api.github.com/users/ccyccxcl/events{/privacy}", "received_events_url": "https://api.github.com/users/ccyccxcl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes this notebook is deprecated, you should look at the new version [here](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling_from_scratch.ipynb) (or on [colab](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling_from_scratch.ipynb)).\r\n\r\nMore generally the up-to-date list of notebooks is in the [documentation](https://huggingface.co/transformers/master/notebooks.html).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
When I tried to run the following code as in **https://huggingface.co/blog/how-to-train**, it raised an error: from transformers import LineByLineTextDataset dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="./oscar.eo.txt", block_size=128, ) ImportError: cannot import name 'LineByLineTextDataset' from 'transformers' (unknown location) @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12579/timeline
completed
null
null
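For context on the fix suggested in the comments of the issue above (the deprecated `LineByLineTextDataset` workflow being replaced by the `datasets` library in the updated notebook), a minimal sketch of the newer approach might look like the following; the file path and 128-token block size are taken from the original snippet, while the tokenizer name is a placeholder assumption.

```python
# Sketch of the datasets-based replacement for the deprecated LineByLineTextDataset.
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the plain-text file line by line as a Hugging Face dataset.
raw_dataset = load_dataset("text", data_files={"train": "./oscar.eo.txt"})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # placeholder tokenizer

def tokenize_function(examples):
    # Truncate each line to the 128-token block size used in the original snippet.
    return tokenizer(examples["text"], truncation=True, max_length=128)

tokenized_dataset = raw_dataset.map(
    tokenize_function, batched=True, remove_columns=["text"]
)
```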
https://api.github.com/repos/huggingface/transformers/issues/12578
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12578/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12578/comments
https://api.github.com/repos/huggingface/transformers/issues/12578/events
https://github.com/huggingface/transformers/issues/12578
939,534,652
MDU6SXNzdWU5Mzk1MzQ2NTI=
12,578
tuple index out of range for FlaxMBartForConditionalGeneration
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false } ]
[ "Hi @bhavitvyamalik \r\n\r\nI tried this on TPU VM and it works. Where did you try it colab TPU or TPU VM ? ", "I tried this on Colab. Wanted to test the pipeline here before shifting it TPU VM. Is there any way to run it on google colab?", "Not sure, I will try to see what's the issue with colab. But it should work just fine on TPU VM.", "1. You were right! Works fine with TPU VM except for this (nothing to worry about I think):\r\n`tcmalloc: large alloc 2444541952 bytes == 0x8f822000 @ 0x7f5700b36680 0x7f5700b57824 0x5f7b11 0x648631 0x5c38e6 0x4f30e6 0x64ee88 0x505653 0x56acb6 0x568d9a 0x50b868 0x56fb87 0x568d9a 0x68cdc7 0x67e161 0x67e1df 0x4a447c 0x4a4619 0x67e829 0x4eee7b 0x6b71ed 0x7f570094d0b3 0x5f96de`\r\n\r\n2. I was trying to run a forward pass for generating outputs using this model. Using only `eval_step` part should suffice here right? (I referred to [this](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/causal_language_modeling_flax.ipynb#scrollTo=Sj1mJNJa6PPS) notebook here for steps) \r\n```\r\nlinear_decay_lr_schedule_fn = optax.linear_schedule(init_value=3e-4, end_value=0, transition_steps=1000)\r\nadamw = optax.adamw(learning_rate=linear_decay_lr_schedule_fn, b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)\r\n\r\nstate = train_state.TrainState.create(apply_fn=model.__call__, params=model.params, tx=adamw)\r\n\r\ndef eval_step(params, batch):\r\n generated_tokens = model.generate(**batch, params=params, train=False, forced_bos_token_id=tokenizer.lang_code_to_id[\"fr_XX\"])[\"sequences\"]\r\n return generated_tokens\r\n\r\nparallel_eval_step = jax.pmap(eval_step, \"batch\")\r\n\r\nfor model_input in model_inputs: # model_inputs contain tokenized values of input sentences\r\n output_logits = parallel_eval_step(state.params, model_input) # Model forward\r\n```", "This [colab](https://colab.research.google.com/drive/1qn7d9FkEOEIQcaLr2WFhopH64JGKyXe6?usp=sharing) should help with how to use generate on TPU", "@patil-suraj -> let's check if we can solve those issue when changing to `jnp.ndarray` type-check", "fixed in #12638", "I also face this error during quantization, I am using fastt5 library to quantize the weights of this model **\"pszemraj/grammar-synthesis-base\"** , but in transformers library (one file of this library at this path(**/usr/local/lib/python3.7/dist-packages/transformers/utils/generic.py])** show the error in colab notebook.\r\n\r\nError is **IndexError: tuple index out of range** \r\n\r\ncode is here:\r\n`!pip install fastt5\r\n\r\nfrom fastT5 import (OnnxT5, get_onnx_runtime_sessions,\r\n generate_onnx_representation, quantize)\r\nfrom transformers import AutoTokenizer\r\n\r\nmodel_or_model_path = 'pszemraj/grammar-synthesis-base'\r\n# Step 1. convert huggingfaces t5 model to onnx\r\nonnx_model_paths = generate_onnx_representation(model_or_model_path)\r\n\r\n\r\n# Step 2. (recommended) quantize the converted model for fast inference and to reduce model size.\r\nquant_model_paths = quantize(onnx_model_paths)\r\n\r\n# step 3. setup onnx runtime\r\nmodel_sessions = get_onnx_runtime_sessions(quant_model_paths)\r\n\r\n# step 4. get the onnx model\r\nmodel = OnnxT5(model_or_model_path, model_sessions)`\r\n\r\nError occurred in this function(generate_onnx_representation)..\r\nSo how can we debug the error (thanks)..\r\n\r\none more error may you face during quantize is that **encoder is not defined ** at this function **(generate_onnx_representation)**" ]
1,625
1,659
1,626
CONTRIBUTOR
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0.dev0 (installed from source) - Platform: Google colab - Python version: 3.7.10 - Using TPU in script?: Yes - Dependecies were installed following this colab notebook: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/causal_language_modeling_flax.ipynb#scrollTo=Sj1mJNJa6PPS ### Who can help @patil-suraj @patrickvonplaten ## Information Model I am using: FlaxMBartForConditionalGeneration The problem arises when loading the model itself ## To reproduce Steps to reproduce the behavior: ``` from transformers import FlaxMBartForConditionalGeneration model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt") ``` ``` --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-5-f8556949d896> in <module>() 1 from transformers import FlaxMBartForConditionalGeneration, MBart50TokenizerFast 2 ----> 3 model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", from_pt=True) 4 tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX") 15 frames /usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_utils.py in from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs) 336 337 # init random models --> 338 model = cls(config, *model_args, **model_kwargs) 339 340 if from_pt: /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __init__(self, config, input_shape, seed, dtype, **kwargs) 948 ): 949 module = self.module_class(config=config, dtype=dtype, **kwargs) --> 950 super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) 951 952 def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: /usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_utils.py in __init__(self, config, module, input_shape, seed, dtype) 103 104 # randomly initialized parameters --> 105 random_params = self.init_weights(self.key, input_shape) 106 107 # save required_params as set /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in init_weights(self, rng, input_shape) 973 decoder_attention_mask, 974 position_ids, --> 975 decoder_position_ids, 976 )["params"] 977 /usr/local/lib/python3.7/dist-packages/flax/linen/module.py in init(self, rngs, method, mutable, *args, **kwargs) 998 _, v_out = self.init_with_output( 999 rngs, *args, -> 1000 method=method, mutable=mutable, **kwargs) 1001 return v_out 1002 /usr/local/lib/python3.7/dist-packages/flax/linen/module.py in init_with_output(self, rngs, method, mutable, *args, **kwargs) 967 rngs = {'params': rngs} 968 return self.apply( --> 969 {}, *args, rngs=rngs, method=method, mutable=mutable, **kwargs) 970 971 def init(self, /usr/local/lib/python3.7/dist-packages/flax/linen/module.py in apply(self, variables, rngs, method, mutable, capture_intermediates, *args, **kwargs) 937 method, self, 938 mutable=mutable, capture_intermediates=capture_intermediates --> 939 )(variables, *args, **kwargs, rngs=rngs) 940 941 def init_with_output(self, /usr/local/lib/python3.7/dist-packages/flax/core/scope.py in wrapper(variables, rngs, *args, **kwargs) 685 **kwargs) -> Union[Any, Tuple[Any, 
VariableDict]]: 686 with bind(variables, rngs=rngs, mutable=mutable).temporary() as root: --> 687 y = fn(root, *args, **kwargs) 688 if mutable is not False: 689 return y, root.mutable_variables() /usr/local/lib/python3.7/dist-packages/flax/linen/module.py in scope_fn(scope, *args, **kwargs) 1176 _context.capture_stack.append(capture_intermediates) 1177 try: -> 1178 return fn(module.clone(parent=scope), *args, **kwargs) 1179 finally: 1180 _context.capture_stack.pop() /usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs) 273 _context.module_stack.append(self) 274 try: --> 275 y = fun(self, *args, **kwargs) 276 if _context.capture_stack: 277 filter_fn = _context.capture_stack[-1] /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __call__(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, position_ids, decoder_position_ids, output_attentions, output_hidden_states, return_dict, deterministic) 1310 output_hidden_states=output_hidden_states, 1311 return_dict=return_dict, -> 1312 deterministic=deterministic, 1313 ) 1314 /usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs) 273 _context.module_stack.append(self) 274 try: --> 275 y = fun(self, *args, **kwargs) 276 if _context.capture_stack: 277 filter_fn = _context.capture_stack[-1] /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __call__(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, position_ids, decoder_position_ids, output_attentions, output_hidden_states, return_dict, deterministic) 905 output_hidden_states=output_hidden_states, 906 return_dict=return_dict, --> 907 deterministic=deterministic, 908 ) 909 /usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs) 273 _context.module_stack.append(self) 274 try: --> 275 y = fun(self, *args, **kwargs) 276 if _context.capture_stack: 277 filter_fn = _context.capture_stack[-1] /usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __call__(self, input_ids, attention_mask, position_ids, output_attentions, output_hidden_states, return_dict, deterministic) 763 ) 764 --> 765 last_hidden_states = outputs[0] 766 last_hidden_states = self.layer_norm(last_hidden_states) 767 /usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k) 1810 return inner_dict[k] 1811 else: -> 1812 return self.to_tuple()[k] 1813 1814 def __setattr__(self, name, value): IndexError: tuple index out of range ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12578/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12578/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12577
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12577/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12577/comments
https://api.github.com/repos/huggingface/transformers/issues/12577/events
https://github.com/huggingface/transformers/pull/12577
939,521,676
MDExOlB1bGxSZXF1ZXN0Njg1NzI5MDY3
12,577
[Work In Progress] SentenceTransformer implementation based on CLIP
{ "login": "minsik-ai", "id": 20217873, "node_id": "MDQ6VXNlcjIwMjE3ODcz", "avatar_url": "https://avatars.githubusercontent.com/u/20217873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/minsik-ai", "html_url": "https://github.com/minsik-ai", "followers_url": "https://api.github.com/users/minsik-ai/followers", "following_url": "https://api.github.com/users/minsik-ai/following{/other_user}", "gists_url": "https://api.github.com/users/minsik-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/minsik-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/minsik-ai/subscriptions", "organizations_url": "https://api.github.com/users/minsik-ai/orgs", "repos_url": "https://api.github.com/users/minsik-ai/repos", "events_url": "https://api.github.com/users/minsik-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/minsik-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
# What does this PR do? Sentence Transformer Flax implementation for Flax/JAX week. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Reviewers will be added when the PR has progressed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12577/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12577", "html_url": "https://github.com/huggingface/transformers/pull/12577", "diff_url": "https://github.com/huggingface/transformers/pull/12577.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12577.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12576
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12576/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12576/comments
https://api.github.com/repos/huggingface/transformers/issues/12576/events
https://github.com/huggingface/transformers/issues/12576
939,503,874
MDU6SXNzdWU5Mzk1MDM4NzQ=
12,576
Summarization failure "All images are copyrighted" for certain text inputs
{ "login": "polplop", "id": 47445257, "node_id": "MDQ6VXNlcjQ3NDQ1MjU3", "avatar_url": "https://avatars.githubusercontent.com/u/47445257?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polplop", "html_url": "https://github.com/polplop", "followers_url": "https://api.github.com/users/polplop/followers", "following_url": "https://api.github.com/users/polplop/following{/other_user}", "gists_url": "https://api.github.com/users/polplop/gists{/gist_id}", "starred_url": "https://api.github.com/users/polplop/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polplop/subscriptions", "organizations_url": "https://api.github.com/users/polplop/orgs", "repos_url": "https://api.github.com/users/polplop/repos", "events_url": "https://api.github.com/users/polplop/events{/privacy}", "received_events_url": "https://api.github.com/users/polplop/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "It seems Pegasus-XSum is waste of time and space, let me try cnn-dailymail\r\n", "I tried cnn-dailymail. its working. I wasted my 2.5 GB of data trying XSum. Soon In University will try Bart too...\r\n![image](https://github.com/huggingface/transformers/assets/36412543/4f216fc3-620c-4eb4-8d08-c0c83e2f7dfa)\r\n\r\n" ]
1,625
1,689
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Linux-5.2.18-200.fc30.x86_64-x86_64-with-fedora-30-Thirty - Python version: 3.7.4 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: nope ### Who can help @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (PEGASUS-xsum): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce ```python # Minimal example import torch bad_txt = "delivery giant DoorDash launched its Japan operation on Wednesday, a move that is expected to further intensify the already fierce competition for a slice of the country's food delivery market. Starting Wednesday, customers in Sendai, a major city in northeastern Japan, can order from hundreds of local restaurants as well as national chains via DoorDash. Japan is one of the largest delivery markets in the world, but it's still very underpenetrated relative to the size of the population and the size of the economy, DoorDash co-founder and CEO Tony Xu told Nikkei Asia in an interview on Tuesday." model_name = 'google/pegasus-xsum' # Fails #model_name = 'google/pegasus-cnn_dailymail' # Works fine device = 'cuda' if torch.cuda.is_available() else 'cpu' from transformers import PegasusForConditionalGeneration, PegasusTokenizer, PegasusConfig tokenizer = PegasusTokenizer.from_pretrained(model_name) model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device) def summarize(src_text): batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device) translated = model.generate(**batch, early_stopping=True).to(device) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)[0] return tgt_text print(summarize(bad_txt)) # Summary output (xsum) : All images are copyrighted. 
# Summary output (cnndm): delivery giant DoorDash launched its Japan operation on Wednesday.<n>The move is expected to further intensify the already fierce competition for a slice of the country's food delivery market.<n>Japan is one of the largest delivery markets in the world, but it's still very underpenetrated relative to the size of the population. ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior A relevant summary is produced by the model. Shortening the length of the input at the start or end seems to make it work, however words removed are arbitrary. Input is already much less than token limit 512. <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12576/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12575
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12575/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12575/comments
https://api.github.com/repos/huggingface/transformers/issues/12575/events
https://github.com/huggingface/transformers/issues/12575
939,407,755
MDU6SXNzdWU5Mzk0MDc3NTU=
12,575
HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded
{ "login": "moh-yani", "id": 55953151, "node_id": "MDQ6VXNlcjU1OTUzMTUx", "avatar_url": "https://avatars.githubusercontent.com/u/55953151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moh-yani", "html_url": "https://github.com/moh-yani", "followers_url": "https://api.github.com/users/moh-yani/followers", "following_url": "https://api.github.com/users/moh-yani/following{/other_user}", "gists_url": "https://api.github.com/users/moh-yani/gists{/gist_id}", "starred_url": "https://api.github.com/users/moh-yani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moh-yani/subscriptions", "organizations_url": "https://api.github.com/users/moh-yani/orgs", "repos_url": "https://api.github.com/users/moh-yani/repos", "events_url": "https://api.github.com/users/moh-yani/events{/privacy}", "received_events_url": "https://api.github.com/users/moh-yani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like a networking or DNS error. Can you try again, or try from another machine/network?", "Dear @julien-c \r\n\r\nThank you for the assistance. We will try again to run this.\r\n\r\nKind regards,\r\n\r\nMY", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "are you using pythonanywhere.com for hosting by any chance..?\r\nbecause they have a system known as whitelisting sites. you need to go and convince them on their forum to make it whitelisted then it will work perfectly.\r\n\r\n@julien-c is right, it is a problem of DNS", "HTTPSConnectionPool(host='cdn-lfs.huggingface.co' - constant - unable to install this - all day long", "from transformers import AutoModelWithLMHead, AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-cased\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"distilbert-base-cased\")\r\n\r\ni am getting this error while I run the above code lines \r\nConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.\r\n\r\nany thoughts on how to proceed ?", "i am observing this error while trining the gpt4all \r\nConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded with url:", "> HTTPSConnectionPool(host='cdn-lfs.huggingface.co' - constant - unable to install this - all day long\r\n\r\nI also encountered the same problem, how did you solve it later?", "any updates on this one?", "@vijaykumar-1551 \r\n\r\n> from transformers import AutoModelWithLMHead, AutoTokenizer\r\n>\r\n> tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-cased\")\r\n> model = AutoModelWithLMHead.from_pretrained(\"distilbert-base-cased\")\r\n>\r\n> i am getting this error while I run the above code lines\r\n> ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.\r\n>\r\n> any thoughts on how to proceed ?\r\n\r\nI encountered the same issue. As a workaraund I pass `resume_download=True` argument to `from_pretrained` and when the error occurs just restart the script 🤷 \r\n\r\nFor example,\r\n\r\n```python\r\n model = AutoModelForCausalLM.from_pretrained(\r\n model_name, \r\n torch_dtype=torch.float16, \r\n device_map='sequential', \r\n resume_download=True,\r\n cache_dir='.cache/open-llama-13b-open-instruct'\r\n )\r\n```", "hey sorry i was abit busy, i could not resolve the error yet, will update\r\nas soon I find a solution.\r\nSorry for the delay.\r\n--\r\nThanks & Regards\r\nVijay Kumar V\r\nPython Developer\r\nP2Fsemiconductors\r\nhttps://www.p2fsemi.com/index.php\r\nContact: 6361960718\r\n\r\n\r\nOn Wed, 28 Jun 2023 at 21:27, Mikhail Kravets ***@***.***>\r\nwrote:\r\n\r\n> @vijaykumar-1551 <https://github.com/vijaykumar-1551>\r\n>\r\n> from transformers import AutoModelWithLMHead, AutoTokenizer\r\n>\r\n> tokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-cased\")\r\n> model = AutoModelWithLMHead.from_pretrained(\"distilbert-base-cased\")\r\n>\r\n> i am getting this error while I run the above code lines\r\n> ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co',\r\n> port=443): Read timed out.\r\n>\r\n> any thoughts on how to proceed ?\r\n>\r\n> I encountered the same issue. 
As a workaraund I pass resume_download=True\r\n> argument to from_pretrained and when the error occurs just restart the\r\n> script 🤷\r\n>\r\n> For example,\r\n>\r\n> model = AutoModelForCausalLM.from_pretrained(\r\n> model_name,\r\n> torch_dtype=torch.float16,\r\n> device_map='sequential',\r\n> resume_download=True,\r\n> cache_dir='.cache/open-llama-13b-open-instruct'\r\n> )\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/12575#issuecomment-1611695095>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A4GZTW2DAROMH56L2NHBEWTXNRH7LANCNFSM477ZOC7A>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n", "I have solved the issue with the combined two methods as follows. \r\n\r\n**1. set a higher timeout of 2MB ( about 2000000 bytes)** \r\n\r\nIt can improve half of the efficiency from 20% to up to 50% of the downloaded content (240MB in my case) \r\n\r\n```\r\nimport urllib3, socket\r\nfrom urllib3.connection import HTTPConnection\r\n\r\nHTTPConnection.default_socket_options = ( \r\n HTTPConnection.default_socket_options + [\r\n (socket.SOL_SOCKET, socket.SO_SNDBUF, 2000000), \r\n (socket.SOL_SOCKET, socket.SO_RCVBUF, 2000000)\r\n ])\r\n```\r\n\r\n**2. Add the parameter of resume_download=True**\r\n\r\nFor example, I use from_pretrained() function as similar as your case. It helps my jupyter notebook complete the last half of the downloaded content according to the progress bar. \r\n\r\n```\r\nfrom transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq\r\n\r\nmodel = TFAutoModelForSeq2SeqLM.from_pretrained(MODEL_CHECKPOINT, \r\n resume_download=True,)\r\n\r\n```\r\n\r\nSince I add the timeout first, and then add the download parameter. So I have no chance to try the download parameter alone. The 2nd method could work alone. \r\n\r\nCheers! \r\n\r\n ", "Thank you for the help buddy. WIll try these pointers.!!\r\n--\r\nThanks & Regards\r\nVijay Kumar V\r\nPython Developer\r\nP2Fsemiconductors\r\nhttps://www.p2fsemi.com/index.php\r\nContact: 6361960718\r\n\r\n\r\nOn Wed, 13 Sept 2023 at 05:37, mikechen66 ***@***.***> wrote:\r\n\r\n> I have solved the issue with the combined two methods to\r\n>\r\n> *1. set a higher timeout of 2MB ( about 2000000 bytes)*\r\n>\r\n> It can improve half of the efficiency from 20% to up to 50% of the\r\n> downloaded content (240MB in my case)\r\n>\r\n> import urllib3, socket\r\n> from urllib3.connection import HTTPConnection\r\n>\r\n> HTTPConnection.default_socket_options = (\r\n> HTTPConnection.default_socket_options + [\r\n> (socket.SOL_SOCKET, socket.SO_SNDBUF, 2000000),\r\n> (socket.SOL_SOCKET, socket.SO_RCVBUF, 2000000)\r\n> ])\r\n>\r\n> *2. Add the parameter of resume_download=True*\r\n>\r\n> For example, I use from_pretrained() function as similar as your case. It\r\n> helps my jupyter notebook complete the last half of the downloaded content\r\n> according to the progress bar.\r\n>\r\n> from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq\r\n>\r\n> model = TFAutoModelForSeq2SeqLM.from_pretrained(MODEL_CHECKPOINT,\r\n> resume_download=True,)\r\n>\r\n>\r\n> Since I add the timeout first, and then add the download parameter. So I\r\n> have no chance to try the download parameter alone. 
The 2nd method could\r\n> work alone.\r\n>\r\n> Cheers!\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/12575#issuecomment-1716743090>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A4GZTW4ILSKCRQUGOKDP45TX2D2L3ANCNFSM477ZOC7A>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
1,625
1,694
1,629
NONE
null
Dear, I am new in Transformers, I just tried to run the syntax below: from transformers import BertForSequenceClassification, AdamW, BertConfig model = BertForSequenceClassification.from_pretrained( "bert-base-cased", num_labels = 2, output_attentions = False, output_hidden_states = False ) model.cuda() However, I got an error message like: `HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/d6992b8cd27d7a132eafce6a8210272329a371b1c762d453588795dd3 835593e (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7ff9e60c6dd0>: Failed to establish a new connection: [Errno -2] Name or service not known')) Traceback (most recent call last): File "/root/anaconda3/envs/bert/lib/python3.7/site-packages/urllib3/connection.py", line 170, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/root/anaconda3/envs/bert/lib/python3.7/site-packages/urllib3/util/connection.py", line 73, in create_connection for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): File "/root/anaconda3/envs/bert/lib/python3.7/socket.py", line 752, in getaddrinfo for res in _socket.getaddrinfo(host, port, family, type, proto, flags): socket.gaierror: [Errno -2] Name or service not known ` Any suggestions on how to fix it? Kind regards, MY
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12575/timeline
completed
null
null
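The comments in the issue above suggest two workarounds for flaky downloads from the Hub: enlarging the socket buffers and passing `resume_download=True`. A minimal sketch combining them, reusing the model from the issue body, might look like this; the 2 MB buffer size is the value quoted in the comments, and neither workaround helps if the host name itself cannot be resolved.

```python
# Sketch of the workarounds discussed in the comments: larger socket buffers
# plus resumable downloads, so an interrupted fetch can be restarted.
import socket
from urllib3.connection import HTTPConnection
from transformers import BertForSequenceClassification

HTTPConnection.default_socket_options = HTTPConnection.default_socket_options + [
    (socket.SOL_SOCKET, socket.SO_SNDBUF, 2000000),
    (socket.SOL_SOCKET, socket.SO_RCVBUF, 2000000),
]

model = BertForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2, resume_download=True
)
```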
https://api.github.com/repos/huggingface/transformers/issues/12574
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12574/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12574/comments
https://api.github.com/repos/huggingface/transformers/issues/12574/events
https://github.com/huggingface/transformers/pull/12574
939,379,257
MDExOlB1bGxSZXF1ZXN0Njg1NjExMzgx
12,574
[model.from_pretrained] raise exception early on failed load
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
CONTRIBUTOR
null
Currently, if loading pretrained weights fails in `from_pretrained`, we first print a whole bunch of success messages and only then fail; this PR raises the exception first to avoid all the misleading messages. (GitHub produces some weird replay effects when re-using a branch that it assigns by default when editing on GitHub.) @sgugger, @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12574/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12574", "html_url": "https://github.com/huggingface/transformers/pull/12574", "diff_url": "https://github.com/huggingface/transformers/pull/12574.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12574.patch", "merged_at": 1625757471000 }
https://api.github.com/repos/huggingface/transformers/issues/12573
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12573/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12573/comments
https://api.github.com/repos/huggingface/transformers/issues/12573/events
https://github.com/huggingface/transformers/issues/12573
939,378,380
MDU6SXNzdWU5MzkzNzgzODA=
12,573
PEGASUS using ONNX
{ "login": "karimfayed", "id": 46823709, "node_id": "MDQ6VXNlcjQ2ODIzNzA5", "avatar_url": "https://avatars.githubusercontent.com/u/46823709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/karimfayed", "html_url": "https://github.com/karimfayed", "followers_url": "https://api.github.com/users/karimfayed/followers", "following_url": "https://api.github.com/users/karimfayed/following{/other_user}", "gists_url": "https://api.github.com/users/karimfayed/gists{/gist_id}", "starred_url": "https://api.github.com/users/karimfayed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/karimfayed/subscriptions", "organizations_url": "https://api.github.com/users/karimfayed/orgs", "repos_url": "https://api.github.com/users/karimfayed/repos", "events_url": "https://api.github.com/users/karimfayed/events{/privacy}", "received_events_url": "https://api.github.com/users/karimfayed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @karimfayed! We're in the process of switching our approach relative to using the ONNX converter. See the following PR https://github.com/huggingface/transformers/pull/11786.\r\n\r\nIt has support for BART, so enabling support for Pegasus should be fairly simple. Please let us know if you run into any issues.\r\n\r\nYou can see the docs here: https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html\r\n\r\nPlease make sure to git checkout the PR first!", "> Hello @karimfayed! We're in the process of switching our approach relative to using the ONNX converter. See the following PR #11786.\r\n> \r\n> It has support for BART, so enabling support for Pegasus should be fairly simple. Please let us know if you run into any issues.\r\n> \r\n> You can see the docs here: https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html\r\n> \r\n> Please make sure to git checkout the PR first!\r\n\r\nHello @LysandreJik , thank you for your help. I read both the [docs](https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html) and the issue and I used the command : \r\n`!python3 -m transformers.onnx -f pytorch --model=Karimfayed/pegasus-SAMSum --features=default --optimize --optimization-level=all onnx/Karimfayed/pegasus-SAMSum/`\r\n\r\n**but I keep getting this error**\r\n\r\n> /usr/bin/python3: No module named transformers.onnx\r\n\r\n**Even when I replace ` transformers.onnx` with `transformers.onnx.export ` I get this error:**\r\n\r\n> /usr/bin/python3: Error while finding module specification for 'transformers.onnx.export' (ModuleNotFoundError: No module named 'transformers.onnx')", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
Hello @patrickvonplaten. , I just uploaded my fine-tuned model to the hub and I wanted to use ONNX to convert the pytorch model and be able to use it in a JavaScript back-end. **I used the following command:** `!python3 -m transformers.convert_graph_to_onnx --model Karimfayed/pegasus-SAMSum --framework pt pegasus-SAMSum.onnx` **I receive the following error message:** > Error while converting the model: Unrecognized configuration class <class 'transformers.configuration_pegasus.PegasusConfig'> for this kind of AutoModel: AutoModel. Model type should be one of RetriBertConfig, T5Config, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, LongformerConfig, RobertaConfig, LayoutLMConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, FSMTConfig, XLMConfig, CTRLConfig, ElectraConfig, ReformerConfig, FunnelConfig, LxmertConfig, BertGenerationConfig, DebertaConfig, DPRConfig, XLMProphetNetConfig, ProphetNetConfig. **Is PEGASUS going to be added to the list soon or is there any way around it?**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12573/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12572
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12572/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12572/comments
https://api.github.com/repos/huggingface/transformers/issues/12572/events
https://github.com/huggingface/transformers/issues/12572
939,355,997
MDU6SXNzdWU5MzkzNTU5OTc=
12,572
push_to_hub related issues (from Google Colab)
{ "login": "ewalia", "id": 34584727, "node_id": "MDQ6VXNlcjM0NTg0NzI3", "avatar_url": "https://avatars.githubusercontent.com/u/34584727?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ewalia", "html_url": "https://github.com/ewalia", "followers_url": "https://api.github.com/users/ewalia/followers", "following_url": "https://api.github.com/users/ewalia/following{/other_user}", "gists_url": "https://api.github.com/users/ewalia/gists{/gist_id}", "starred_url": "https://api.github.com/users/ewalia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ewalia/subscriptions", "organizations_url": "https://api.github.com/users/ewalia/orgs", "repos_url": "https://api.github.com/users/ewalia/repos", "events_url": "https://api.github.com/users/ewalia/events{/privacy}", "received_events_url": "https://api.github.com/users/ewalia/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Are you inside a Colab?\r\n\r\nYou might want to try `huggingface_hub.Repository` and pass in an authentication token\r\n", "To write to the repo, you'll also need to login with `!huggingface-cli login`\r\n\r\nOnce you've done that, if you want to use only git commands without passing by the `Repository` class, you can do it as such:\r\n\r\n```\r\n!git clone https://user:$(cat /root/.huggingface/token)@huggingface.co/<NAMESPACE>/<MODEL_ID>\r\n```\r\n\r\nor, if you'd rather use environment variables:\r\n\r\n```py\r\n# Put the token in an environment variable\r\nfrom huggingface_hub import HfFolder\r\nimport os\r\nos.environ['HF_AUTH'] = HfFolder().get_token()\r\n```\r\n```\r\n# Clone the repo with authentication\r\n!git clone https://user:[email protected]/<NAMESPACE>/<MODEL_ID>\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
I am trying to push a transformer model to a repo at huggingface.co. `!git push` doesn't work after a successful `!git add .` and `!git commit`; it fails with: `fatal: could not read username for https://huggingface.co: no such device or address`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12572/timeline
completed
null
null
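As an illustration of the `huggingface_hub.Repository` route mentioned in the first comment of the issue above, a minimal sketch might look like the following; the repo id is a placeholder and a token from a prior `!huggingface-cli login` is assumed to be available.

```python
# Sketch of pushing files with the Repository helper instead of raw git commands.
from huggingface_hub import Repository

# Clone (or reuse) a local working copy of the Hub repo; authentication comes
# from the token stored by `huggingface-cli login`.
repo = Repository(local_dir="my-model", clone_from="<namespace>/<model_id>")

# ... write the model/tokenizer files into `my-model` here ...

repo.push_to_hub(commit_message="Add fine-tuned model")
```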
https://api.github.com/repos/huggingface/transformers/issues/12571
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12571/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12571/comments
https://api.github.com/repos/huggingface/transformers/issues/12571/events
https://github.com/huggingface/transformers/issues/12571
939,354,089
MDU6SXNzdWU5MzkzNTQwODk=
12,571
AutoTokenizer not loading gpt2 model on instance without internet connection even after caching model
{ "login": "bhedayat", "id": 13006899, "node_id": "MDQ6VXNlcjEzMDA2ODk5", "avatar_url": "https://avatars.githubusercontent.com/u/13006899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhedayat", "html_url": "https://github.com/bhedayat", "followers_url": "https://api.github.com/users/bhedayat/followers", "following_url": "https://api.github.com/users/bhedayat/following{/other_user}", "gists_url": "https://api.github.com/users/bhedayat/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhedayat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhedayat/subscriptions", "organizations_url": "https://api.github.com/users/bhedayat/orgs", "repos_url": "https://api.github.com/users/bhedayat/repos", "events_url": "https://api.github.com/users/bhedayat/events{/privacy}", "received_events_url": "https://api.github.com/users/bhedayat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Seemed to have fixed it by following this https://github.com/huggingface/transformers/issues/9687\r\nand using transformers 4.5.1 instead", "Same problem as #12536. @LysandreJik ", "i got the same error for load model \"bert-base-uncased\"", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Is this still a problem here? I can load the tokenizer, save it and then load it again without internet connection", "Both linked issues were never fixed so I would say so\n\nOn Wed, Aug 18, 2021, 6:44 PM Patrick von Platen ***@***.***>\nwrote:\n\n> Is this still a problem here?\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/12571#issuecomment-901266168>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AKLUCABJRZZD7AQL6HZDRITT5PPO7ANCNFSM477UY3MA>\n> .\n> Triage notifications on the go with GitHub Mobile for iOS\n> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>\n> or Android\n> <https://play.google.com/store/apps/details?id=com.github.android&utm_campaign=notification-email>\n> .\n>\n", "A simple workaround would be to just do:\r\n\r\n```python\r\nfrom transformers import GPT2Tokenizer\r\ntok = GPT2Tokenizer.from_pretrained(\"gpt2\", cache_dir=\"<some_directory>\")\r\ntok.save_pretrained(\"<some_directory>\")\r\n```\r\n\r\nand loading it from there without internet, but I guess it would indeed be more userfriendly to allow this automatically once the tokenizer has been downloaded once", "I digged a bit more into it in the linked issue #12536 (now stale) and the problem was that non existent files (such as the added tokens json in some of the tokenizers) caused a \"breaking\" exception offline but a simple warning online, or when the local files only flag was set to true. As you said, the workaround is super simple (even just setting local files only to true fixes it ) but it's just UX", "In the other issue, I proposed a simple (very naive fix) as a PR that circumvented this behavior but I suspect it might break things elsewhere (and would require changing a pipeline test) ", "Hi everybody, I am getting the same error and after digging a bit deeper, I believe that the current caching mechanism depends on the Internet connection crucially for latest versions, e.g., 4.8.x and 4.9.2. I blame the function `get_from_cache`, which IMHO shouldn't work properly unless you always have Internet. Some details are below.\r\n\r\nSimple code to reproduce the effect:\r\n```\r\nfrom transformers import AutoTokenizer, AutoModel\r\ntok = AutoTokenizer.from_pretrained('roberta-base', unk_token='<unk>')\r\n```\r\n\r\nFirst, specifying the caching directory doesn't help, because the function `get_from_cache` computes the caching path using the so-caled `etag`:\r\n```\r\nfilename = url_to_filename(url, etag)\r\n```\r\nI added a code to print the filename, the url, and the etag. 
When Internet is there, we get:\r\n```\r\n### url: https://huggingface.co/roberta-base/resolve/main/config.json etag: \"8db5e7ac5bfc9ec8b613b776009300fe3685d957\" filename: 733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b\r\n### url: https://huggingface.co/roberta-base/resolve/main/vocab.json etag: \"5606f48548d99a9829d10a96cd364b816b02cd21\" filename: d3ccdbfeb9aaa747ef20432d4976c32ee3fa69663b379deb253ccfce2bb1fdc5.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab\r\n### url: https://huggingface.co/roberta-base/resolve/main/merges.txt etag: \"226b0752cac7789c48f0cb3ec53eda48b7be36cc\" filename: cafdecc90fcab17011e12ac813dd574b4b3fea39da6dd817813efa010262ff3f.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b\r\n### url: https://huggingface.co/roberta-base/resolve/main/tokenizer.json etag: \"ad0bcbeb288f0d1373d88e0762e66357f55b8311\" filename: d53fc0fa09b8342651efd4073d75e19617b3e51287c2a535becda5808a8db287.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730\r\n### url: https://huggingface.co/roberta-base/resolve/main/config.json etag: \"8db5e7ac5bfc9ec8b613b776009300fe3685d957\" filename: 733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b\r\n```\r\nThen, I have to disconnect the Internet. Now, the files are cached and should be accessed just fine.\r\n\r\nSo, we retry to create a tokenizer again, but it failes because without etag, we generate a **very different filename**:\r\n```\r\n### url: https://huggingface.co/roberta-base/resolve/main/tokenizer_config.json etag: None filename: dfe8f1ad04cb25b61a647e3d13620f9bf0a0f51d277897b232a5735297134132\r\n```\r\nThe function ``get_from_cache`` has the parameter local_files_only. When, it's true, etag is not computed. However, it is not clear how to use this to enable offline creation of resources after they have been downloaded once.\r\n\r\nThank you!", "@searchivarius `local_files_only` _should_ indeed work. You can add it to your from_pretrained calls, e.g.\r\n\r\n```py\r\ntok = AutoTokenizer.from_pretrained('roberta-base', unk_token='<unk>', local_files_only=True)\r\n```\r\n\r\nThat's the very hands-on, manual way to do this for each of your model, config, tokenizer inits. You can also set this globally. See https://github.com/huggingface/transformers/blob/master/docs/source/installation.md#offline-mode", "Hi @BramVanroy thanks a lot, `TRANSFORMERS_OFFLINE`, indeed, resolves the issue!", "it seems very strange for me that local_files_only=True still dosen't work for me \r\neven though it works for BertConfig.from_pretrained\r\n\r\ni must follow what this https://github.com/huggingface/transformers/issues/12571#issuecomment-901280736 does" ]
1,625
1,650
1,629
NONE
null
I am trying to first download and cache the GPT2 Tokenizer to use on an instance that does not have internet connection. I am able to download the tokenizer on my ec2 instance that does have an internet connection but when i copy over the directory to my instance that does not have the connection it gives a connection error. The issue seems to be with only the tokenizer and not the model ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.1 - Platform: Linux-4.14.232-176.381.amzn2.x86_64-x86_64-with-glibc2.9 - Python version: 3.6.10 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help Models: - gpt2: @patrickvonplaten, @LysandreJik ## Information Tokenizer/Model I am using (GPT2, microsoft/DialogRPT-updown): The problem arises when using: * [X] the official example scripts: (give details below) The tasks I am working on is: * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. On my ec2 instance that has an internet connection I run ``` from transformers import GPT2Tokenizer GPT2Tokenizer.from_pretrained("gpt2", cache_dir="<some_directory>") ``` 2. On my ec2 instance which does not have an internet connection I run the same command ``` from transformers import GPT2Tokenizer GPT2Tokenizer.from_pretrained("gpt2", cache_dir="<some_directory>") ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1680, in from_pretrained user_agent=user_agent, File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/file_utils.py", line 1337, in cached_path local_files_only=local_files_only, File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/file_utils.py", line 1553, in get_from_cache "Connection error, and we cannot find the requested files in the cached path." ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. Also does not work with AutoTokenizer ## Expected behavior After doing some digging it is looking for the added_tokens_file which does not exist. The vocab_file does exist.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12571/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12570
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12570/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12570/comments
https://api.github.com/repos/huggingface/transformers/issues/12570/events
https://github.com/huggingface/transformers/issues/12570
939,345,183
MDU6SXNzdWU5MzkzNDUxODM=
12,570
Can't Select Specific GPU by TrainingArguments
{ "login": "EasonC13", "id": 43432631, "node_id": "MDQ6VXNlcjQzNDMyNjMx", "avatar_url": "https://avatars.githubusercontent.com/u/43432631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EasonC13", "html_url": "https://github.com/EasonC13", "followers_url": "https://api.github.com/users/EasonC13/followers", "following_url": "https://api.github.com/users/EasonC13/following{/other_user}", "gists_url": "https://api.github.com/users/EasonC13/gists{/gist_id}", "starred_url": "https://api.github.com/users/EasonC13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EasonC13/subscriptions", "organizations_url": "https://api.github.com/users/EasonC13/orgs", "repos_url": "https://api.github.com/users/EasonC13/repos", "events_url": "https://api.github.com/users/EasonC13/events{/privacy}", "received_events_url": "https://api.github.com/users/EasonC13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should use the env variable `CUDA_VISIBLE_DEVICES` to set the GPUs you want to use. If you have multiple GPUs available, the `Trainer` will use all of them, that is expected and not a bug.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I have a similar challenge. I have 3 GPUs in the server. I run the script like this:\r\n```\r\nCUDA_VISIBLE_DEVICES=2 python main.py.\r\n```\r\nhowever, when I print `training_args.device`, it still shows cuda:0. `model.device` shows the same thing\r\nthis does not help either:\r\n```\r\nexport CUDA_VISIBLE_DEVICES=2\r\npython main.py\r\n```\r\n\r\nI am using Seq2SeqTrainingArguments", "This is normal, PyTorch names all visible devices from 0 to the number -1. So cuda0 in PyTorch is the first device you set as available, in this case GPU 2.", "I'm having the same issue with the Trainer class.\r\n\r\nEven after setting CUDA_VISIBLE_DEVICES, it still attempts to use all GPUs on my machine. This is problematic, as I share this server with other users. (And even during times with open GPUs, there are more than 8 GPUS present and it exhausts peer mapping resources.)\r\n\r\nError can be reproduced using the LED fine-tune Colab notebook demo if downloaded to a multi-GPU machine. https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb\r\n\r\nwith the following added code inserted in the first cell:\r\n\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\r\n\r\nI've attempted setting the CUDA_VISIBLE_DEVICES environmental variable other ways (ie. from terminal, etc.) with similar results.\r\n\r\nI've also attempted using the PyTorch method of specifying GPU, with similar results:\r\nimport torch\r\nDEFAULT_DEVICE = \"cuda\"\r\ntorch.cuda.set_device(0)", "No you need to set that environment variable with the launch command, not inside your training script:\r\n```\r\nCUDA_VISIBLE_DEVICES=\"0\" python main.py\r\n```", "So is there any way to do this within a notebook?", "You need to set the variable before launching the jupyter notebook\r\n```\r\nCUDA_VISIBLE_DEVICES=\"0\" jupyter notebook\r\n```", "Ahhh, thank you. That successfully restricts the GPUs accessed in the notebook. ", "> You need to set the variable before launching the jupyter notebook\r\n> \r\n> ```\r\n> CUDA_VISIBLE_DEVICES=\"0\" jupyter notebook\r\n> ```\r\n\r\nIt's very inconvenient each time to restart jupyter lab/notebook to just change the device. Also, I may want to use several notebooks on different devices. PytorchLightening, for example, gives you freedom to select device for each run.", "In Jupyter Notebook, we can use one of these:\r\n\r\n```\r\nimport os\r\nos.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\r\n```\r\n\r\n```\r\n%env CUDA_DEVICE_ORDER=PCI_BUS_ID\r\n%env CUDA_VISIBLE_DEVICES=0\r\n```", "Referring to all above solutions, all my GPUs are running or get CUDA device errors.\r\nAs alternatives, I override TrainingArguments Class. However, it might have undiscovered issues though.\r\n* backgrounds : I have more than one GPUs. 
Using huggingface trainer, all devices are involved in training.\r\n* problems : Trainer [seems to use ddp after checking device and n_gpus](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/trainer.py#L1159) method in TrainingArugments , and `_setup_devices` in [TrainingArguments](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/training_args.py#L1105) controls overall device setting. \r\n* temporary remedies : Instead of overriding `_setup_devices` (since it relates multiple dependent functions), I manually set `device` method and `n_gpus` method. In this case, I don't need to give any `os.environ` or `CUDA_VISIBLE_DEVICES `in front of python commands for single use. However, it may require if you want to use selected two or three gpus out of 4.\r\n\r\n```\r\nclass customTrainingArguments(TrainingArguments):\r\n def __init__(self,*args, **kwargs):\r\n super(customTrainingArguments, self).__init__(*args, **kwargs)\r\n\r\n @property\r\n @torch_required\r\n def device(self) -> \"torch.device\":\r\n \"\"\"\r\n The device used by this process.\r\n Name the device the number you use.\r\n \"\"\"\r\n return torch.device(\"cuda:3\")\r\n\r\n @property\r\n @torch_required\r\n def n_gpu(self):\r\n \"\"\"\r\n The number of GPUs used by this process.\r\n Note:\r\n This will only be greater than one when you have multiple GPUs available but are not using distributed\r\n training. For distributed training, it will always be 1.\r\n \"\"\"\r\n # Make sure `self._n_gpu` is properly setup.\r\n # _ = self._setup_devices\r\n # I set to one manullay\r\n self._n_gpu = 1\r\n return self._n_gpu\r\n```", "In a related problem, CUDA_VISIBLE_DEVICES doesn't seem to work, as I set it to use only the second gpu, but it always uses the first. I tried @kimcando 's solution but it just gives another error: \"module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1\", even after sending the data to device cuda:1. ", "@hrmello have you been able to solve this problem? I'm facing the same issue", "> @hrmello have you been able to solve this problem? I'm facing the same issue\r\n\r\n@ngonhi Unfortunately not. I found out that if you don't specify the GPU, it finds and use all of them. But if you only want to run your code using a single one, you must use gpu 0. ", "It feels folks need this feature so might be worth reopening the issue. @sgugger?", "We can reopen it, but there is nothing I can do to fix it as it is part of the launching process of your script, which is implemented in PyTorch, not in Transformers :man_shrugging: \r\n\r\nWe are implementing this option in the `accelerate launcher` [here](https://github.com/huggingface/accelerate/pull/732) for folks interested.", "Thank you. This is very helpful. ", "> Referring to all above solutions, all my GPUs are running or get CUDA device errors. As alternatives, I override TrainingArguments Class. However, it might have undiscovered issues though.\r\n> \r\n> * backgrounds : I have more than one GPUs. 
Using huggingface trainer, all devices are involved in training.\r\n> * problems : Trainer [seems to use ddp after checking device and n_gpus](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/trainer.py#L1159) method in TrainingArugments , and `_setup_devices` in [TrainingArguments](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/training_args.py#L1105) controls overall device setting.\r\n> * temporary remedies : Instead of overriding `_setup_devices` (since it relates multiple dependent functions), I manually set `device` method and `n_gpus` method. In this case, I don't need to give any `os.environ` or `CUDA_VISIBLE_DEVICES `in front of python commands for single use. However, it may require if you want to use selected two or three gpus out of 4.\r\n> \r\n> ```\r\n> class customTrainingArguments(TrainingArguments):\r\n> def __init__(self,*args, **kwargs):\r\n> super(customTrainingArguments, self).__init__(*args, **kwargs)\r\n> \r\n> @property\r\n> @torch_required\r\n> def device(self) -> \"torch.device\":\r\n> \"\"\"\r\n> The device used by this process.\r\n> Name the device the number you use.\r\n> \"\"\"\r\n> return torch.device(\"cuda:3\")\r\n> \r\n> @property\r\n> @torch_required\r\n> def n_gpu(self):\r\n> \"\"\"\r\n> The number of GPUs used by this process.\r\n> Note:\r\n> This will only be greater than one when you have multiple GPUs available but are not using distributed\r\n> training. For distributed training, it will always be 1.\r\n> \"\"\"\r\n> # Make sure `self._n_gpu` is properly setup.\r\n> # _ = self._setup_devices\r\n> # I set to one manullay\r\n> self._n_gpu = 1\r\n> return self._n_gpu\r\n> ```\r\n\r\nIt works. I just have to comment out the `@torch_required` and add `import torch` at line 1, then I can freely choose whatever GPU I want. Thanks a million.", "Thank you @kimcando I finally got my code to run on a single, specified GPU with your modification! ", "> Referring to all above solutions, all my GPUs are running or get CUDA device errors. As alternatives, I override TrainingArguments Class. However, it might have undiscovered issues though.\r\n> \r\n> * backgrounds : I have more than one GPUs. Using huggingface trainer, all devices are involved in training.\r\n> * problems : Trainer [seems to use ddp after checking device and n_gpus](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/trainer.py#L1159) method in TrainingArugments , and `_setup_devices` in [TrainingArguments](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/training_args.py#L1105) controls overall device setting.\r\n> * temporary remedies : Instead of overriding `_setup_devices` (since it relates multiple dependent functions), I manually set `device` method and `n_gpus` method. In this case, I don't need to give any `os.environ` or `CUDA_VISIBLE_DEVICES `in front of python commands for single use. 
However, it may require if you want to use selected two or three gpus out of 4.\r\n> \r\n> ```\r\n> class customTrainingArguments(TrainingArguments):\r\n> def __init__(self,*args, **kwargs):\r\n> super(customTrainingArguments, self).__init__(*args, **kwargs)\r\n> \r\n> @property\r\n> @torch_required\r\n> def device(self) -> \"torch.device\":\r\n> \"\"\"\r\n> The device used by this process.\r\n> Name the device the number you use.\r\n> \"\"\"\r\n> return torch.device(\"cuda:3\")\r\n> \r\n> @property\r\n> @torch_required\r\n> def n_gpu(self):\r\n> \"\"\"\r\n> The number of GPUs used by this process.\r\n> Note:\r\n> This will only be greater than one when you have multiple GPUs available but are not using distributed\r\n> training. For distributed training, it will always be 1.\r\n> \"\"\"\r\n> # Make sure `self._n_gpu` is properly setup.\r\n> # _ = self._setup_devices\r\n> # I set to one manullay\r\n> self._n_gpu = 1\r\n> return self._n_gpu\r\n> ```\r\n\r\nThis is the **best** solution for now, I would like to provide more usage for new bees,\r\n(we should comment out `@torch_required`.\r\nWe can specify the GPU by changing the `return torch.device(\"cuda:3\")` in the `def device(self)`,\r\n\r\nAfter overloading class `customTrainingArguments`, \r\nwe only need to `training_args = customTrainingArguments(...)` **instead** of `training_args = TrainingArguments(...)`\r\nthe arguments inside are as usual \r\nIt is the **simplest** way now ", "> This is the best solution for now, I would like to provide more usage for new bees,\r\n(we should comment out @torch_required.\r\nWe can specify the GPU by changing the return torch.device(\"cuda:3\") in the def device(self),\r\n\r\n> After overloading class customTrainingArguments,\r\nwe only need to training_args = customTrainingArguments(...) instead of training_args = TrainingArguments(...)\r\nthe arguments inside are as usual\r\nIt is the simplest way now\r\n\r\nWhich version of transformers is it working with?\r\n\r\nI get error \r\n```\r\nAttributeError: 'CustomTrainingArguments' object has no attribute 'distributed_state'\r\n```\r\n", "I follow this tutorial to finetune Whisper, however, the code won't select specific GPUs and only run on gpu:0.\r\nI used CUDA_VISIBLE_DEVICES.\r\n\r\ncode used\r\n`from transformers import Seq2SeqTrainingArguments\r\nfrom transformers import Seq2SeqTrainer`\r\n\r\nbest regards", "> > This is the best solution for now, I would like to provide more usage for new bees,\r\n> > (we should comment out @torch_required.\r\n> > We can specify the GPU by changing the return torch.device(\"cuda:3\") in the def device(self),\r\n> \r\n> > After overloading class customTrainingArguments,\r\n> > we only need to training_args = customTrainingArguments(...) 
instead of training_args = TrainingArguments(...)\r\n> > the arguments inside are as usual\r\n> > It is the simplest way now\r\n> \r\n> Which version of transformers is it working with?\r\n> \r\n> I get error\r\n> \r\n> ```\r\n> AttributeError: 'CustomTrainingArguments' object has no attribute 'distributed_state'\r\n> ```\r\n\r\nDid you ever fixed this error?\r\n\r\nEDIT: Seems like using an older version worked here, Version 4.29.2 worked ", "from accelerate.state import AcceleratorState\r\nfrom accelerate.utils import DistributedType\r\n\r\nclass cached_property(property):\r\n def __get__(self, obj, objtype=None):\r\n if obj is None:\r\n return self\r\n if self.fget is None:\r\n raise AttributeError(\"unreadable attribute\")\r\n attr = \"__cached_\" + self.fget.__name__\r\n cached = getattr(obj, attr, None)\r\n if cached is None:\r\n cached = self.fget(obj)\r\n setattr(obj, attr, cached)\r\n return cached\r\n\r\nclass CustomTrainingArguments(TrainingArguments):\r\n def __init__(self,*args, **kwargs):\r\n super(CustomTrainingArguments, self).__init__(*args, **kwargs)\r\n\r\n @property\r\n def device(self) -> \"torch.device\":\r\n return torch.device(\"cuda:0\")\r\n\r\n @property\r\n def n_gpu(self):\r\n self._n_gpu = 1\r\n return self._n_gpu\r\n\r\n @property\r\n def parallel_mode(self):\r\n return \"not_parallel\"\r\n \r\n @cached_property\r\n def _setup_devices(self) -> \"torch.device\":\r\n self.distributed_state = AcceleratorState(backend=self.ddp_backend)\r\n self._n_gpu = 1\r\n device = self.distributed_state.device\r\n self.local_rank = self.distributed_state.local_process_index\r\n self.distributed_state.distributed_type = DistributedType.NO\r\n device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\r\n torch.cuda.set_device(device)\r\n return device`" ]
1,625
1,692
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.2 - Platform: Jupyter Notebook on Ubuntu - Python version: 3.7 - PyTorch version (GPU?): 1.8.0+cu111 - Using GPU in script?: No, By Jupyter Notebook - Using distributed or parallel set-up in script?:It is distributed but I don't want that ### Who can help - trainer: @sgugger find by git-blame: @philschmid ## To reproduce By TrainingArguments, I want to set up my compute device only to torch.device(type='cuda', index=1). If I not set local_rank when init TrainingArguments, it will compute on both GPU. Steps to reproduce the behavior: ``` from transformers import TrainingArguments, Trainer, EvalPrediction training_args = TrainingArguments( learning_rate=1e-4, num_train_epochs=6, per_device_train_batch_size=32, per_device_eval_batch_size=32, logging_steps=200, output_dir="./training_output", overwrite_output_dir=True, # The next line is important to ensure the dataset labels are properly passed to the model remove_unused_columns=False, local_rank= 1 ) ``` Then you will get `ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set` But after I set ``` import os os.environ["RANK"]="1" ``` I get `ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable WORLD_SIZE expected, but not set` These error not happen if I not set local_rank when init TrainingArguments even though I don't set any environment variable. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I want to set up my compute device only to torch.device(type='cuda', index=1).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12570/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12569
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12569/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12569/comments
https://api.github.com/repos/huggingface/transformers/issues/12569/events
https://github.com/huggingface/transformers/pull/12569
939,257,467
MDExOlB1bGxSZXF1ZXN0Njg1NTA2NTUz
12,569
Remove logging of GPU count etc from run_t5_mlm_flax.py
{ "login": "ibraheem-moosa", "id": 14109029, "node_id": "MDQ6VXNlcjE0MTA5MDI5", "avatar_url": "https://avatars.githubusercontent.com/u/14109029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ibraheem-moosa", "html_url": "https://github.com/ibraheem-moosa", "followers_url": "https://api.github.com/users/ibraheem-moosa/followers", "following_url": "https://api.github.com/users/ibraheem-moosa/following{/other_user}", "gists_url": "https://api.github.com/users/ibraheem-moosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/ibraheem-moosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ibraheem-moosa/subscriptions", "organizations_url": "https://api.github.com/users/ibraheem-moosa/orgs", "repos_url": "https://api.github.com/users/ibraheem-moosa/repos", "events_url": "https://api.github.com/users/ibraheem-moosa/events{/privacy}", "received_events_url": "https://api.github.com/users/ibraheem-moosa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten Hey Patrick can you please check this PR?" ]
1,625
1,628
1,625
CONTRIBUTOR
null
Successfully logging this information requires Pytorch. For the purposes of this script we are not using Pytorch. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12569/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12569/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12569", "html_url": "https://github.com/huggingface/transformers/pull/12569", "diff_url": "https://github.com/huggingface/transformers/pull/12569.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12569.patch", "merged_at": 1625695547000 }
https://api.github.com/repos/huggingface/transformers/issues/12568
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12568/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12568/comments
https://api.github.com/repos/huggingface/transformers/issues/12568/events
https://github.com/huggingface/transformers/issues/12568
939,196,536
MDU6SXNzdWU5MzkxOTY1MzY=
12,568
Pegasus from Pytorch to tensorflow
{ "login": "karimfayed", "id": 46823709, "node_id": "MDQ6VXNlcjQ2ODIzNzA5", "avatar_url": "https://avatars.githubusercontent.com/u/46823709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/karimfayed", "html_url": "https://github.com/karimfayed", "followers_url": "https://api.github.com/users/karimfayed/followers", "following_url": "https://api.github.com/users/karimfayed/following{/other_user}", "gists_url": "https://api.github.com/users/karimfayed/gists{/gist_id}", "starred_url": "https://api.github.com/users/karimfayed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/karimfayed/subscriptions", "organizations_url": "https://api.github.com/users/karimfayed/orgs", "repos_url": "https://api.github.com/users/karimfayed/repos", "events_url": "https://api.github.com/users/karimfayed/events{/privacy}", "received_events_url": "https://api.github.com/users/karimfayed/received_events", "type": "User", "site_admin": false }
[ { "id": 1897896961, "node_id": "MDU6TGFiZWwxODk3ODk2OTYx", "url": "https://api.github.com/repos/huggingface/transformers/labels/Migration", "name": "Migration", "color": "e99695", "default": false, "description": "" } ]
closed
false
null
[]
[ "You should update `<pt, tf>` to reflect the library you want to use to export the graph. Either `pt` or `tf`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
I have fine-tuned PEGASUS model for abstractive summarization using [this script](https://gist.github.com/jiahao87/50cec29725824da7ff6dd9314b53c4b3) which uses huggingface. The output model is in pytorch. On huggingface [docs](https://huggingface.co/transformers/serialization.html) the following is supposed to do the required conversion: `python convert_graph_to_onnx.py --framework <pt, tf> --model bert-base-cased bert-base-cased.onnx` I use colab and I ran the following command to transform my pegasus model: `!python convert_graph_to_onnx.py --framework <pt, tf> --model ./results/checkpoint-4000 ./results/checkpoint-4000.onnx` I keep getting the following message which is confusing as it is written in the documentation that the script convert_graph_to_onnx.py is at the root of the transformers sources: ![Screenshot (84)](https://user-images.githubusercontent.com/46823709/124824521-c2a82f80-df72-11eb-8406-128555604460.png) **Thank you in advance.**
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12568/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12567
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12567/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12567/comments
https://api.github.com/repos/huggingface/transformers/issues/12567/events
https://github.com/huggingface/transformers/pull/12567
939,123,384
MDExOlB1bGxSZXF1ZXN0Njg1MzkyODIy
12,567
Init pickle
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
COLLABORATOR
null
# What does this PR do? This PR is an alternative to #12552 and properly sets `_LazyModule` as a class used in all inits (no custom subclasses anymore) to make the `transformers` module picklable. It also cleans up nicely the inits. The only downside is that new models started before this PR but not yet merged will need a rebase for the intermediate init.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12567/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12567/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12567", "html_url": "https://github.com/huggingface/transformers/pull/12567", "diff_url": "https://github.com/huggingface/transformers/pull/12567.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12567.patch", "merged_at": 1625743246000 }
https://api.github.com/repos/huggingface/transformers/issues/12566
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12566/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12566/comments
https://api.github.com/repos/huggingface/transformers/issues/12566/events
https://github.com/huggingface/transformers/pull/12566
939,102,594
MDExOlB1bGxSZXF1ZXN0Njg1Mzc1MTA3
12,566
[examples/hybrid_clip] fix loading clip vision model
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
# What does this PR do? Fix loading config when the model is of type `clip_vision_model`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12566/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12566", "html_url": "https://github.com/huggingface/transformers/pull/12566", "diff_url": "https://github.com/huggingface/transformers/pull/12566.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12566.patch", "merged_at": 1625678428000 }
https://api.github.com/repos/huggingface/transformers/issues/12565
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12565/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12565/comments
https://api.github.com/repos/huggingface/transformers/issues/12565/events
https://github.com/huggingface/transformers/pull/12565
939,048,895
MDExOlB1bGxSZXF1ZXN0Njg1MzI5MjUw
12,565
tfhub.de -> tfhub.dev
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "strictly speaking, tensorboard.dev is not a part of tfhub.dev, is it?\r\n\r\n(sorry for piggybacking on this tiny PR 😂)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,681
1,628
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12565/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12565", "html_url": "https://github.com/huggingface/transformers/pull/12565", "diff_url": "https://github.com/huggingface/transformers/pull/12565.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12565.patch", "merged_at": 1628489477000 }
https://api.github.com/repos/huggingface/transformers/issues/12564
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12564/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12564/comments
https://api.github.com/repos/huggingface/transformers/issues/12564/events
https://github.com/huggingface/transformers/pull/12564
939,032,170
MDExOlB1bGxSZXF1ZXN0Njg1MzE0OTkz
12,564
Added fsck_etags to verify cache consistency.
{ "login": "xloem", "id": 279585, "node_id": "MDQ6VXNlcjI3OTU4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/279585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xloem", "html_url": "https://github.com/xloem", "followers_url": "https://api.github.com/users/xloem/followers", "following_url": "https://api.github.com/users/xloem/following{/other_user}", "gists_url": "https://api.github.com/users/xloem/gists{/gist_id}", "starred_url": "https://api.github.com/users/xloem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xloem/subscriptions", "organizations_url": "https://api.github.com/users/xloem/orgs", "repos_url": "https://api.github.com/users/xloem/repos", "events_url": "https://api.github.com/users/xloem/events{/privacy}", "received_events_url": "https://api.github.com/users/xloem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thank you for your contribution! How do you envision using this? Running this locally on a smallish 23GB cache takes 46 seconds on my machine, so this isn't something that can be ran every time.\r\n\r\nHow would you recommend we approach solving #12557 using your proposal?", "Hey, thanks for your reply.\r\n\r\nPersonally I was running into crashes from data corruption and needed a way to handle that situation. I thought others might get into the same scenario, so shared this code. When it moves the corrupt files away they get redownloaded.\r\n\r\nThis might work better if a standalone tool were added that users could run. I have severe cognitive and computer issues and contribute in very small bursts.\r\n\r\nRegarding thoughts like 12557:\r\n- this introduces the concept of fscking data, but doesn't solve the issue\r\n- checking etags from the network could help, certificate pinning helps. The fscking could be merged into an 'update all models' function, since the network only has the latest etags.\r\n- regarding automation speed, it might be faster to check only the files that are loaded, when they are loaded, or only after download or at user request, or have it disableable.\r\n- things could be made more normative if the cache used git repositories in some way. git already has fsck and signatures to some degree, and users can make their own git repositories easily. not sure why the design decision was made not to do that, but the hashed filenames certainly provide more ways to verify.\r\n- regarding 12557, i can come up with lots of ideas, but if there were some public messaging around the concept then more experienced cryptographers and security specialists would likely eventually weigh in. i'm just an old disabled hobbyist.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
CONTRIBUTOR
null
# What does this PR do? A prototype function is added to check cache consistency. Note that it does not verify the data is correct, just that the data hashes are consistent. It may be used as: `python -c 'from transformers import file_utils; file_utils.fsck_etags()'` No output means all the local etags match their data. Shown file by file if info logging is enabled. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> #12557 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12564/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12564", "html_url": "https://github.com/huggingface/transformers/pull/12564", "diff_url": "https://github.com/huggingface/transformers/pull/12564.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12564.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12563
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12563/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12563/comments
https://api.github.com/repos/huggingface/transformers/issues/12563/events
https://github.com/huggingface/transformers/issues/12563
938,976,171
MDU6SXNzdWU5Mzg5NzYxNzE=
12,563
Accuracy issue with ONNX-converted SQuAD-based RoBERTa model on legal domain.
{ "login": "kingafy", "id": 15839412, "node_id": "MDQ6VXNlcjE1ODM5NDEy", "avatar_url": "https://avatars.githubusercontent.com/u/15839412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kingafy", "html_url": "https://github.com/kingafy", "followers_url": "https://api.github.com/users/kingafy/followers", "following_url": "https://api.github.com/users/kingafy/following{/other_user}", "gists_url": "https://api.github.com/users/kingafy/gists{/gist_id}", "starred_url": "https://api.github.com/users/kingafy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kingafy/subscriptions", "organizations_url": "https://api.github.com/users/kingafy/orgs", "repos_url": "https://api.github.com/users/kingafy/repos", "events_url": "https://api.github.com/users/kingafy/events{/privacy}", "received_events_url": "https://api.github.com/users/kingafy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co) instead?\r\n\r\nIf you'd like us to investigate this as a bug, please provide additional information so that we may help; for example, the ID of a pretrained model on the hub that loses accuracy when being converted, the commands run, the environment, library version.\r\n\r\nThanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
Hi Team, I was using the Hugging Face utility from the latest transformers version to convert a SQuAD-based RoBERTa model to ONNX. After conversion, I observed that the accuracy dipped significantly even though I never quantized the model. Any suggestions or advice on what could be the reason? Does the ONNX conversion result in a loss of prediction quality? If so, is there any parameter that can be experimented with to reduce this loss while still improving the runtime performance?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12563/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12562
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12562/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12562/comments
https://api.github.com/repos/huggingface/transformers/issues/12562/events
https://github.com/huggingface/transformers/pull/12562
938,961,235
MDExOlB1bGxSZXF1ZXN0Njg1MjU0MzUx
12,562
Double check for attribute num_examples
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Failure looks unrelated but circleCI is not letting me re-run the tests, so merging and watching master." ]
1,625
1,625
1,625
COLLABORATOR
null
# What does this PR do? As pointed out by #12479, the `isinstance` check for `IterableDataset` and its subclasses does not look at the type but at whether the class implements certain methods, without checking all attributes. This PR adds a double check when necessary to avoid an `AttributeError`. Fixes #12479
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12562/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12562", "html_url": "https://github.com/huggingface/transformers/pull/12562", "diff_url": "https://github.com/huggingface/transformers/pull/12562.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12562.patch", "merged_at": 1625676642000 }
https://api.github.com/repos/huggingface/transformers/issues/12561
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12561/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12561/comments
https://api.github.com/repos/huggingface/transformers/issues/12561/events
https://github.com/huggingface/transformers/pull/12561
938,944,490
MDExOlB1bGxSZXF1ZXN0Njg1MjQwMTIy
12,561
Don't stop at num_epochs when using IterableDataset
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
COLLABORATOR
null
# What does this PR do? Currently, when someone uses an `IterableDataset` inside the `Trainer`, the training loop stops after 3 iterations over the iterable dataset. This PR fixes that to just rely on `max_steps` (which has to be set, or there is an error at init). Fixes #12499
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12561/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12561", "html_url": "https://github.com/huggingface/transformers/pull/12561", "diff_url": "https://github.com/huggingface/transformers/pull/12561.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12561.patch", "merged_at": 1625743486000 }
https://api.github.com/repos/huggingface/transformers/issues/12560
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12560/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12560/comments
https://api.github.com/repos/huggingface/transformers/issues/12560/events
https://github.com/huggingface/transformers/pull/12560
938,940,795
MDExOlB1bGxSZXF1ZXN0Njg1MjM2OTQ2
12,560
Adding prepare_decoder_input_ids_from_labels methods to all TF ConditionalGeneration models
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12560/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12560", "html_url": "https://github.com/huggingface/transformers/pull/12560", "diff_url": "https://github.com/huggingface/transformers/pull/12560.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12560.patch", "merged_at": 1625668247000 }
https://api.github.com/repos/huggingface/transformers/issues/12559
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12559/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12559/comments
https://api.github.com/repos/huggingface/transformers/issues/12559/events
https://github.com/huggingface/transformers/pull/12559
938,907,197
MDExOlB1bGxSZXF1ZXN0Njg1MjA4MTg4
12,559
[Flax] Allow retraining from save checkpoint
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
MEMBER
null
# What does this PR do? This PR allows all Flax scripts to start training from already pretrained checkpoints. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12559/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12559", "html_url": "https://github.com/huggingface/transformers/pull/12559", "diff_url": "https://github.com/huggingface/transformers/pull/12559.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12559.patch", "merged_at": 1625665424000 }
https://api.github.com/repos/huggingface/transformers/issues/12558
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12558/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12558/comments
https://api.github.com/repos/huggingface/transformers/issues/12558/events
https://github.com/huggingface/transformers/pull/12558
938,882,332
MDExOlB1bGxSZXF1ZXN0Njg1MTg2ODMx
12,558
Fix group_lengths for short datasets
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
COLLABORATOR
null
# What does this PR do? This PR adds a fix to the `group_lengths` function used in all language modeling examples so that it also works for short datasets (without returning a dataset of length 0); a hedged sketch of this kind of fix follows this record. The fix was discussed in the issue mentioned below. Fixes #12438
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12558/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12558", "html_url": "https://github.com/huggingface/transformers/pull/12558", "diff_url": "https://github.com/huggingface/transformers/pull/12558.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12558.patch", "merged_at": 1625743421000 }
https://api.github.com/repos/huggingface/transformers/issues/12557
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12557/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12557/comments
https://api.github.com/repos/huggingface/transformers/issues/12557/events
https://github.com/huggingface/transformers/issues/12557
938,874,349
MDU6SXNzdWU5Mzg4NzQzNDk=
12,557
Cached data not checked for integrity
{ "login": "xloem", "id": 279585, "node_id": "MDQ6VXNlcjI3OTU4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/279585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xloem", "html_url": "https://github.com/xloem", "followers_url": "https://api.github.com/users/xloem/followers", "following_url": "https://api.github.com/users/xloem/following{/other_user}", "gists_url": "https://api.github.com/users/xloem/gists{/gist_id}", "starred_url": "https://api.github.com/users/xloem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xloem/subscriptions", "organizations_url": "https://api.github.com/users/xloem/orgs", "repos_url": "https://api.github.com/users/xloem/repos", "events_url": "https://api.github.com/users/xloem/events{/privacy}", "received_events_url": "https://api.github.com/users/xloem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
CONTRIBUTOR
null
### Who can help @julien-c @sgugger ## To reproduce Steps to reproduce the behavior: 1. Mutate a cache file, or cut the internet connection while downloading 2. Load the data, e.g. via a pipeline 3. Either the incorrect data loads fine, or an unhelpful error is thrown ## Expected behavior The files are hashed in git and delivered via https with the hash included. It would be good for the cache system to verify this hash and report corruption to help the user, especially if a model fails to load (a hedged sketch of such a check follows this record). It would be great if the hash system were somewhat integrated with git so that git signatures of the hashes could be checked some day. Alternatively or additionally, certificate pinning could be used in the library to help protect the user. Users should be informed, when using the library, that it does not authenticate models when they are loaded and that it is easy for a malicious party to alter them undetected.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12557/timeline
completed
null
null
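A hedged sketch of the kind of integrity check the issue above asks for, assuming an expected digest is available for comparison; the choice of SHA-256, the file paths, and the helper names are illustrative and do not describe the hub's actual hashing scheme.

```python
# Compare a downloaded file's digest against an expected value before loading it.
import hashlib
import tempfile


def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path, expected_hex):
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"cache file {path} looks corrupted: {actual} != {expected_hex}")


# Tiny self-contained demo with a temporary file standing in for a cached model file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is pytorch_model.bin")
    path = f.name
verify(path, sha256_of(path))  # passes; a mismatching digest would raise
```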
https://api.github.com/repos/huggingface/transformers/issues/12556
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12556/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12556/comments
https://api.github.com/repos/huggingface/transformers/issues/12556/events
https://github.com/huggingface/transformers/issues/12556
938,855,928
MDU6SXNzdWU5Mzg4NTU5Mjg=
12,556
Slow gpt2 training on TPU with run_clm_flax.py
{ "login": "miwojc", "id": 32404415, "node_id": "MDQ6VXNlcjMyNDA0NDE1", "avatar_url": "https://avatars.githubusercontent.com/u/32404415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miwojc", "html_url": "https://github.com/miwojc", "followers_url": "https://api.github.com/users/miwojc/followers", "following_url": "https://api.github.com/users/miwojc/following{/other_user}", "gists_url": "https://api.github.com/users/miwojc/gists{/gist_id}", "starred_url": "https://api.github.com/users/miwojc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miwojc/subscriptions", "organizations_url": "https://api.github.com/users/miwojc/orgs", "repos_url": "https://api.github.com/users/miwojc/repos", "events_url": "https://api.github.com/users/miwojc/events{/privacy}", "received_events_url": "https://api.github.com/users/miwojc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - gpt2: @patrickvonplaten, @LysandreJik ## Information Model I am using (Bert, XLNet ...): gpt2 The problem arises when using: * [X] the official example scripts: (give details below) https://huggingface.co/flax-community/papuGaPT2/blob/main/run_clm_flax.py with only one small modification: added print(jax.device_count()) in main() to see if the TPU is being used. The task I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) OSCAR, Polish dataset, 47GB: --dataset_name="oscar" \ --dataset_config_name="unshuffled_deduplicated_pl" \ ## To reproduce Steps to reproduce the behavior: 1. login to the remote VM: ```./google-cloud-sdk/bin/gcloud alpha compute tpus tpu-vm ssh dishcloth --zone us-central1-a --project hf-flax``` 2. activate the venv: source ~/papugapt2/bin/activate 3. run pretraining: ``` cd papuGaPT2 bash pretrain_model.sh ``` ## Expected behavior Expected to get a training speed of around 1s/it on this dataset (as this was the speed achieved before updating to the latest scripts from master), but instead the speed is around 10-20s/it
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12556/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12555
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12555/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12555/comments
https://api.github.com/repos/huggingface/transformers/issues/12555/events
https://github.com/huggingface/transformers/pull/12555
938,838,052
MDExOlB1bGxSZXF1ZXN0Njg1MTQ5NjI3
12,555
Display error message for pipeline loading failures
{ "login": "xloem", "id": 279585, "node_id": "MDQ6VXNlcjI3OTU4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/279585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xloem", "html_url": "https://github.com/xloem", "followers_url": "https://api.github.com/users/xloem/followers", "following_url": "https://api.github.com/users/xloem/following{/other_user}", "gists_url": "https://api.github.com/users/xloem/gists{/gist_id}", "starred_url": "https://api.github.com/users/xloem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xloem/subscriptions", "organizations_url": "https://api.github.com/users/xloem/orgs", "repos_url": "https://api.github.com/users/xloem/repos", "events_url": "https://api.github.com/users/xloem/events{/privacy}", "received_events_url": "https://api.github.com/users/xloem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "I think this should be left to the following error message, that will let users know what classes were tried (all of them) and that they failed. The exact code should be left as follow-up.\r\n\r\nThis warning added is just noise in regular usage as it is expected that some classes won't work (so always issueing a warning when nothing is wrong).\r\n\r\nThis is not a super strong opinion though.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,631
1,631
CONTRIBUTOR
null
# What does this PR do? Displays an error message when a model class fails to load a model for a pipeline. This helped me understand what was going on when pipelines failed to load. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @Narsil @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12555/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12555", "html_url": "https://github.com/huggingface/transformers/pull/12555", "diff_url": "https://github.com/huggingface/transformers/pull/12555.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12555.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12554
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12554/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12554/comments
https://api.github.com/repos/huggingface/transformers/issues/12554/events
https://github.com/huggingface/transformers/issues/12554
938,814,942
MDU6SXNzdWU5Mzg4MTQ5NDI=
12,554
Issue converting Flax model to Pytorch
{ "login": "BirgerMoell", "id": 1704131, "node_id": "MDQ6VXNlcjE3MDQxMzE=", "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BirgerMoell", "html_url": "https://github.com/BirgerMoell", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Running the following command:\r\n\r\n```python\r\nfrom transformers import RobertaForMaskedLM, FlaxRobertaForMaskedLM\r\nimport numpy as np\r\nimport torch \r\n\r\nmodel_fx = FlaxRobertaForMaskedLM.from_pretrained(\"birgermoell/roberta-swedish\")\r\nmodel_pt = RobertaForMaskedLM.from_pretrained(\"birgermoell/roberta-swedish\", from_flax=True)\r\ninput_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)\r\ninput_ids_pt = torch.tensor(input_ids)\r\n\r\nlogits_pt = model_pt(input_ids_pt).logits\r\nprint(logits_pt)\r\nlogits_fx = model_fx(input_ids).logits\r\nprint(logits_fx)\r\n```\r\n\r\nshould give more or less identical results", "Just corrected the pt weights. If you run:\r\n\r\n```python\r\nfrom transformers import RobertaForMaskedLM, FlaxRobertaForMaskedLM\r\nimport numpy as np\r\nimport torch \r\n\r\nmodel_fx = FlaxRobertaForMaskedLM.from_pretrained(\"birgermoell/roberta-swedish\")\r\nmodel_pt = RobertaForMaskedLM.from_pretrained(\"birgermoell/roberta-swedish\")\r\ninput_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)\r\ninput_ids_pt = torch.tensor(input_ids)\r\n\r\nlogits_pt = model_pt(input_ids_pt).logits\r\nprint(logits_pt)\r\nlogits_fx = model_fx(input_ids).logits\r\nprint(logits_fx)\r\n```\r\n\r\nYou should see equal results. The checkpoint was somehow incorrectly converted.", "Note that one should convert checkpoints with:\r\n\r\n```python\r\nfrom transformers import RobertaForMaskedLM\r\n\r\nmodel = RobertaForMaskedLM.from_pretrained(\"...\", from_flax=True)\r\nmodel.save_pretrained(\"./\")\r\n```\r\n\r\nand not the `AutoModel....` classes.\r\n\r\nAlso it's important to realize that the lm head layer is actually tied to the input word embedding layer which is why Flax just doesn't save those weights. Then when converting those weights to PyTorch, PyTorch says there are missing but since the weights are tied PyTorch would have overwritten those weights anyways with the input embeddings which is why it the warning: \r\n\r\n```\r\nSome weights of RobertaForMaskedLM were not initialized from the Flax model and are newly initialized: ['lm_head.decoder.bias', 'lm_head.decoder.weight']\r\n```\r\n\r\ndoesn't matter.", "@BirgerMoell Also note that your local `.git` repository must be huge since you've essentially uploaded ~100 checkpoints of 500 MB each -> so your local `.git` stores 50 GB already I think.", "Widget seems to work: https://huggingface.co/birgermoell/roberta-swedish?text=Var+kan+jag+hitta+n%C3%A5gon+%3Cmask%3E+talar+engelska%3F", "Awesome. Just to clarify. 
Once I'm done with training, this script should help me convert the model to pytorch.\r\n \r\n```\r\nfrom transformers import RobertaForMaskedLM\r\n\r\nmodel = RobertaForMaskedLM.from_pretrained(\"...\", from_flax=True)\r\nmodel.save_pretrained(\"./\")\r\n```", "@patrickvonplaten the uploaded model is still performing poorly so I'm not 100% the issue is fully resolved.\r\n<img width=\"1801\" alt=\"Screenshot 2021-07-07 at 16 26 59\" src=\"https://user-images.githubusercontent.com/1704131/124778176-100ba900-df41-11eb-8345-b3b51e0d1e9f.png\">\r\nAs you can see it outputs empty tokens.", "> @patrickvonplaten the uploaded model is still performing poorly so I'm not 100% the issue is fully resolved.\r\n> <img alt=\"Screenshot 2021-07-07 at 16 26 59\" width=\"1801\" src=\"https://user-images.githubusercontent.com/1704131/124778176-100ba900-df41-11eb-8345-b3b51e0d1e9f.png\">\r\n> As you can see it outputs empty tokens.\r\n\r\nHi @BirgerMoell, I'm training a RoBERTa model too using JAX during this community week -- model [here](https://huggingface.co/flax-community/indonesian-roberta-base). I got about 2.188 evaluation loss, yet the results are still somewhat jibberish despite the result. I think our models are, somehow, trained incorrectly? Or possibly require more data cleaning of some sort.", "@w11wo Yeah. Something is definitely up. I think a good idea would be that people who work with similar models figure out a good way to clean the data and look at other things that might be wrong.", "Facing same issue here, trained a model with Flax / Jax, then saved. When loading in Pytorch via \"from Flax = True\" , I have silly output despite training showing OK loss... Did you manage to find a solution or understand the issue ? ", "Hi @jppaolim !\r\n\r\nIn my case, I loaded the earlier weights of the model (from the first few epochs), instead of the fully-trained model weights from the last training epoch. Loading the right model weights fixed it for me.\r\n\r\nAnother way to fix it might be training for longer.\r\n\r\nHope this helps! :) " ]
1,625
1,652
1,625
NONE
null
When using the following script to convert a trained flax model to pytorch, the model seems to perform extremely poorly. ``` from transformers import RobertaForMaskedLM model = RobertaForMaskedLM.from_pretrained("./", from_flax=True) model.save_pretrained("./") ``` ```python from transformers import RobertaForMaskedLM, FlaxRobertaForMaskedLM import numpy as np import torch model_fx = FlaxRobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish") model_pt = RobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish", from_flax=True) input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32) input_ids_pt = torch.tensor(input_ids) logits_pt = model_pt(input_ids_pt).logits print(logits_pt) logits_fx = model_fx(input_ids).logits print(logits_fx) ``` Comparing gives the following input. ``` tensor([[[ 1.7789, -13.5291, -11.2138, ..., -5.2875, -9.3274, -4.7912], [ 2.3076, -13.4161, -11.1511, ..., -5.3181, -9.0602, -4.6083], [ 2.6451, -13.4425, -11.0671, ..., -5.2838, -8.8323, -4.2280], ..., [ 1.9009, -13.6516, -11.2348, ..., -4.9726, -9.3278, -4.6060], [ 2.0522, -13.5394, -11.2804, ..., -4.9960, -9.1956, -4.5691], [ 2.2570, -13.5093, -11.2640, ..., -4.9986, -9.1292, -4.3310]], [[ 1.7789, -13.5291, -11.2138, ..., -5.2875, -9.3274, -4.7912], [ 2.3076, -13.4161, -11.1511, ..., -5.3181, -9.0602, -4.6083], [ 2.6451, -13.4425, -11.0671, ..., -5.2838, -8.8323, -4.2280], ..., [ 1.9009, -13.6516, -11.2348, ..., -4.9726, -9.3278, -4.6060], [ 2.0522, -13.5394, -11.2804, ..., -4.9960, -9.1956, -4.5691], [ 2.2570, -13.5093, -11.2640, ..., -4.9986, -9.1292, -4.3310]]], grad_fn=<AddBackward0>) [[[ 0.1418128 -14.170926 -11.12649 ... -7.542998 -10.79537 -9.382975 ] [ 1.7505689 -13.178099 -10.356588 ... -6.794136 -10.567211 -8.6670065 ] [ 2.0270724 -13.522658 -10.372475 ... -7.0110755 -10.396935 -8.419178 ] ... [ 0.19080782 -14.390833 -11.399942 ... -7.469897 -10.715849 -9.234054 ] [ 1.3052869 -13.332332 -10.702984 ... -6.9498534 -10.813769 -8.608736 ] [ 1.6442876 -13.226774 -10.59941 ... -7.0290956 -10.693554 -8.457008 ]] [[ 0.1418128 -14.170926 -11.12649 ... -7.542998 -10.79537 -9.382975 ] [ 1.7505689 -13.178099 -10.356588 ... -6.794136 -10.567211 -8.6670065 ] [ 2.0270724 -13.522658 -10.372475 ... -7.0110755 -10.396935 -8.419178 ] ... [ 0.19080782 -14.390833 -11.399942 ... -7.469897 -10.715849 -9.234054 ] [ 1.3052869 -13.332332 -10.702984 ... -6.9498534 -10.813769 -8.608736 ] [ 1.6442876 -13.226774 -10.59941 ... -7.0290956 -10.693554 -8.457008 ]]] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12554/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12553
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12553/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12553/comments
https://api.github.com/repos/huggingface/transformers/issues/12553/events
https://github.com/huggingface/transformers/issues/12553
938,765,007
MDU6SXNzdWU5Mzg3NjUwMDc=
12,553
`model_name_or_path` does not seem to load in previously trained checkpoints
{ "login": "MalteHB", "id": 47593213, "node_id": "MDQ6VXNlcjQ3NTkzMjEz", "avatar_url": "https://avatars.githubusercontent.com/u/47593213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MalteHB", "html_url": "https://github.com/MalteHB", "followers_url": "https://api.github.com/users/MalteHB/followers", "following_url": "https://api.github.com/users/MalteHB/following{/other_user}", "gists_url": "https://api.github.com/users/MalteHB/gists{/gist_id}", "starred_url": "https://api.github.com/users/MalteHB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MalteHB/subscriptions", "organizations_url": "https://api.github.com/users/MalteHB/orgs", "repos_url": "https://api.github.com/users/MalteHB/repos", "events_url": "https://api.github.com/users/MalteHB/events{/privacy}", "received_events_url": "https://api.github.com/users/MalteHB/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you post a code snippet or make a Colab to reproduce the error?", "@MalteHB it would be nice if you could provide an exact code snippet that we can copy paste to reproduce the error. Otherwise I don't really know what code you've run. I tried re-starting to train from a pretrained checkpoint and it works just fine on my side.", "@NielsRogge @patrickvonplaten yes of course, sorry! \r\n\r\nAs mentioned we are using a modified version of the `run_mlm_flax_stream.py` script which you can find [here](https://huggingface.co/flax-community/roberta-large-scandi/blob/main/src/run_mlm_flax_stream.py), and the code used to run the script is, where `\"/home/Z6HJB/roberta-large-scandi/roberta-base-pretrained-scandinavian/\"` is a directory with a `config.json`and a `flax_model.msgpack`: \r\n```\r\nexport MODEL_DIR=/home/Z6HJB/roberta-large-scandi/roberta-base-pretrained-scandinavian/\r\n\r\nsource /home/Z6HJB/test/bin/activate\r\n\r\npython3 ./src/run_mlm_flax_stream.py \\\r\n --model_name_or_path=\"/home/Z6HJB/roberta-large-scandi/roberta-base-pretrained-scandinavian/\" \\\r\n --output_dir=\"/home/Z6HJB/roberta-large-scandi/model_continued2\" \\\r\n --tokenizer_name=\"${MODEL_DIR}\" \\\r\n --dataset_name=\"mc4\" \\\r\n --dataset_config_name=\"unshuffled_deduplicated_en\" \\\r\n --max_seq_length=\"128\" \\\r\n --per_device_train_batch_size=\"128\" \\\r\n --per_device_eval_batch_size=\"128\" \\\r\n --learning_rate=\"3e-4\" \\\r\n --warmup_steps=\"1000\" \\\r\n --overwrite_output_dir \\\r\n --adam_beta1=\"0.9\" \\\r\n --adam_beta2=\"0.98\" \\\r\n --num_train_steps=\"1000000\" \\\r\n --num_eval_samples=\"5000\" \\\r\n --save_steps=\"1000\" \\\r\n --logging_steps=\"25\" \\\r\n --eval_steps=\"1000\" \\\r\n --push_to_hub \\\r\n #--config_name=\"${MODEL_DIR}\" \\\r\n #--model_type=\"roberta\" \\\r\n```\r\nLet me know if this suffices or if you need more!\r\n\r\nI might be busy for the rest of the day since I have a football match to watch 🇩🇰 🇩🇰 🇩🇰 🇩🇰 🇩🇰 🇩🇰 ", "I figured it out this line: https://huggingface.co/flax-community/roberta-large-scandi/blob/main/src/run_mlm_flax_stream.py#L439\r\n\r\nForces the model to be reinitialized from scratch everytime. I see that the official script also does that -> I'll open a PR to fix it in the official script and then you shoudl be able to copy paste from it :-) ", "What an awesome guy, you are @patrickvonplaten! Thank you so much!" ]
1,625
1,625
1,625
NONE
null
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: Using TPU - Using distributed or parallel set-up in script?: Yes ## Information The model I am using is RoBERTa, and it is part of the flax-community week. I am trying to load a previously trained model checkpoint by setting the `model_name_or_path` flag in an MLM script, which can be found [here](https://huggingface.co/flax-community/roberta-large-scandi/blob/main/src/run_mlm_flax_stream.py), but it seems that the model is initialized with new weights... (a hedged sketch of the intended loading behavior follows this record). ## Expected behavior Seeing the model training loss continue from where it stopped, rather than the new run's metrics simply mimicking the already-trained run's metrics.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12553/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12553/timeline
completed
null
null
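A hedged sketch of the intended resume behavior discussed above, assuming a local directory containing `config.json` and `flax_model.msgpack`. The helper name, the placeholder path, and the choice of `FlaxRobertaForMaskedLM` are illustrative and do not reproduce the exact logic of the training script.

```python
# Load existing weights when a checkpoint path is given, otherwise initialize fresh ones.
from transformers import FlaxRobertaForMaskedLM, RobertaConfig


def load_or_init(model_name_or_path=None):
    if model_name_or_path:
        # Resume from a directory containing config.json + flax_model.msgpack
        return FlaxRobertaForMaskedLM.from_pretrained(model_name_or_path)
    # Otherwise start from freshly initialized weights
    return FlaxRobertaForMaskedLM(RobertaConfig(), seed=0)


model = load_or_init()  # fresh weights here; pass a checkpoint directory to resume instead
```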
https://api.github.com/repos/huggingface/transformers/issues/12552
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12552/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12552/comments
https://api.github.com/repos/huggingface/transformers/issues/12552/events
https://github.com/huggingface/transformers/pull/12552
938,709,151
MDExOlB1bGxSZXF1ZXN0Njg1MDQyMzAy
12,552
Make LazyModule picklable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This doesn't work sadly, because it then sets the wrong `__file__` and `__path__` attribute to the `transformers` module:\r\n```py\r\nimport transformers\r\ntransformers.__file__\r\n```\r\nwill return something like `'/home/sgugger/git/transformers/src/transformers/file_utils.py'` instead of `'/home/sgugger/git/transformers/src/transformers/__init__.py'`.\r\n\r\nThis will then probably mess up lots of things that depend on those attributes.\r\n\r\nI can look at another solution when I have some time.", "Indeed ! Thanks for checking\r\nAt least I tried x)\r\n\r\nThere must be a way to keep _LazyModule inside `__init__.py` and make it picklable. Currently the issue is that it can't be imported like this `from transformers import _LazyModule`. But as soon as we enable its import, it will be possible to pickle it.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.", "Resolved by #12567" ]
1,625
1,651
1,628
MEMBER
null
From this issue https://github.com/huggingface/transformers/issues/12549 it seems that it would be nice to have the `transformers` module picklable, since that can be useful, for example, for the `datasets` library's caching. The only object that is currently not picklable is the `_LazyModule`. In this PR I just made this object picklable, so `transformers` becomes picklable as well (a hedged repro sketch follows this record). This should hopefully help with issue https://github.com/huggingface/transformers/issues/12549
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12552/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12552/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12552", "html_url": "https://github.com/huggingface/transformers/pull/12552", "diff_url": "https://github.com/huggingface/transformers/pull/12552.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12552.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/12551
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12551/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12551/comments
https://api.github.com/repos/huggingface/transformers/issues/12551/events
https://github.com/huggingface/transformers/pull/12551
938,688,705
MDExOlB1bGxSZXF1ZXN0Njg1MDI1ODY2
12,551
[trainer] add option to ignore keys for the train function too (#11719)
{ "login": "shabie", "id": 30535146, "node_id": "MDQ6VXNlcjMwNTM1MTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/30535146?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shabie", "html_url": "https://github.com/shabie", "followers_url": "https://api.github.com/users/shabie/followers", "following_url": "https://api.github.com/users/shabie/following{/other_user}", "gists_url": "https://api.github.com/users/shabie/gists{/gist_id}", "starred_url": "https://api.github.com/users/shabie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shabie/subscriptions", "organizations_url": "https://api.github.com/users/shabie/orgs", "repos_url": "https://api.github.com/users/shabie/repos", "events_url": "https://api.github.com/users/shabie/events{/privacy}", "received_events_url": "https://api.github.com/users/shabie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
CONTRIBUTOR
null
# What does this PR do? This pull request adds the option of ignoring certain output keys for evaluation during the training phase. As of now, this option is only available for the `predict` and `evaluate` methods of the `Trainer` class, which can only be called after training. Fixes #11719 Changes: 1. Add a new parameter to the `Trainer.train` function called `ignore_keys_for_eval`. 2. Pass this to the `ignore_keys` parameter of the `trainer.evaluate` function that is already called within the `trainer.train` function. 3. Add the parameter to the docstring.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12551/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12551", "html_url": "https://github.com/huggingface/transformers/pull/12551", "diff_url": "https://github.com/huggingface/transformers/pull/12551.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12551.patch", "merged_at": 1625659666000 }
https://api.github.com/repos/huggingface/transformers/issues/12550
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12550/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12550/comments
https://api.github.com/repos/huggingface/transformers/issues/12550/events
https://github.com/huggingface/transformers/pull/12550
938,668,770
MDExOlB1bGxSZXF1ZXN0Njg1MDA5NDcx
12,550
This will reduce "Already borrowed error":
{ "login": "Narsil", "id": 204321, "node_id": "MDQ6VXNlcjIwNDMyMQ==", "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Narsil", "html_url": "https://github.com/Narsil", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "organizations_url": "https://api.github.com/users/Narsil/orgs", "repos_url": "https://api.github.com/users/Narsil/repos", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "received_events_url": "https://api.github.com/users/Narsil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@LysandreJik I would welcome your look on this too. " ]
1,625
1,643
1,625
CONTRIBUTOR
null
# What does this PR do? Original issue: https://github.com/huggingface/tokenizers/issues/537 The original issue is caused by transformers calling mutable functions on the Rust tokenizers many times. Rust needs to guarantee that only one agent has a mutable reference to memory at a given time (for many reasons that don't need explaining here). Usually, the Rust compiler can guarantee that this property holds at compile time. Unfortunately, Python cannot provide that compile-time guarantee, so PyO3, the bridge between Rust and Python used by `tokenizers`, exchanges the compile-time guarantee for a dynamic one: if multiple agents try to hold multiple mutable borrows at the same time, the runtime will yell with "Already borrowed". The proposed fix here in transformers is simply to reduce the actual number of calls that really need mutable borrows; by reducing them, we reduce the risk of running into the "Already borrowed" error (a hedged sketch of this pattern follows this record). The caveat is that we now add a call to read the current configuration of the `_tokenizer`, so in the worst case we have 2 calls instead of 1, and in the best case we simply have 1 plus a Python comparison of a dict (should be negligible). Fixes https://github.com/huggingface/tokenizers/issues/537 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @n1t0 @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12550/reactions", "total_count": 7, "+1": 1, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12550/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12550", "html_url": "https://github.com/huggingface/transformers/pull/12550", "diff_url": "https://github.com/huggingface/transformers/pull/12550.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12550.patch", "merged_at": 1625816165000 }
https://api.github.com/repos/huggingface/transformers/issues/12549
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12549/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12549/comments
https://api.github.com/repos/huggingface/transformers/issues/12549/events
https://github.com/huggingface/transformers/issues/12549
938,453,201
MDU6SXNzdWU5Mzg0NTMyMDE=
12,549
TypeError: cannot pickle '_LazyModule' object
{ "login": "lancekung", "id": 19167336, "node_id": "MDQ6VXNlcjE5MTY3MzM2", "avatar_url": "https://avatars.githubusercontent.com/u/19167336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lancekung", "html_url": "https://github.com/lancekung", "followers_url": "https://api.github.com/users/lancekung/followers", "following_url": "https://api.github.com/users/lancekung/following{/other_user}", "gists_url": "https://api.github.com/users/lancekung/gists{/gist_id}", "starred_url": "https://api.github.com/users/lancekung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lancekung/subscriptions", "organizations_url": "https://api.github.com/users/lancekung/orgs", "repos_url": "https://api.github.com/users/lancekung/repos", "events_url": "https://api.github.com/users/lancekung/events{/privacy}", "received_events_url": "https://api.github.com/users/lancekung/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you please attach the final script you used or a branch that we can use to reproduce your code exactly? Thanks.\r\n\r\nnote: I took the liberty to edit your OP to use code formatting which is much easier to read. If possible use a similar approach in future reports. Thank you!\r\n", "> Could you please attach the final script you used or a branch that we can use to reproduce your code exactly? Thanks.\r\n> \r\n> note: I took the liberty to edit your OP to use code formatting which is much easier to read. If possible use a similar approach in future reports. Thank you!\r\n\r\nthis is my scripts, thanks very much!\r\n[run_clm.py.zip](https://github.com/huggingface/transformers/files/6774180/run_clm.py.zip)\r\n", "Thank you. The attached script fails for me. You also didn't supply the data, but I assume it doesn't matter. In the future please supply everything or adapt your runtime so that we could run it out of the box and not need to spend a lot of time to try to make things work.\r\n\r\n```\r\npython run_clm.py \\\r\n> --model_name_or_path gpt2 \\\r\n> --dataset_name wikitext \\\r\n> --dataset_config_name wikitext-2-raw-v1 \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --output_dir /tmp/test-clm\r\n2021-07-06 21:18:15.064178: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\r\n2\r\n2021-07-06 21:18:17.425481: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\r\n2021-07-06 21:18:17.425484: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0\r\nProcess Process-1:\r\nTraceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\nTypeError: init_process() missing 1 required positional argument: 'fn'\r\nProcess Process-2:\r\nTraceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\nTypeError: init_process() missing 1 required positional argument: 'fn'\r\n```\r\n\r\nsame failure with distributed.", "> Thank you. The attached script fails for me. You also didn't supply the data, but I assume it doesn't matter. In the future please supply everything or adapt your runtime so that we could run it out of the box and not need to spend a lot of time to try to make things work.\r\n\r\nSo sorry, it's my fault, I gave you the wrong version.\r\nThis is the right version.\r\n[run_clm.py.zip](https://github.com/huggingface/transformers/files/6774314/run_clm.py.zip)\r\n", "I'm able to reproduce the problem - great! \r\n\r\nLet's see what the culprit is.", "So the trigger is: `--preprocessing_num_workers 32`\r\n\r\nand the minimal reproduction cmd is:\r\n```\r\npython run_clm.py --model_name_or_path sshleifer/tiny-gpt2 --dataset_name wikitext \\\r\n--dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm \\\r\n--overwrite_output_dir --preprocessing_num_workers 32\r\n```\r\n\r\nIt happens only with your version of the script. 
I tested with the one in `master` it works fine there.\r\n\r\nThe problem is unrelated to the change in https://github.com/huggingface/transformers/pull/11168 as you have discovered yourself, since your code removed my changes and you're just passing:\r\n```\r\n def tokenize_function1(examples):\r\n return tokenizer(examples[text_column_name])\r\n```\r\n\r\nSo need to look elsewhere for the cause.", "From a quick look I suspect that perhaps this is an issue in `datasets` when `num_proc > 1`? Could you try to reduce the script to the bare minimum, so that it runs just:\r\n\r\n```\r\n with training_args.main_process_first(desc=\"dataset map tokenization\"):\r\n tokenized_datasets = raw_datasets.map(\r\n None,\r\n num_proc=5,\r\n )\r\n```\r\n\r\ninside the multi-proc modifications you made.\r\n\r\ne.g. the above is enough to trigger the same error in the script so removing most of the code should \r\n", "OK, here is the minimal reproducible script. Totally unrelated to `transformers` it seems except for the import of `transformers`\r\n\r\n```\r\nimport logging\r\nimport math\r\nimport os\r\nimport sys\r\nfrom dataclasses import dataclass, field\r\nfrom typing import Optional\r\n\r\nimport datasets\r\nfrom datasets import load_dataset\r\n\r\nimport transformers\r\n\r\nimport torch\r\nimport torch.distributed as dist\r\nimport torch.multiprocessing as mp\r\n\r\ndef main(rank, size):\r\n\r\n def tokenize_function(examples):\r\n return None\r\n\r\n raw_datasets = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n num_proc=32,\r\n )\r\n\r\ndef _mp_fn(index):\r\n # For xla_spawn (TPUs)\r\n main()\r\n\r\ndef init_process(rank, size, fn, backend='gloo'):\r\n \"\"\" Initialize the distributed environment. 
\"\"\"\r\n os.environ['MASTER_ADDR'] = '127.0.0.1'\r\n os.environ['MASTER_PORT'] = '29500'\r\n dist.init_process_group(backend, rank=rank, world_size=size)\r\n fn(rank, size)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n # main()\r\n # size = int(os.environ['WORLD_SIZE'])\r\n size = int(torch.cuda.device_count())\r\n print(size)\r\n processes = []\r\n mp.set_start_method(\"spawn\")\r\n for rank in range(size):\r\n p = mp.Process(target=init_process, args=(rank, size, main))\r\n p.start()\r\n processes.append(p)\r\n\r\n for p in processes:\r\n p.join()\r\n\r\n```\r\n\r\nthis still fails with the same error.\r\n\r\n```\r\npython run_clm.py\r\n2\r\nReusing dataset wikitext (/home/stas/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)\r\nReusing dataset wikitext (/home/stas/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)\r\nProcess Process-1:\r\nProcess Process-2:\r\nTraceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 60, in init_process\r\n fn(rank, size)\r\n File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 46, in main\r\n tokenized_datasets = raw_datasets.map(\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", line 471, in map\r\n {\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", line 472, in <dictcomp>\r\n k: dataset.map(\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 771, in get\r\n raise self._value\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 537, in _handle_tasks\r\n put(task)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 498, in dump\r\n StockPickler.dump(self, obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 487, in dump\r\n self.save(obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n save(element)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in 
save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n save(v)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 1493, in save_function\r\n pickler.save_reduce(_create_function, (obj.__code__,\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 692, in save_reduce\r\n save(args)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n save(element)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n save(v)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 578, in save\r\n rv = reduce(self.proto)\r\nTypeError: cannot pickle '_LazyModule' object\r\nTraceback (most recent call last):\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 60, in init_process\r\n fn(rank, size)\r\n File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 46, in main\r\n tokenized_datasets = raw_datasets.map(\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", line 471, in map\r\n {\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", line 472, in <dictcomp>\r\n k: dataset.map(\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 771, in get\r\n raise self._value\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 537, in _handle_tasks\r\n put(task)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File 
\"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 498, in dump\r\n StockPickler.dump(self, obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 487, in dump\r\n self.save(obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n save(element)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n save(v)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 1493, in save_function\r\n pickler.save_reduce(_create_function, (obj.__code__,\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 692, in save_reduce\r\n save(args)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n save(element)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n save(v)\r\n File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 578, in save\r\n rv = reduce(self.proto)\r\nTypeError: cannot pickle '_LazyModule' object\r\n```\r\n\r\nBut if you either:\r\n* comment out `import transformers` \r\n* or set `num_proc=1` in `datasets.map` (instead of `n>1`)\r\nall is good.\r\n\r\n@lhoestq, @albertvillanova - does this ring any bells? Clearly `transformers` loads some module lazily and trips up `datasets` even though transformers isn't really used here directly. Thank you.\r\n\r\n", "> OK, here is the minimal reproducible script. 
Totally unrelated to `transformers` it seems except for the import of `transformers`\r\n> \r\n> ```\r\n> import logging\r\n> import math\r\n> import os\r\n> import sys\r\n> from dataclasses import dataclass, field\r\n> from typing import Optional\r\n> \r\n> import datasets\r\n> from datasets import load_dataset\r\n> \r\n> import transformers\r\n> \r\n> import torch\r\n> import torch.distributed as dist\r\n> import torch.multiprocessing as mp\r\n> \r\n> def main(rank, size):\r\n> \r\n> def tokenize_function(examples):\r\n> return None\r\n> \r\n> raw_datasets = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")\r\n> tokenized_datasets = raw_datasets.map(\r\n> tokenize_function,\r\n> num_proc=32,\r\n> )\r\n> \r\n> def _mp_fn(index):\r\n> # For xla_spawn (TPUs)\r\n> main()\r\n> \r\n> def init_process(rank, size, fn, backend='gloo'):\r\n> \"\"\" Initialize the distributed environment. \"\"\"\r\n> os.environ['MASTER_ADDR'] = '127.0.0.1'\r\n> os.environ['MASTER_PORT'] = '29500'\r\n> dist.init_process_group(backend, rank=rank, world_size=size)\r\n> fn(rank, size)\r\n> \r\n> \r\n> if __name__ == \"__main__\":\r\n> # main()\r\n> # size = int(os.environ['WORLD_SIZE'])\r\n> size = int(torch.cuda.device_count())\r\n> print(size)\r\n> processes = []\r\n> mp.set_start_method(\"spawn\")\r\n> for rank in range(size):\r\n> p = mp.Process(target=init_process, args=(rank, size, main))\r\n> p.start()\r\n> processes.append(p)\r\n> \r\n> for p in processes:\r\n> p.join()\r\n> ```\r\n> \r\n> this still fails with the same error.\r\n> \r\n> ```\r\n> python run_clm.py\r\n> 2\r\n> Reusing dataset wikitext (/home/stas/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)\r\n> Reusing dataset wikitext (/home/stas/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)\r\n> Process Process-1:\r\n> Process Process-2:\r\n> Traceback (most recent call last):\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n> self.run()\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n> self._target(*self._args, **self._kwargs)\r\n> File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 60, in init_process\r\n> fn(rank, size)\r\n> File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 46, in main\r\n> tokenized_datasets = raw_datasets.map(\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", line 471, in map\r\n> {\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", line 472, in <dictcomp>\r\n> k: dataset.map(\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in map\r\n> transformed_shards = [r.get() for r in results]\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in <listcomp>\r\n> transformed_shards = [r.get() for r in results]\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 771, in get\r\n> raise self._value\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 537, in _handle_tasks\r\n> put(task)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/connection.py\", line 209, in send\r\n> 
self._send_bytes(_ForkingPickler.dumps(obj))\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n> cls(buf, protocol, *args, **kwds).dump(obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 498, in dump\r\n> StockPickler.dump(self, obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 487, in dump\r\n> self.save(obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n> save(element)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in save_module_dict\r\n> StockPickler.save_dict(pickler, obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n> self._batch_setitems(obj.items())\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n> save(v)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 1493, in save_function\r\n> pickler.save_reduce(_create_function, (obj.__code__,\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 692, in save_reduce\r\n> save(args)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n> save(element)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in save_module_dict\r\n> StockPickler.save_dict(pickler, obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n> self._batch_setitems(obj.items())\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n> save(v)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 578, in save\r\n> rv = reduce(self.proto)\r\n> TypeError: cannot pickle '_LazyModule' object\r\n> Traceback (most recent call last):\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n> self.run()\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n> self._target(*self._args, **self._kwargs)\r\n> File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 60, in init_process\r\n> fn(rank, size)\r\n> File \"/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py\", line 46, in main\r\n> tokenized_datasets = raw_datasets.map(\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", line 471, in map\r\n> {\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py\", 
line 472, in <dictcomp>\r\n> k: dataset.map(\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in map\r\n> transformed_shards = [r.get() for r in results]\r\n> File \"/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py\", line 1736, in <listcomp>\r\n> transformed_shards = [r.get() for r in results]\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 771, in get\r\n> raise self._value\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py\", line 537, in _handle_tasks\r\n> put(task)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/connection.py\", line 209, in send\r\n> self._send_bytes(_ForkingPickler.dumps(obj))\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n> cls(buf, protocol, *args, **kwds).dump(obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 498, in dump\r\n> StockPickler.dump(self, obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 487, in dump\r\n> self.save(obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n> save(element)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in save_module_dict\r\n> StockPickler.save_dict(pickler, obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n> self._batch_setitems(obj.items())\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n> save(v)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 1493, in save_function\r\n> pickler.save_reduce(_create_function, (obj.__code__,\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 692, in save_reduce\r\n> save(args)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n> save(element)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 560, in save\r\n> f(self, obj) # Call unbound method with explicit self\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py\", line 990, in save_module_dict\r\n> StockPickler.save_dict(pickler, obj)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 971, in save_dict\r\n> self._batch_setitems(obj.items())\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n> save(v)\r\n> File \"/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py\", line 578, in save\r\n> rv = reduce(self.proto)\r\n> TypeError: cannot pickle '_LazyModule' 
object\r\n> ```\r\n> \r\n> But if you either:\r\n> \r\n> * comment out `import transformers`\r\n> * or set `num_proc=1` in `datasets.map` (instead of `n>1`)\r\n> all is good.\r\n> \r\n> @lhoestq, @albertvillanova - does this ring any bells? Clearly `transformers` loads some module lazily and trips up `datasets` even though transformers isn't really used here directly. Thank you.\r\n\r\nThank you so much for your time, and hope other experts can give some tips about this problem.", "Hi @stas00, thanks for pinging. \r\n\r\nI'm having a look and after a first search, I think you are right and the problem comes from the fact that `transformers` makes a lazy import when importing it. I guess this affects `datasets` here: https://github.com/huggingface/datasets/blob/master/src/datasets/utils/py_utils.py#L319 (PR: https://github.com/huggingface/datasets/pull/502), which is used by dumps to pickle objects in a multiprocessing setup.\r\n\r\ncc: @lhoestq ", "> Hi @stas00, thanks for pinging.\r\n> \r\n> I'm having a look and after a first search, I think you are right and the problem comes from the fact that `transformers` makes a lazy import when importing it. I guess this affects `datasets` here: https://github.com/huggingface/datasets/blob/master/src/datasets/utils/py_utils.py#L319 (PR: [huggingface/datasets#502](https://github.com/huggingface/datasets/pull/502)), which is used by dumps to pickle objects in a multiprocessing setup.\r\n> \r\n> cc: @lhoestq\r\n\r\nhi albertvillanova, I removed import of transformers according to the following code, it still can't work.\r\n\r\n\r\n`def _no_cache_fields(obj):\r\n try:\r\n if (\r\n \"PreTrainedTokenizerBase\" in [base_class.__name__ for base_class in type(obj).__mro__]\r\n and hasattr(obj, \"cache\")\r\n and isinstance(obj.cache, dict)\r\n )`\r\n", "Note that we can easily make `_LazyModule` picklable. I can open a PR if needed to implement a `__reduce__` method for `_LazyModule`. It's the only object that prevents `transformers` from being picklable.\r\n\r\nEDIT: here it is: https://github.com/huggingface/transformers/pull/12552\r\n\r\nThis is just a way to easily fix this issue, but I think we should definitely keep trying to figure out why it tried to pickle `transformers` in the first place. 
This might come from `dill` that pickles the globals of some environments when pickling any object", "Linking to the new PR: https://github.com/huggingface/transformers/pull/12567\r\n", "Should be closed by #12567, please let us know if the problem persists.", "> Should be closed by #12567, please let us know if the problem persists.\r\n\r\nHi, a new problem has arisen\r\nwe can pickle \"LazyModule\" now, but can't pickle <class 'types.AutoModelForCausalLM'>\r\n\r\n\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py\", line 315, in _bootstrap\r\n self.run()\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py\", line 509, in init_process\r\n fn(rank, size)\r\n File \"/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py\", line 367, in main\r\n tokenized_datasets = raw_datasets.map(\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 471, in map\r\n {\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 472, in <dictcomp>\r\n k: dataset.map(\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1736, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1736, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py\", line 771, in get\r\n raise self._value\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py\", line 537, in _handle_tasks\r\n put(task)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py\",line 498, in dump\r\n StockPickler.dump(self, obj)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 487, in dump\r\n self.save(obj)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n save(element)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py\",line 990, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 971, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n save(v)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py\",line 1493, in save_function\r\n 
pickler.save_reduce(_create_function, (obj.__code__,\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 692, in save_reduce\r\n save(args)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 901, in save_tuple\r\n save(element)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py\",line 990, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 971, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 997, in _batch_setitems\r\n save(v)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 560, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py\",line 1439, in save_type\r\n StockPickler.save_global(pickler, obj, name=name)\r\n File \"/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py\", line 1070, in save_global\r\n raise PicklingError(\r\n_pickle.PicklingError: Can't pickle <class 'types.AutoModelForCausalLM'>: it's notfound as types.AutoModelForCausalLM" ]
1,625
1,625
1,625
NONE
null
@stas00 edit: please see https://github.com/huggingface/transformers/issues/12549#issuecomment-875287701 for the short reproduction script. ---------------- ## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux with Nvidia P40 - Python version: 3.8.0 - PyTorch version (GPU?): 1.8.0 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help @stas00 @patrickvonplaten, @LysandreJik ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: (give details below) * [√] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [√] my own task or dataset: (give details below) ## To reproduce I am running the minimal command: ``` python run_clm.py \ --model_name_or_path /mycheckpoin/ \ --train_file train.txt \ --validation_file eval.txt \ --do_train \ --do_eval \ --output_dir ./models/ \ --no_cuda False \ --fp16 \ --sharded_ddp simple \ --num_train_epochs 3.0 \ --disable_tqdm False \ --save_steps 100 \ --preprocessing_num_workers 32 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 ``` and I modified the following parts of the script ‘run_clm.py’, and the parameter rank passed in training_args.local_rank ``` def init_process(rank, size, fn, backend='gloo'): """ Initialize the distributed environment. """ os.environ['MASTER_ADDR'] = '127.0.0.1' os.environ['MASTER_PORT'] = '29500' dist.init_process_group(backend, rank=rank, world_size=size) fn(rank) if __name__ == "__main__": # main() # size = int(os.environ['WORLD_SIZE']) size = int(torch.cuda.device_count()) print(size) processes = [] mp.set_start_method("spawn") for rank in range(size): p = mp.Process(target=init_process, args=(rank, main)) p.start() processes.append(p) for p in processes: p.join() ``` the traceback informations are: ``` Process Process-2: Traceback (most recent call last): File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap self.run() File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 511, in init_process fn(rank, size) File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 367, in main tokenized_datasets = raw_datasets.map( File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 471, in map { File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 472, in <dictcomp> k: dataset.map( File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in map transformed_shards = [r.get() for r in results] File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in <listcomp> transformed_shards = [r.get() for r in results] File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get raise self._value File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks put(task) File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in 
dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 498, in dump StockPickler.dump(self, obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 487, in dump self.save(obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple save(element) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems save(v) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 1493, in save_function pickler.save_reduce(_create_function, (obj.__code__, File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 692, in save_reduce save(args) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple save(element) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict StockPickler.save_dict(pickler, obj) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems save(v) File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 578, in save rv = reduce(self.proto) TypeError: cannot pickle '_LazyModule' object ``` I run the following command based on the original script, it works well. The reason why I don't use this command is that our cluster doesn't support this way of passing parameters: "-m torch.distributed.launch --nproc_per_node=4 " ``` python -m torch.distributed.launch --nproc_per_node=4 run_clm.py \ --model_name_or_path /mycheckpoin/ \ --train_file train.txt \ --validation_file eval.txt \ --do_train \ --do_eval \ --output_dir ./models/ \ --no_cuda False \ --fp16 \ --sharded_ddp simple \ --num_train_epochs 3.0 \ --disable_tqdm False \ --save_steps 100 \ --preprocessing_num_workers 32 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 ``` ## Expected behavior
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12549/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12549/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12548
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12548/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12548/comments
https://api.github.com/repos/huggingface/transformers/issues/12548/events
https://github.com/huggingface/transformers/pull/12548
938,451,037
MDExOlB1bGxSZXF1ZXN0Njg0ODM0OTg2
12,548
raise exception when arguments to pipeline are incomplete
{ "login": "hwijeen", "id": 29157715, "node_id": "MDQ6VXNlcjI5MTU3NzE1", "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hwijeen", "html_url": "https://github.com/hwijeen", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "repos_url": "https://api.github.com/users/hwijeen/repos", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,625
1,625
1,625
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12478 (issue) As discussed in the issue, this PR adds an exception when arguments to `pipeline` are incomplete. Incomplete cases are providing `tokenizer` or `feature_extractor` without specifying the model, which could lead to unexpected behavior demonstrated in the issue. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @Narsil <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12548/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12548/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/12548", "html_url": "https://github.com/huggingface/transformers/pull/12548", "diff_url": "https://github.com/huggingface/transformers/pull/12548.diff", "patch_url": "https://github.com/huggingface/transformers/pull/12548.patch", "merged_at": 1625732254000 }
https://api.github.com/repos/huggingface/transformers/issues/12547
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12547/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12547/comments
https://api.github.com/repos/huggingface/transformers/issues/12547/events
https://github.com/huggingface/transformers/issues/12547
938,377,239
MDU6SXNzdWU5MzgzNzcyMzk=
12,547
Getting started with CamembertForSequenceClassification
{ "login": "ewayuan", "id": 28846772, "node_id": "MDQ6VXNlcjI4ODQ2Nzcy", "avatar_url": "https://avatars.githubusercontent.com/u/28846772?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ewayuan", "html_url": "https://github.com/ewayuan", "followers_url": "https://api.github.com/users/ewayuan/followers", "following_url": "https://api.github.com/users/ewayuan/following{/other_user}", "gists_url": "https://api.github.com/users/ewayuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/ewayuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ewayuan/subscriptions", "organizations_url": "https://api.github.com/users/ewayuan/orgs", "repos_url": "https://api.github.com/users/ewayuan/repos", "events_url": "https://api.github.com/users/ewayuan/events{/privacy}", "received_events_url": "https://api.github.com/users/ewayuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nyou are initializing a `CamembertForSequenceClassification` model with weights from `camembert-base`. This means that you are only initializing the base of the model, not the classification head. Hence, the head will have randomly initialized weights. This is also given as a warning:\r\n\r\n```\r\nSome weights of the model checkpoint at ./nlp_models/camembert-base were not used when initializing CamembertForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']\r\n- This IS expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of CamembertForSequenceClassification were not initialized from the model checkpoint at ./nlp_models/camembert-base and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\n```\r\n\r\nThis tells you exactly that: one should first fine-tune `CamembertForSequenceClassification` on a downstream task (in this case, a labeled dataset of sentence pairs that are labeled with either paraphrase/not paraphrase). You can check the [hub](https://huggingface.co/models?search=camembert) to see whether someone has already fine-tuned CamemBERT for paraphrasing (however apparently this doesn't seem to be the case).\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored." ]
1,625
1,629
1,629
NONE
null
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: Windows - Python version: 3.9 - PyTorch version (GPU?): 1.9.0 - Tensorflow version (GPU?): - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): CamemBert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) https://huggingface.co/transformers/task_summary.html I follow the above Summary of the tasks, in the Sequence Classification section. I'm trying to find the paraphrase probability of two French Sentence sentences by using CamemBertForSqeuenceClassification, But I got the the following warning and output. How could I edit my code? ``` Some weights of the model checkpoint at ./nlp_models/camembert-base were not used when initializing CamembertForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of CamembertForSequenceClassification were not initialized from the model checkpoint at ./nlp_models/camembert-base and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. not paraphrase: 47% is paraphrase: 53% not paraphrase: 47% is paraphrase: 53% ``` ## To reproduce Steps to reproduce the behavior: ```Python import torch from transformers import CamembertTokenizer from transformers.models.camembert.modeling_camembert import CamembertForSequenceClassification tokenizer = CamembertTokenizer.from_pretrained("./nlp_models/camembert-base") model = CamembertForSequenceClassification.from_pretrained("./nlp_models/camembert-base") classes = ["not paraphrase", "is paraphrase"] sequence_0 = 'La société HuggingFace est basée à New York City' sequence_1 = 'Les pommes sont particulièrement mauvaises pour la santé' sequence_2 = "Le siège social de HuggingFace est situé à Manhattan" # paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt") not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt") paraphrase_classification_logits = model(**paraphrase).logits not_paraphrase_classification_logits = model(**not_paraphrase).logits paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0] not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0] # Should be paraphrase for i in range(len(classes)): print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%") # Should not be paraphrase for i in range(len(classes)): print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%") ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12547/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12546
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12546/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12546/comments
https://api.github.com/repos/huggingface/transformers/issues/12546/events
https://github.com/huggingface/transformers/issues/12546
938,282,211
MDU6SXNzdWU5MzgyODIyMTE=
12,546
Necessary resources for training a (small/tiny) LM from scratch?
{ "login": "brijow", "id": 11220949, "node_id": "MDQ6VXNlcjExMjIwOTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11220949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brijow", "html_url": "https://github.com/brijow", "followers_url": "https://api.github.com/users/brijow/followers", "following_url": "https://api.github.com/users/brijow/following{/other_user}", "gists_url": "https://api.github.com/users/brijow/gists{/gist_id}", "starred_url": "https://api.github.com/users/brijow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brijow/subscriptions", "organizations_url": "https://api.github.com/users/brijow/orgs", "repos_url": "https://api.github.com/users/brijow/repos", "events_url": "https://api.github.com/users/brijow/events{/privacy}", "received_events_url": "https://api.github.com/users/brijow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\ncould you please ask this question on the [forum](https://discuss.huggingface.co/) rather than here? We like to keep Github issues for bugs/feature requests.\r\n\r\nThanks!", "Sure, thanks @NielsRogge . Here is a [link to the post on the forum](https://discuss.huggingface.co/t/necessary-resources-for-training-a-small-tiny-lm-from-scratch/8139), cheers." ]
1,625
1,625
1,625
NONE
null
This is mostly a follow-up question regarding this Hugging Face blog post on training an LM (and a tokenizer) from scratch: https://huggingface.co/blog/how-to-train I think this may be an ideal approach to try out in my situation, but I'm wondering about cost and how much data I really need to train an LM from scratch on my domain-specific dataset. I'm quite new to the field and haven't read many papers on this subject yet, so I was hoping someone might be able to provide some ballpark estimates of the computing resources required to train some small LM(s) from scratch. I'd like to obtain a fine-tuned (or trained-from-scratch) domain-specific LM to serve as a backbone for various downstream NLP tasks on my domain-specific text data. I have been experimenting with the LM fine-tuning approach (i.e. fine-tuning BERT-based models on MLM before performing task-specific fine-tuning), but I'm curious about the training-from-scratch option if I can get a rough idea of the required compute resources/cost. Thanks very much in advance for any help / tips on unpacking this question.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12546/timeline
completed
null
null