url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64) | author_association (string) | active_lock_reason (string) | body (string) | reactions (dict) | timeline_url (string) | state_reason (string) | draft (bool) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/11035 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11035/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11035/comments | https://api.github.com/repos/huggingface/transformers/issues/11035/events | https://github.com/huggingface/transformers/issues/11035 | 848,976,468 | MDU6SXNzdWU4NDg5NzY0Njg= | 11,035 | 404 Client Error: Not Found for url: https://huggingface.co/%5CHuggingface-Sentiment-Pipeline/resolve/main/config.json | {
"login": "nithinreddyy",
"id": 56256685,
"node_id": "MDQ6VXNlcjU2MjU2Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/56256685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nithinreddyy",
"html_url": "https://github.com/nithinreddyy",
"followers_url": "https://api.github.com/users/nithinreddyy/followers",
"following_url": "https://api.github.com/users/nithinreddyy/following{/other_user}",
"gists_url": "https://api.github.com/users/nithinreddyy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nithinreddyy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nithinreddyy/subscriptions",
"organizations_url": "https://api.github.com/users/nithinreddyy/orgs",
"repos_url": "https://api.github.com/users/nithinreddyy/repos",
"events_url": "https://api.github.com/users/nithinreddyy/events{/privacy}",
"received_events_url": "https://api.github.com/users/nithinreddyy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are you able to successfully load your model with e.g. `AutoModel.from_pretrained(local_path)`?",
"> Are you able to successfully load your model with e.g. `AutoModel.from_pretrained(local_path)`?\r\n\r\nNo, same error.",
"Hi @nithinreddyy, could you share information about your setup so that we may investigate? Thanks.\r\nWhat happens if you remove the backward slash in your path? Is it a local path or is `Huggingface-Sentiment-Pipeline` in the root directory?",
"The code is working. The actual code is \r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline\r\nimport transformers\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained('directory')\r\ntoken = AutoTokenizer.from_pretrained('directory')\r\n\r\nclassifier = pipeline(task='sentiment-analysis', model=model, tokenizer=token)\r\n\r\nclassifier(\"my name is nithin\") #Run this code and it returns the output\r\n```\r\n\r\nTo save the 'sentiment-analysis' pipeline, try the below code\r\n\r\n```\r\nfrom transformers import pipeline\r\n\r\nclassifier = pipeline('sentiment-analysis')\r\nclassifier.save_pretrained('directory')\r\n```\r\n\r\nThat's it. Thank you.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | I'm trying to use the Hugging Face sentiment-analysis pipeline. I downloaded the pipeline using `save_pretrained(model)` and am trying to load it with the code below:
```
from transformers import pipeline
model = '\Huggingface-Sentiment-Pipeline'
classifier = pipeline(task='sentiment-analysis', model=model, tokenizer=model, from_pt=True)
```
The Huggingface-Sentiment-Pipeline directory contains the following 6 files:
```
-> Huggingface-Sentiment-Pipeline
-> config.json
-> modelcard.json
-> pytorch_model.bin
-> special_tokens_map.json
-> tokenizer_config.json
-> vocab.txt
```
The error I'm getting is given below
```
404 Client Error: Not Found for url: https://huggingface.co/%5CHuggingface-Sentiment-Pipeline/resolve/main/config.json
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
~\sentiment_pipeline\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
423 local_files_only=local_files_only,
--> 424 use_auth_token=use_auth_token,
425 )
~\sentiment_pipeline\lib\site-packages\transformers\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1085 use_auth_token=use_auth_token,
-> 1086 local_files_only=local_files_only,
1087 )
~\sentiment_pipeline\lib\site-packages\transformers\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1215 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1216 r.raise_for_status()
1217 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
~\sentiment_pipeline\lib\site-packages\requests\models.py in raise_for_status(self)
942 if http_error_msg:
--> 943 raise HTTPError(http_error_msg, response=self)
944
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/%5CHuggingface-Sentiment-Pipeline/resolve/main/config.json
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-7-5074b39a82b6> in <module>
----> 1 classifier = pipeline(task='sentiment-analysis', model=model, tokenizer=model, from_pt=True)
~\sentiment_pipeline\lib\site-packages\transformers\pipelines\__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)
338 model = get_default_model(targeted_task, framework, task_options)
339
--> 340 framework = framework or get_framework(model)
341
342 task_class, model_class = targeted_task["impl"], targeted_task[framework]
~\sentiment_pipeline\lib\site-packages\transformers\pipelines\base.py in get_framework(model, revision)
64 if isinstance(model, str):
65 if is_torch_available() and not is_tf_available():
---> 66 model = AutoModel.from_pretrained(model, revision=revision)
67 elif is_tf_available() and not is_torch_available():
68 model = TFAutoModel.from_pretrained(model, revision=revision)
~\sentiment_pipeline\lib\site-packages\transformers\models\auto\modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
768 if not isinstance(config, PretrainedConfig):
769 config, kwargs = AutoConfig.from_pretrained(
--> 770 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
771 )
772
~\sentiment_pipeline\lib\site-packages\transformers\models\auto\configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
366 {'foo': False}
367 """
--> 368 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
369
370 if "model_type" in config_dict:
~\sentiment_pipeline\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
434 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
435 )
--> 436 raise EnvironmentError(msg)
437
438 except json.JSONDecodeError:
OSError: Can't load config for '\Huggingface-Sentiment-Pipeline'. Make sure that:
- '\Huggingface-Sentiment-Pipeline' is a correct model identifier listed on 'https://huggingface.co/models'
- or '\Huggingface-Sentiment-Pipeline' is the correct path to a directory containing a config.json file
```
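For reference, a minimal sketch of the apparent fix (an assumption based on the discussion above: the leading backslash makes `transformers` treat the string as a hub model ID instead of a local directory, so pass a real path instead):

```
import os
from transformers import pipeline

# Use an actual path with no leading backslash; an absolute path avoids ambiguity.
model_dir = os.path.abspath("Huggingface-Sentiment-Pipeline")
classifier = pipeline(task="sentiment-analysis", model=model_dir, tokenizer=model_dir)
print(classifier("my name is nithin"))
```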
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11035/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11034 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11034/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11034/comments | https://api.github.com/repos/huggingface/transformers/issues/11034/events | https://github.com/huggingface/transformers/issues/11034 | 848,939,310 | MDU6SXNzdWU4NDg5MzkzMTA= | 11,034 | GPT-2 example is broken? | {
"login": "ba305",
"id": 35350330,
"node_id": "MDQ6VXNlcjM1MzUwMzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/35350330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ba305",
"html_url": "https://github.com/ba305",
"followers_url": "https://api.github.com/users/ba305/followers",
"following_url": "https://api.github.com/users/ba305/following{/other_user}",
"gists_url": "https://api.github.com/users/ba305/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ba305/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ba305/subscriptions",
"organizations_url": "https://api.github.com/users/ba305/orgs",
"repos_url": "https://api.github.com/users/ba305/repos",
"events_url": "https://api.github.com/users/ba305/events{/privacy}",
"received_events_url": "https://api.github.com/users/ba305/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! Sorry to hear the example doesn't work well for you. To be honest, it doesn't really make sense to try and generate a single token like it is done in that example. I have slightly modified the example so that it generates the 20 following tokens.\r\n\r\nAlso, I've removed the space at the end of the sequence because I believe it is there by mistake:\r\n\r\n```py\r\nfrom transformers import AutoModelWithLMHead, AutoTokenizer, top_k_top_p_filtering\r\nimport torch\r\nfrom torch.nn import functional as F\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"gpt2\")\r\nsequence = f\"Hugging Face is based in DUMBO, New York City, and\"\r\ninput_ids = tokenizer.encode(sequence, return_tensors=\"pt\")\r\n# get logits of last hidden state\r\ngenerated = input_ids\r\nfor i in range(20):\r\n next_token_logits = model(generated).logits[:, -1, :]\r\n # filter\r\n filtered_next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=50, top_p=1.0)\r\n # sample\r\n probs = F.softmax(filtered_next_token_logits, dim=-1)\r\n next_token = torch.multinomial(probs, num_samples=1)\r\n generated = torch.cat([generated, next_token], dim=-1)\r\n\r\nresulting_string = tokenizer.decode(generated.tolist()[0])\r\n\r\nprint(resulting_string)\r\n```\r\n\r\nRunning this gives me the following examples (not cherry-picked):\r\n\r\n```\r\nHugging Face is based in DUMBO, New York City, and is produced by Eltas & Co., Inc. (a wholly owned subsidiary of Eltas\r\nHugging Face is based in DUMBO, New York City, and focuses primarily on the music and entertainment industry, and is funded by the Hudson River Chamber of Commerce.\r\nHugging Face is based in DUMBO, New York City, and has aired in dozens of local, national and foreign programs, including The Brady Bunch, The Colbert\r\n\r\n```",
"Thanks a lot for your help Lysandre!\r\n\r\nRemoving the space at the end of the example sequence solves the issue. Now I am getting normal results. It would be great if you could update the website since I imagine other people will run into the same issue at some point!\r\n\r\nAlso, thanks for adding the code to generate 20 tokens. That is helpful as well, although I believe the main problem was the space at the end of the input sequence.\r\n\r\nThanks again for your prompt reply. Feel free to close the issue whenever you want",
"Great, nice to hear this fixes the issue! I've updated the docs on the `master` branch."
] | 1,617 | 1,617 | 1,617 | NONE | null | ## Environment info
- `transformers` version: I have had this issue with both 4.3.0 and 4.4.2 (and probably other versions as well)
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.0
- Using GPU in script?: No, I just tested it on the CPU, but it would probably also happen on the GPU
- Using distributed or parallel set-up in script?: No
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
- Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...): gpt2
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Hello, I am trying to run this example here: https://huggingface.co/transformers/task_summary.html#causal-language-modeling. When I run that code, exactly the same as it is on that page, I get strange/very bad results. Even when I change the input text, it still gives weird results (e.g., predicting empty spaces or strange characters). I also asked my coworker to try it on her computer, and she also got strange results.
I am planning to fine-tune GPT-2 for a different purpose later, but was a bit concerned because I couldn't even get this simple example demo to work. Thanks for your help!
Steps to reproduce the behavior:
1. Just run the exact example code that I linked above
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11034/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11033 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11033/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11033/comments | https://api.github.com/repos/huggingface/transformers/issues/11033/events | https://github.com/huggingface/transformers/issues/11033 | 848,936,573 | MDU6SXNzdWU4NDg5MzY1NzM= | 11,033 | RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3 | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am also having the same issue and I don't have input/output longer than 1024/.",
"Experiencing the same problem using the 2048 size with GPT-J."
] | 1,617 | 1,661 | 1,620 | NONE | null | Here I try to use GPT-2 to generate text from a prompt. I have several datasets; some of them, such as AG_NEWS and POP_NEWS, are made of short sentences, while YAHOO_NEWS consists of longer sentences, and that is when the error occurs.
Is there anything I should modify in my code?
Thanks.
```
from transformers import (
CTRLLMHeadModel,
CTRLTokenizer,
GPT2LMHeadModel,
GPT2Tokenizer,
OpenAIGPTLMHeadModel,
OpenAIGPTTokenizer,
TransfoXLLMHeadModel,
TransfoXLTokenizer,
XLMTokenizer,
XLMWithLMHeadModel,
XLNetLMHeadModel,
XLNetTokenizer,
)
class generation():
def __init__(self, model_name='gpt2',num_return_sequences=1):
self.model_name = model_name
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.MODEL_CLASSES = {
"gpt2": (GPT2LMHeadModel, GPT2Tokenizer),
"ctrl": (CTRLLMHeadModel, CTRLTokenizer),
"openai-gpt": (OpenAIGPTLMHeadModel, OpenAIGPTTokenizer),
"xlnet-base-cased": (XLNetLMHeadModel, XLNetTokenizer),
"transfo-xl": (TransfoXLLMHeadModel, TransfoXLTokenizer),
"xlm": (XLMWithLMHeadModel, XLMTokenizer),
}
self.length = 100
self.k = 0
self.p = 0.9
self.num_return_sequences = num_return_sequences
self.model_class, self.tokenizer_class = self.MODEL_CLASSES[self.model_name]
self.tokenizer = self.tokenizer_class.from_pretrained(self.model_name)
self.model = self.model_class.from_pretrained(self.model_name)
self.model.to(self.device)
if self.model_name == "xlnet-base-cased":
self.p=0.95
self.k=60
self.length = self.adjust_length_to_model(self.length, max_sequence_length=self.model.config.max_position_embeddings)
if self.model_name == 'ctrl':
self.temperature = 0.3
self.repetition_penalty = 1.2
else:
self.temperature = 1.0
self.repetition_penalty = 1.0
def adjust_length_to_model(self, length, max_sequence_length):
if length < 0 and max_sequence_length > 0:
length = max_sequence_length
elif 0 < max_sequence_length < length:
length = max_sequence_length # No generation bigger than model size
elif length < 0:
length = 1000 # avoid infinite loop
return length
def ctrl_label2prefix(self, label):
# https://github.com/salesforce/ctrl/blob/master/control_codes.py
'''
'Pregnancy Christianity Explain Fitness Saving Ask Ass Joke Questions Thoughts Retail
Feminism Writing Atheism Netflix Computing Opinion Alone Funny Gaming Human India Joker Diet
Legal Norman Tip Weight Movies Running Science Horror Confession Finance Politics Scary Support
Technologies Teenage Event Learned Notion Wikipedia Books Extract Confessions Conspiracy Links
Narcissus Relationship Relationships Reviews News Translation multilingual'
'''
return 'News'
if label in ('Sci/Tech', 'tech'):
return 'Technologies'
elif label in ('politics'):
return 'Politics'
elif label in ('Sports', 'sport'):
return 'Fitness'
else:
return 'News'
def augment(self, prompt_text):
if self.model_name == 'ctrl':
prefix = 'News '
else:
prefix = ''
encoded_prompt = self.tokenizer.encode(prefix + prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(self.device)
if encoded_prompt.size()[-1] == 0:
input_ids = None
else:
input_ids = encoded_prompt
output_sequences = self.model.generate(
input_ids=input_ids,
max_length= self.length + len(encoded_prompt[0]),
temperature=self.temperature,
top_k=self.k,
top_p=self.p,
repetition_penalty=self.repetition_penalty,
do_sample=True,
num_return_sequences=self.num_return_sequences,
)
# Decode text
text_generated = self.tokenizer.decode(output_sequences[0][len(encoded_prompt[0]):], clean_up_tokenization_spaces=True)
return text_generated
# unit test
'''
augmentor = generation('gpt2')
prompt_text = "Microsoft has said it will replace more than 14 million power cables for its Xbox consoles due to safety concerns."
prompt_text = "Versace art portfolio up for sale The art collection of murdered fashion designer Gianni Versace could fetch \
up to Β£9m ($17m) when it is auctioned in New York and \
London later this year. <eod> </s> <eos>"
augmentor.augment(prompt_text)
'''
```
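One likely cause, judging from the trace below, is that `max_length = self.length + len(encoded_prompt[0])` can exceed GPT-2's 1,024-token context window once the YAHOO_NEWS prompts get long. A minimal guard might look like this (a sketch, not a verified fix):

```
# Sketch: keep prompt + generated tokens within the model's context window.
max_ctx = self.model.config.max_position_embeddings  # 1024 for GPT-2
if encoded_prompt.shape[-1] + self.length > max_ctx:
    # Truncate the prompt from the left so there is room left to generate.
    encoded_prompt = encoded_prompt[:, -(max_ctx - self.length):]
```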
ERROR information:
> File "baseline_classifier.py", line 45, in run_benchmark
> ds.df_train['content_aug'] = ds.df_train['content'].map(lambda x: augmentor.augment(x))
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/pandas/core/series.py", line 3382, in map
> arg, na_action=na_action)
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/pandas/core/base.py", line 1218, in _map_values
> new_values = map_f(values, mapper)
> File "pandas/_libs/lib.pyx", line 2217, in pandas._libs.lib.map_infer
> File "baseline_classifier.py", line 45, in <lambda>
> ds.df_train['content_aug'] = ds.df_train['content'].map(lambda x: augmentor.augment(x))
> File "/workspace/user-workspace/topic_classification_augmentation/aug_generation.py", line 110, in augment
> num_return_sequences=self.num_return_sequences,
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
> return func(*args, **kwargs)
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1019, in generate
> **model_kwargs,
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/transformers/generation_utils.py", line 1486, in sample
> output_hidden_states=output_hidden_states,
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
> result = self.forward(*input, **kwargs)
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 917, in forward
> return_dict=return_dict,
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
> result = self.forward(*input, **kwargs)
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 760, in forward
> output_attentions=output_attentions,
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
> result = self.forward(*input, **kwargs)
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 296, in forward
> output_attentions=output_attentions,
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
> result = self.forward(*input, **kwargs)
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 241, in forward
> attn_outputs = self._attn(query, key, value, attention_mask, head_mask, output_attentions)
> File "/workspace/.conda/miniconda/lib/python3.7/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 176, in _attn
> w = torch.where(mask.bool(), w, self.masked_bias.to(w.dtype))
> RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11033/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11032 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11032/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11032/comments | https://api.github.com/repos/huggingface/transformers/issues/11032/events | https://github.com/huggingface/transformers/issues/11032 | 848,921,982 | MDU6SXNzdWU4NDg5MjE5ODI= | 11,032 | How to get masked word prediction for other languages | {
"login": "AnnaSou",
"id": 43326583,
"node_id": "MDQ6VXNlcjQzMzI2NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/43326583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnnaSou",
"html_url": "https://github.com/AnnaSou",
"followers_url": "https://api.github.com/users/AnnaSou/followers",
"following_url": "https://api.github.com/users/AnnaSou/following{/other_user}",
"gists_url": "https://api.github.com/users/AnnaSou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnnaSou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnnaSou/subscriptions",
"organizations_url": "https://api.github.com/users/AnnaSou/orgs",
"repos_url": "https://api.github.com/users/AnnaSou/repos",
"events_url": "https://api.github.com/users/AnnaSou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnnaSou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can filter the model hub on 'ru' (Russian) and 'fill-mask' (masked language modeling): https://huggingface.co/models?filter=ru&pipeline_tag=fill-mask\r\n\r\nrobert-base was trained on English text only, so it will not work for Russian. \r\n\r\nMaybe a good choice is this model: https://huggingface.co/blinoff/roberta-base-russian-v0",
"How about XLM-RoBERTa? That should be trained with multiple languages, but I got similar issues. \r\n\r\n```\r\nThe specified target token ` courageuse` does not exist in the model vocabulary. Replacing with `βcourage`.\r\n[{'sequence': '<s> Cette femme est belle.</s>', 'score': 0.002463364042341709, 'token': 21525, 'token_str': 'βbelle'}, {'sequence': '<s> Cette femme est courage.</s>', 'score': 4.064602762809955e-06, 'token': 116252, 'token_str': 'βcourage'}]\r\n```\r\n\r\n> You can filter the model hub on 'ru' (Russian) and 'fill-mask' (masked language modeling): https://huggingface.co/models?filter=ru&pipeline_tag=fill-mask\r\n> \r\n> robert-base was trained on English text only, so it will not work for Russian.\r\n> \r\n> Maybe a good choice is this model: https://huggingface.co/blinoff/roberta-base-russian-v0\r\n\r\n",
"Thank you for the reply! I second the comment above regarding XLM-Roberta. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | Hello,
I am trying to get masked-word predictions for languages other than English with RoBERTa or XLM-RoBERTa.
```
from transformers import pipeline
nlp = pipeline("fill-mask", model="roberta-base")
template = f"That woman is {nlp.tokenizer.mask_token}."
output = nlp(template)
nlp4 = pipeline("fill-mask", model="roberta-base")
nlp4(f"ΠΠ΅Π½ΡΠΈΠ½Ρ ΡΠ°Π±ΠΎΡΠ°ΡΡ {nlp4.tokenizer.mask_token}.")
```
The output for the English example is quite good, while the output for the Russian one does not make sense at all:
`[{'sequence': 'ΠΠ΅Π½ΡΠΈΠ½Ρ ΡΠ°Π±ΠΎΡΠ°ΡΡΠ°.', 'score': 0.2504434883594513, 'token': 26161, 'token_str': 'Π°'}, {'sequence': 'ΠΠ΅Π½ΡΠΈΠ½Ρ ΡΠ°Π±ΠΎΡΠ°ΡΡΡ.', 'score': 0.24665573239326477, 'token': 47015, 'token_str': 'Ρ'}, {'sequence': 'ΠΠ΅Π½ΡΠΈΠ½Ρ ΡΠ°Π±ΠΎΡΠ°ΡΡΡ.', 'score': 0.1454654186964035, 'token': 46800, 'token_str': 'Ρ'}, {'sequence': 'ΠΠ΅Π½ΡΠΈΠ½Ρ ΡΠ°Π±ΠΎΡΠ°ΡΡΠ΅.', 'score': 0.07919821888208389, 'token': 25482, 'token_str': 'Π΅'}, {'sequence': 'ΠΠ΅Π½ΡΠΈΠ½Ρ ΡΠ°Π±ΠΎΡΠ°ΡΡΠΈ.', 'score': 0.07401203364133835, 'token': 35328, 'token_str': 'ΠΈ'}]`
Neither "roberta-base" nor "xlm-roberta-base" work for Russian language example.
Maybe I am doing it wrong, but how would one use masked word prediction for other languages?
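For comparison, a sketch using the Russian-specific checkpoint mentioned in the replies (output not verified):

```
nlp_ru = pipeline("fill-mask", model="blinoff/roberta-base-russian-v0")
nlp_ru(f"ΠΠ΅Π½ΡΠΈΠ½Ρ ΡΠ°Π±ΠΎΡΠ°ΡΡ {nlp_ru.tokenizer.mask_token}.")
```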
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11032/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11031 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11031/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11031/comments | https://api.github.com/repos/huggingface/transformers/issues/11031/events | https://github.com/huggingface/transformers/issues/11031 | 848,921,632 | MDU6SXNzdWU4NDg5MjE2MzI= | 11,031 | Roberta and XLNet sentence pair training example | {
"login": "azizcu",
"id": 63375421,
"node_id": "MDQ6VXNlcjYzMzc1NDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/63375421?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/azizcu",
"html_url": "https://github.com/azizcu",
"followers_url": "https://api.github.com/users/azizcu/followers",
"following_url": "https://api.github.com/users/azizcu/following{/other_user}",
"gists_url": "https://api.github.com/users/azizcu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/azizcu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/azizcu/subscriptions",
"organizations_url": "https://api.github.com/users/azizcu/orgs",
"repos_url": "https://api.github.com/users/azizcu/repos",
"events_url": "https://api.github.com/users/azizcu/events{/privacy}",
"received_events_url": "https://api.github.com/users/azizcu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nThanks!"
] | 1,617 | 1,617 | 1,617 | NONE | null | I want to use RoBERTa and XLNet for sentence pair input task (like sentence similarity task pair input). Can you explain with code example? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11031/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11030 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11030/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11030/comments | https://api.github.com/repos/huggingface/transformers/issues/11030/events | https://github.com/huggingface/transformers/issues/11030 | 848,823,702 | MDU6SXNzdWU4NDg4MjM3MDI= | 11,030 | pipeline.from_pretrained | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,619 | 1,619 | CONTRIBUTOR | null | # π Feature request
Nearly everyone who uses the transformers library is aware of the `from_pretrained()` and `save_pretrained()` concept. The [Pipeline class](https://huggingface.co/transformers/main_classes/pipelines.html#parent-class-pipeline) currently only provides the `save_pretrained()` method, which can cause confusion for some users, as saving and loading a pipeline needs to be done like this:
```
from transformers import pipeline
TASK = 'something'
DIRECTORY='something'
classifier = pipeline(TASK)
classifier.save_pretrained(DIRECTORY)
c2 = pipeline(task = TASK, model=DIRECTORY, tokenizer=DIRECTORY)
```
This is probably not that obvious for people who just read the documentation and not the code. I suggest implementing the `from_pretrained()` method for the pipelines to make the library even more intuitive.
## Your contribution
There is actually not much to do since the tokenizer and model are loaded by the corresponding Auto classes. The only information missing to reconstruct the saved pipeline with `pipeline.from_pretrained(PATH)` is the task identifier. This information should be stored in a new separate file called `pipeline_config.json`.
I can provide a PR if you think this is a useful enhancement for the library @LysandreJik
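A rough sketch of what the proposed method could look like (everything below, including `pipeline_config.json`, is part of the proposal rather than an existing API):

```
import json
import os
from transformers import pipeline

def pipeline_from_pretrained(path):
    # Read the task identifier saved next to the model and tokenizer files.
    with open(os.path.join(path, "pipeline_config.json")) as f:
        task = json.load(f)["task"]
    return pipeline(task=task, model=path, tokenizer=path)
```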
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11030/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11029 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11029/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11029/comments | https://api.github.com/repos/huggingface/transformers/issues/11029/events | https://github.com/huggingface/transformers/pull/11029 | 848,798,224 | MDExOlB1bGxSZXF1ZXN0NjA3Njc4Nzg3 | 11,029 | Documentation about loading a fast tokenizer within Transformers | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for the suggestion @sgugger!"
] | 1,617 | 1,617 | 1,617 | MEMBER | null | This PR does two things:
- Allows loading a fast tokenizer from an instantiated `tokenizers` object
- Adds a page to document how to use these tokenizers within `transformers`
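A short sketch of the first point (the `tokenizer_object` keyword and the file name below are illustrative):

```
from tokenizers import Tokenizer
from transformers import PreTrainedTokenizerFast

raw_tokenizer = Tokenizer.from_file("tokenizer.json")  # a trained `tokenizers` tokenizer
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=raw_tokenizer)
print(fast_tokenizer("Hello world")["input_ids"])
```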
See [here](https://190138-155220641-gh.circle-artifacts.com/0/docs/_build/html/fast_tokenizers.html) for the generated docs | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11029/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11029",
"html_url": "https://github.com/huggingface/transformers/pull/11029",
"diff_url": "https://github.com/huggingface/transformers/pull/11029.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11029.patch",
"merged_at": 1617634276000
} |
https://api.github.com/repos/huggingface/transformers/issues/11028 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11028/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11028/comments | https://api.github.com/repos/huggingface/transformers/issues/11028/events | https://github.com/huggingface/transformers/issues/11028 | 848,769,061 | MDU6SXNzdWU4NDg3NjkwNjE= | 11,028 | Fine Tune GPT-NEO 2.7B | {
"login": "cppntn",
"id": 26765504,
"node_id": "MDQ6VXNlcjI2NzY1NTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cppntn",
"html_url": "https://github.com/cppntn",
"followers_url": "https://api.github.com/users/cppntn/followers",
"following_url": "https://api.github.com/users/cppntn/following{/other_user}",
"gists_url": "https://api.github.com/users/cppntn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cppntn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cppntn/subscriptions",
"organizations_url": "https://api.github.com/users/cppntn/orgs",
"repos_url": "https://api.github.com/users/cppntn/repos",
"events_url": "https://api.github.com/users/cppntn/events{/privacy}",
"received_events_url": "https://api.github.com/users/cppntn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Did you try to use `run_clm.py`?",
"@LysandreJik Yea, it's working. But i cannot fine-tune it using \"GPU\" in colab and the \"TPU\" gives me this error ```exception: process 0 terminated with signal SIGKILL```, I tried the solutions in the issues but still ",
"SIGKILL is probably an out of memory error, meaning you're running out of RAM. I'm not entirely sure colab gives you the hardware necessary to do such big trainings, even when using DeepSpeed.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | Hello everyone, is there a script to fine-tune this new model?
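For reference, a sketch of pointing the existing `run_clm.py` example (suggested in the replies) at this checkpoint; the flags are assumed from the standard language-modeling example, and a 2.7B-parameter model needs substantial memory:

```
python run_clm.py \
  --model_name_or_path EleutherAI/gpt-neo-2.7B \
  --train_file train.txt \
  --do_train \
  --per_device_train_batch_size 1 \
  --output_dir finetuned-gpt-neo
```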
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11028/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11027 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11027/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11027/comments | https://api.github.com/repos/huggingface/transformers/issues/11027/events | https://github.com/huggingface/transformers/pull/11027 | 848,767,936 | MDExOlB1bGxSZXF1ZXN0NjA3NjUzMTAy | 11,027 | Refactor AutoModel classes and add Flax Auto classes | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | COLLABORATOR | null | # What does this PR do?
This PR refactors the logic behind all the Auto model classes in one function that automatically builds those classes from a template. In passing, it uses this new function to build the auto classes for FLAX (at least the ones that have at least one model implemented). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11027/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11027",
"html_url": "https://github.com/huggingface/transformers/pull/11027",
"diff_url": "https://github.com/huggingface/transformers/pull/11027.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11027.patch",
"merged_at": 1617631889000
} |
https://api.github.com/repos/huggingface/transformers/issues/11026 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11026/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11026/comments | https://api.github.com/repos/huggingface/transformers/issues/11026/events | https://github.com/huggingface/transformers/pull/11026 | 848,754,983 | MDExOlB1bGxSZXF1ZXN0NjA3NjQyMjM1 | 11,026 | Add `examples/language_modeling/run_clm_no_trainer.py` | {
"login": "hemildesai",
"id": 8195444,
"node_id": "MDQ6VXNlcjgxOTU0NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemildesai",
"html_url": "https://github.com/hemildesai",
"followers_url": "https://api.github.com/users/hemildesai/followers",
"following_url": "https://api.github.com/users/hemildesai/following{/other_user}",
"gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions",
"organizations_url": "https://api.github.com/users/hemildesai/orgs",
"repos_url": "https://api.github.com/users/hemildesai/repos",
"events_url": "https://api.github.com/users/hemildesai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemildesai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sgugger Awesome, I'll create a new PR for it.",
"Merging this one then, thanks again!"
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds an example of fine-tuning a causal language model (without using `Trainer`) to showcase the functionality of the new accelerate library.
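The heart of the no-`Trainer` loop looks roughly like this (a condensed sketch of the accelerate pattern, assuming `model`, `optimizer`, and `train_dataloader` are already built; see the full script for details):

```
from accelerate import Accelerator

accelerator = Accelerator()
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

for batch in train_dataloader:
    outputs = model(**batch)
    accelerator.backward(outputs.loss)  # replaces the usual loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```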
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11026/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11026",
"html_url": "https://github.com/huggingface/transformers/pull/11026",
"diff_url": "https://github.com/huggingface/transformers/pull/11026.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11026.patch",
"merged_at": 1617640073000
} |
https://api.github.com/repos/huggingface/transformers/issues/11025 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11025/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11025/comments | https://api.github.com/repos/huggingface/transformers/issues/11025/events | https://github.com/huggingface/transformers/pull/11025 | 848,727,782 | MDExOlB1bGxSZXF1ZXN0NjA3NjE5NzY0 | 11,025 | fixed typo: logging instead of logger | {
"login": "versis",
"id": 5721737,
"node_id": "MDQ6VXNlcjU3MjE3Mzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5721737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/versis",
"html_url": "https://github.com/versis",
"followers_url": "https://api.github.com/users/versis/followers",
"following_url": "https://api.github.com/users/versis/following{/other_user}",
"gists_url": "https://api.github.com/users/versis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/versis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versis/subscriptions",
"organizations_url": "https://api.github.com/users/versis/orgs",
"repos_url": "https://api.github.com/users/versis/repos",
"events_url": "https://api.github.com/users/versis/events{/privacy}",
"received_events_url": "https://api.github.com/users/versis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11025/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11025",
"html_url": "https://github.com/huggingface/transformers/pull/11025",
"diff_url": "https://github.com/huggingface/transformers/pull/11025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11025.patch",
"merged_at": 1617369742000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/11024 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11024/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11024/comments | https://api.github.com/repos/huggingface/transformers/issues/11024/events | https://github.com/huggingface/transformers/pull/11024 | 848,717,134 | MDExOlB1bGxSZXF1ZXN0NjA3NjEwNjQ1 | 11,024 | Add a script to check inits are consistent | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | COLLABORATOR | null | # What does this PR do?
Most inits in the project define the same objects twice (once in `_import_structure` and once under `TYPE_CHECKING`) so that imports stay fast and objects are only loaded when actually needed. The problem is that those two halves have a tendency to diverge, as contributors do not always pay attention to keeping them exactly in sync.
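For illustration, the two-half init pattern in question looks roughly like this (a toy example; the names are made up):

```
from typing import TYPE_CHECKING

# The two halves below must list exactly the same names.
_import_structure = {"modeling_foo": ["FooModel"]}

if TYPE_CHECKING:
    from .modeling_foo import FooModel  # must mirror _import_structure
else:
    import sys
    sys.modules[__name__] = _LazyModule(__name__, _import_structure)  # lazy-import helper (illustrative)
```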
Well not anymore. Introducing `utils/check_inits.py`. This script will parse all the inits with two halves and return an error with a delightful and informative message telling the user what they did wrong. It is enforced in the CI and added to `make fixup` and `make quality`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11024/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11024",
"html_url": "https://github.com/huggingface/transformers/pull/11024",
"diff_url": "https://github.com/huggingface/transformers/pull/11024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11024.patch",
"merged_at": 1617583294000
} |
https://api.github.com/repos/huggingface/transformers/issues/11023 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11023/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11023/comments | https://api.github.com/repos/huggingface/transformers/issues/11023/events | https://github.com/huggingface/transformers/issues/11023 | 848,680,168 | MDU6SXNzdWU4NDg2ODAxNjg= | 11,023 | Strange ValueError with GPT-2 | {
"login": "AI-Guru",
"id": 32195399,
"node_id": "MDQ6VXNlcjMyMTk1Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/32195399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AI-Guru",
"html_url": "https://github.com/AI-Guru",
"followers_url": "https://api.github.com/users/AI-Guru/followers",
"following_url": "https://api.github.com/users/AI-Guru/following{/other_user}",
"gists_url": "https://api.github.com/users/AI-Guru/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AI-Guru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AI-Guru/subscriptions",
"organizations_url": "https://api.github.com/users/AI-Guru/orgs",
"repos_url": "https://api.github.com/users/AI-Guru/repos",
"events_url": "https://api.github.com/users/AI-Guru/events{/privacy}",
"received_events_url": "https://api.github.com/users/AI-Guru/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, could you give a reproducible code example?",
"With pleasure!\r\n\r\n```\r\nimport tensorflow as tf\r\nfrom transformers import GPT2Config, TFGPT2LMHeadModel\r\nimport numpy as np\r\n\r\n# Prepare the dataset.\r\ninputs = np.random.randint(0, 128, (1025, 256))\r\nlabels = np.random.randint(0, 128, (1025, 256))\r\ndataset = tf.data.Dataset.from_tensor_slices((inputs, labels))\r\ndataset = dataset.shuffle(1000).batch(32, drop_remainder=True)\r\n\r\n# Create the model configuration.\r\nmodel_config = GPT2Config(\r\n vocab_size=128,\r\n bos_token_id=0,\r\n eos_token_id=127,\r\n n_head=8,\r\n n_layer=6,\r\n n_embd=512,\r\n n_positions=2048\r\n)\r\nprint(model_config)\r\n\r\n# Create the model.\r\nmodel = TFGPT2LMHeadModel(model_config)\r\n\r\n# Compile the model.\r\noptimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)\r\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmetric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')\r\nmodel.compile(\r\n optimizer=optimizer,\r\n loss=[loss, *[None] * model.config.n_layer],\r\n metrics=[metric]\r\n)\r\n\r\n# Train the model.\r\nhistory = model.fit(\r\n dataset, \r\n epochs=10,\r\n)\r\n\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.0 (False)
- Using GPU in script?: No.
- Using distributed or parallel set-up in script?: No.
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT-2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Create and train GPT-2 with 8 heads and batch size 32.
I get an error message when training GPT-2. When the number of heads and the batch size are the same, it works. It looks like a shape check is wrong. See the error message below.
```
File "train.py", line 66, in <module>
history = model.fit(
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1100, in fit
tmp_logs = self.train_function(iterator)
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 725, in _initialize
self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3196, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:805 train_function *
return step_function(self, iterator)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:795 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:788 run_step **
outputs = model.train_step(data)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:758 train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/engine/compile_utils.py:408 update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/utils/metrics_utils.py:90 decorated
update_op = update_state_fn(*args, **kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:177 update_state_fn
return ag_update_state(*args, **kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:618 update_state **
matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/keras/metrics.py:3315 sparse_categorical_accuracy
return math_ops.cast(math_ops.equal(y_true, y_pred), K.floatx())
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py:1679 equal
return gen_math_ops.equal(x, y, name=name)
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py:3177 equal
_, _, _op, _outputs = _op_def_library._apply_op_helper(
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py:748 _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py:590 _create_op_internal
return super(FuncGraph, self)._create_op_internal( # pylint: disable=protected-access
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:3528 _create_op_internal
ret = Operation(
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:2015 __init__
self._c_op = _create_c_op(self._graph, node_def, inputs,
/Users/tristanbehrens/Development/python-venvs/tf2-p38/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:1856 _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 32 and 8 for '{{node Equal_1}} = Equal[T=DT_FLOAT, incompatible_shape_error=true](Cast_6, Cast_7)' with input shapes: [32,301], [2,32,8,301].
```
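For reference, a possible reading of the failure (editor's note): the second shape `[2, 32, 8, 301]` matches a cached key/value tensor rather than the logits, which suggests Keras is applying the accuracy metric to every model output, not just the language-model logits; when `n_head` equals the batch size the shapes happen to line up, which would explain why that configuration appears to work. A sketch of a workaround, under the assumption that disabling the cache removes the extra outputs:
```python
# Editor's sketch (untested assumption): with the cache disabled, the model
# returns only the LM logits, so the metric is compared against a single output.
model.config.use_cache = False
model.compile(
    optimizer=optimizer,
    loss=loss,
    metrics=[metric],
)
```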
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11023/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11022 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11022/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11022/comments | https://api.github.com/repos/huggingface/transformers/issues/11022/events | https://github.com/huggingface/transformers/issues/11022 | 848,679,174 | MDU6SXNzdWU4NDg2NzkxNzQ= | 11,022 | cannot import name 'AutoModelForSequenceClassification' from 'transformers' | {
"login": "nithinreddyy",
"id": 56256685,
"node_id": "MDQ6VXNlcjU2MjU2Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/56256685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nithinreddyy",
"html_url": "https://github.com/nithinreddyy",
"followers_url": "https://api.github.com/users/nithinreddyy/followers",
"following_url": "https://api.github.com/users/nithinreddyy/following{/other_user}",
"gists_url": "https://api.github.com/users/nithinreddyy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nithinreddyy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nithinreddyy/subscriptions",
"organizations_url": "https://api.github.com/users/nithinreddyy/orgs",
"repos_url": "https://api.github.com/users/nithinreddyy/repos",
"events_url": "https://api.github.com/users/nithinreddyy/events{/privacy}",
"received_events_url": "https://api.github.com/users/nithinreddyy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please tell us which version you are using and also check [your SO question](https://stackoverflow.com/questions/66906652/how-to-download-hugging-face-sentiment-analysis-pipeline-to-use-it-offline/66907181#comment118269245_66907181) again because it was revised.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am trying the below codes\r\n\r\n```\r\nfrom transformers import AutoTokenizer\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom scipy.special import softmax\r\n```\r\n\r\nhowever, getting the below errors in jupyter notebook\r\n\r\n`---------------------------------------------------------------------------\r\nImportError Traceback (most recent call last)\r\nCell In[49], line 2\r\n 1 from transformers import AutoTokenizer\r\n----> 2 from transformers import AutoModelForSequenceClassification\r\n 3 from scipy.special import softmax\r\n\r\nImportError: cannot import name 'AutoModelForSequenceClassification' from 'transformers' (C:\\Users\\Abhik\\anaconda3\\Lib\\site-packages\\transformers\\__init__.py)`\r\n\r\n\r\nare there any changes to it? I am using jupyter notebbok 7"
] | 1,617 | 1,696 | 1,620 | NONE | null | ```
from transformers import pipeline
classifier = pipeline('sentiment-analysis') #This code will download the pipeline
classifier('We are very happy to show you the 🤗 Transformers library.')
classifier.save_pretrained('/some/directory')
```
I'm trying to save the model so that I can perform the sentiment-analysis operation offline:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
t = AutoTokenizer.from_pretrained('/some/directory')
m = AutoModelForSequenceClassification.from_pretrained('/some/directory')
c2 = pipeline(task='sentiment-analysis', model=m, tokenizer=t)
```
I'm facing an import error in my Jupyter notebook, as given below:
`**cannot import name 'AutoModelForSequenceClassification' from 'transformers'**` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11022/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11021 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11021/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11021/comments | https://api.github.com/repos/huggingface/transformers/issues/11021/events | https://github.com/huggingface/transformers/issues/11021 | 848,651,434 | MDU6SXNzdWU4NDg2NTE0MzQ= | 11,021 | Module Not found: datasets_modules.datasets.output | {
"login": "ashleylew",
"id": 68515763,
"node_id": "MDQ6VXNlcjY4NTE1NzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/68515763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashleylew",
"html_url": "https://github.com/ashleylew",
"followers_url": "https://api.github.com/users/ashleylew/followers",
"following_url": "https://api.github.com/users/ashleylew/following{/other_user}",
"gists_url": "https://api.github.com/users/ashleylew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashleylew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashleylew/subscriptions",
"organizations_url": "https://api.github.com/users/ashleylew/orgs",
"repos_url": "https://api.github.com/users/ashleylew/repos",
"events_url": "https://api.github.com/users/ashleylew/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashleylew/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ashleylew were you able to fix this? Could you please share what did?\r\n",
"I was just loading the data incorrectly. I misunderstood how to load custom data in and was treating it like one of Hugging Face's datasets rather than a custom one, but once I fixed that, everything was fine! I can give more details if necessary.",
"I'll check that part out in my script then, thanks!\r\n",
"@ashleylew Hi I'm getting the same error, can you share the data loading script? I tried doing stuff but it doesn't work for me. \r\nThanks\r\n",
"Yeah! So in my command script that I posted here, I used ` --dataset_name data/output.jsonl \\ ` but that command is for pre-loaded data sets, whereas mine is a custom one. So instead you'll want to use: \r\n\r\n```\r\n --train_file train.json \\\r\n --validation_file validation.json \\\r\n```"
] | 1,617 | 1,618 | 1,617 | NONE | null | ## Environment info
- `transformers` version: 4.5.0.dev0
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: not sure
- Using distributed or parallel set-up in script?: not sure
### Who can help
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): BART seq2seq
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. "Install from source" method
2. ran this command, where "data/output.jsonl" is my dataset:
```
python examples/seq2seq/run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang de \
--source_prefix "Translate English to Logical Forms: " \
--dataset_name data/output.jsonl \
--output_dir /tmp/tst-translation \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
3. Got the following error:
```
ModuleNotFoundError: No module named 'datasets_modules.datasets.output'
```
At first it told me that "datasets" was not installed, so I ran `pip install datasets`, and that worked fine. Then I got this error and haven't been able to figure out what it means or how to fix it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11021/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11020 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11020/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11020/comments | https://api.github.com/repos/huggingface/transformers/issues/11020/events | https://github.com/huggingface/transformers/issues/11020 | 848,566,666 | MDU6SXNzdWU4NDg1NjY2NjY= | 11,020 | Trainer API crashes GPUs | {
"login": "dmitriydligach",
"id": 5121609,
"node_id": "MDQ6VXNlcjUxMjE2MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5121609?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmitriydligach",
"html_url": "https://github.com/dmitriydligach",
"followers_url": "https://api.github.com/users/dmitriydligach/followers",
"following_url": "https://api.github.com/users/dmitriydligach/following{/other_user}",
"gists_url": "https://api.github.com/users/dmitriydligach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmitriydligach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmitriydligach/subscriptions",
"organizations_url": "https://api.github.com/users/dmitriydligach/orgs",
"repos_url": "https://api.github.com/users/dmitriydligach/repos",
"events_url": "https://api.github.com/users/dmitriydligach/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmitriydligach/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! This is a weird error, I haven't seen it before. I don't know how you've setup your installation and which CUDA version is torch using, but I believe it still has incompatibilities with CUDA 11.2. If you installed it as a wheel I think CUDA is included and it doesn't really matter, but as the only thing I see from your report is CUDA version 11.2 I can't help but wonder if that's the issue.\r\n\r\nIn any case I doubt that it's linked to `transformers`, have you checked the following issues on the PyTorch github? https://github.com/pytorch/pytorch/issues/31702 and https://github.com/pytorch/pytorch/issues/27837\r\n\r\nIt seems that memory can be an issue, but given the size of a Quadro 8000 it doubt that's the issue here ...",
"@LysandreJik Thank you for getting back to me so quickly. I just checked which CUDA version torch is seeing:\r\n\r\n>>> torch.__version__\r\n'1.7.1'\r\n>>> torch.version.cuda\r\n'11.1'\r\n\r\nI'm surprised that it's not CUDA 11.2 which is what nvidia-smi shows. Does this information help? \r\n\r\nThis doesn't seem like a GPU memory issues because the example script runs fine for a few minutes and I see (using nvidia-smi) that GPU memory is not being fully used.",
"> but I believe it still has incompatibilities with CUDA 11.2.\r\n\r\nCorrect: https://github.com/pytorch/pytorch/issues/50232#issuecomment-777703998\r\n\r\n> RuntimeError: CUDA error: unspecified launch failure\r\n\r\nDue to its async nature, often the only way to get to see the real error is to run pytorch with env var: `CUDA_LAUNCH_BLOCKING=1`\r\n\r\nPerhaps if you do that you will get better information.\r\n\r\nAlso please check the output of `dmesg -T` - sometimes the `nvidia` kernel module throws a kernel-level traceback in system logs.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Responding to avoid making this issue stale: \r\n\r\nThis issue has not been resolved. I tried a number of different configurations including different versions of pytorch, but it didn't help.",
"Care to try with CUDA_LAUNCH_BLOCKING as suggested in https://github.com/huggingface/transformers/issues/11020#issuecomment-812960095",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,623 | 1,623 | NONE | null | ## Environment info
- `transformers` version: 4.5.0.dev0
- Platform: Ubuntu 20.04.2 LTS
- Python version: Python 3.8.5
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
My scripts that use the Trainer API crash the GPUs on a Linux server that has 4 Quadro RTX 8000 GPUs (NVIDIA-SMI 460.39, Driver Version: 460.39, CUDA Version: 11.2). To understand whether this is a problem on my end, I installed the Hugging Face examples as described at
https://huggingface.co/transformers/examples.html.
I then run
python3 examples/seq2seq/run_summarization.py \
> --model_name_or_path t5-large \
> --do_train \
> --do_eval \
> --dataset_name cnn_dailymail \
> --dataset_config "3.0.0" \
> --source_prefix "summarize: " \
> --output_dir /tmp/tst-summarization \
> --per_device_train_batch_size=2 \
> --per_device_eval_batch_size=2 \
> --overwrite_output_dir \
> --predict_with_generate
After this script runs for a few minutes (and I can see that the GPUs are being utilized when I run nvidia-smi), all GPUs crash with the following error:
Traceback (most recent call last):
File "examples/seq2seq/run_summarization.py", line 591, in <module>
main()
File "examples/seq2seq/run_summarization.py", line 529, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/dima/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1120, in train
tr_loss += self.training_step(model, inputs)
File "/home/dima/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1542, in training_step
loss.backward()
File "/usr/lib/python3/dist-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/usr/lib/python3/dist-packages/torch/autograd/__init__.py", line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: CUDA error: unspecified launch failure
When I run nvidia-smi, I get:
Unable to determine the device handle for GPU 0000:40:00.0: Unknown Error
Rebooting the server helps to restore the GPUs, but the same problem happens again if I try to run the example script above.
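For reference, the debugging steps suggested in the comments above can be combined like this (sketch; same arguments as the command at the top of this report):
```
CUDA_LAUNCH_BLOCKING=1 python3 examples/seq2seq/run_summarization.py \
    --model_name_or_path t5-large --do_train --do_eval \
    --dataset_name cnn_dailymail --dataset_config "3.0.0" \
    --source_prefix "summarize: " --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size=2 --per_device_eval_batch_size=2 \
    --overwrite_output_dir --predict_with_generate
dmesg -T | tail -n 50
```
`CUDA_LAUNCH_BLOCKING=1` makes kernel launches synchronous so the real CUDA error surfaces at its call site, and `dmesg -T` may show a driver-level traceback from the `nvidia` kernel module.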
Please help! :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11020/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11019 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11019/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11019/comments | https://api.github.com/repos/huggingface/transformers/issues/11019/events | https://github.com/huggingface/transformers/issues/11019 | 848,543,462 | MDU6SXNzdWU4NDg1NDM0NjI= | 11,019 | Enable multiple `eval_dataset` in `Trainer` API | {
"login": "simonschoe",
"id": 53626067,
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonschoe",
"html_url": "https://github.com/simonschoe",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @sgugger ",
"This would require quite a lot of changes in the main Trainer for a use case that is a bit unique. You should subclass the `Trainer` for this problem and just add a second `evaluate` in the training loop for your second dataset.",
"@sgugger thanks for the reply! Any chance you could provide a little guidance of how this would look like? Looking at the source code of `Trainer` I admit I feel a little bit lost..."
] | 1,617 | 1,617 | null | NONE | null | # 🚀 Feature request
Allow two or more (equally long) validation sets to be passed to the `Trainer` API, to be evaluated sequentially every `eval_steps` steps.
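For illustration, here is a minimal sketch along the lines suggested in the comments above, i.e. subclassing `Trainer` (editor's addition; `MultiEvalTrainer` and `extra_eval_datasets` are hypothetical names, and the `evaluate` signature assumed here is `evaluate(eval_dataset=None, ignore_keys=None, metric_key_prefix="eval")`):
```python
from transformers import Trainer

class MultiEvalTrainer(Trainer):
    """Sketch: run evaluation over one primary and several extra datasets."""

    def __init__(self, *args, extra_eval_datasets=None, **kwargs):
        super().__init__(*args, **kwargs)
        # mapping of name -> dataset, e.g. {"future": future_ds, "domain": domain_ds}
        self.extra_eval_datasets = extra_eval_datasets or {}

    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        # primary validation set, logged under the usual "eval_*" keys
        metrics = super().evaluate(eval_dataset, ignore_keys, metric_key_prefix)
        # each extra set is logged under its own prefix, e.g. "eval_future_loss"
        for name, dataset in self.extra_eval_datasets.items():
            metrics.update(
                super().evaluate(dataset, ignore_keys, metric_key_prefix=f"eval_{name}")
            )
        return metrics
```
With `evaluation_strategy="steps"`, each scheduled evaluation would then log a loss per validation set.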
## Motivation
You can find my motivation in this [thread](https://discuss.huggingface.co/t/use-trainer-api-with-two-valiation-sets/5212) and the referenced paper. My idea would be to evaluate language model pre-training on an overlapping validation set (coming from the same data distribution as the training set) and a non-overlapping validation set (sampled from future periods or another domain). Ideally, I would like to track and log the validation loss during pre-training for both validation sets. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11019/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11018 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11018/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11018/comments | https://api.github.com/repos/huggingface/transformers/issues/11018/events | https://github.com/huggingface/transformers/issues/11018 | 848,537,240 | MDU6SXNzdWU4NDg1MzcyNDA= | 11,018 | T5 documentation for computing pretraining loss seems to have a mistake | {
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @dorost1234,\r\n\r\nyou're right to be fully correct we should set the `<extra_id_0>` to `-100`. Maybe we could add the following lines to the doc\r\n\r\n```python\r\n# make sure no loss is computed on sentinel tokens. Sentinel tokens are in `additional_special_tokens`.\r\nlabels = [label if label not in tokenizer.additional_special_tokens else -100]\r\n```\r\n\r\nWould you like (and have the time) to open a PR for this to improve the docs maybe? Otherwise I can do it as well! :-) ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,621 | 1,621 | NONE | null | Dear @patrickvonplaten
The T5 documentation for computing the pretraining loss seems to have a mistake where it discusses the loss formulation:
https://huggingface.co/transformers/model_doc/t5.html?highlight=decoder_input_ids
```
input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
```
1) Regarding the loss, the mT5 paper states: "T5 is pre-trained on a masked language modeling “span-corruption” objective, where consecutive spans of input tokens are replaced with a mask token and the model is trained to **reconstruct the masked-out tokens.**"
https://arxiv.org/pdf/2010.11934.pdf
So I believe that, for pretraining, you need to set the sentinel tokens in `labels` to -100 before computing the loss in this example (see the sketch after this list).
2) Based on other examples in the repository, I think one needs to apply `shift_right` to compute the correct `decoder_input_ids` before modifying the labels for the loss (i.e., before replacing tokens with -100), and pass them to the model as well; please correct me if I'm mistaken.
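For concreteness, a minimal sketch combining both points with the example quoted above (editor's illustration: `input_ids` comes from the snippet above, `_shift_right` is a private helper, and the exact masking policy is precisely what this issue asks to have documented):
```python
import torch

# ids of the sentinel tokens <extra_id_0>, <extra_id_1>, ...
sentinel_ids = set(tokenizer.convert_tokens_to_ids(tokenizer.additional_special_tokens))
labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
# point 2): build the decoder inputs *before* masking the labels
decoder_input_ids = model._shift_right(labels)
# point 1): exclude sentinel tokens from the loss
labels = torch.tensor(
    [[tok if tok not in sentinel_ids else -100 for tok in row] for row in labels.tolist()]
)
loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels).loss
```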
I would greatly appreciate the documentation being updated to show the correct procedure.
thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11018/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11017 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11017/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11017/comments | https://api.github.com/repos/huggingface/transformers/issues/11017/events | https://github.com/huggingface/transformers/issues/11017 | 848,524,306 | MDU6SXNzdWU4NDg1MjQzMDY= | 11,017 | Cannot run the gpt neo 2.7B example | {
"login": "donno2048",
"id": 61805754,
"node_id": "MDQ6VXNlcjYxODA1NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/61805754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donno2048",
"html_url": "https://github.com/donno2048",
"followers_url": "https://api.github.com/users/donno2048/followers",
"following_url": "https://api.github.com/users/donno2048/following{/other_user}",
"gists_url": "https://api.github.com/users/donno2048/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donno2048/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donno2048/subscriptions",
"organizations_url": "https://api.github.com/users/donno2048/orgs",
"repos_url": "https://api.github.com/users/donno2048/repos",
"events_url": "https://api.github.com/users/donno2048/events{/privacy}",
"received_events_url": "https://api.github.com/users/donno2048/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @donno2048. the GPT Neo is available on the master branch, and is not yet in a release. I invite you to install transformers from source, with the following:\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"Thanks",
"Does the min_length param work for you? I do the same as above and it doesn't seem to change anything "
] | 1,617 | 1,622 | 1,617 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Windows and Linux (using wsl)
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): 2.4.1 (False)
Library:
- text generation: @patrickvonplaten
- pipelines: @LysandreJik
## Information
When running the example for gpt-neo i.e.
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
```
I get this:
```
Downloading: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1.46k/1.46k [00:00<00:00, 2.27MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/donno2048/.local/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 344, in pipeline
framework = framework or get_framework(model)
File "/home/donno2048/.local/lib/python3.8/site-packages/transformers/pipelines/base.py", line 71, in get_framework
model = AutoModel.from_pretrained(model, revision=revision)
File "/home/donno2048/.local/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 809, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/home/donno2048/.local/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 389, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
KeyError: 'gpt_neo'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11017/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11017/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11016 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11016/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11016/comments | https://api.github.com/repos/huggingface/transformers/issues/11016/events | https://github.com/huggingface/transformers/issues/11016 | 848,490,060 | MDU6SXNzdWU4NDg0OTAwNjA= | 11,016 | Add new CANINE model | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Update on that: model and checkpoints are released:\r\n\r\nhttps://github.com/google-research/language/tree/master/language/canine\r\n\r\n:hugs: ",
"Hi @stefan-it, thanks for the update.\r\nDo you know how we can use those pre-trained tensorflow checkpoints to get the pooled text representations from CANINE model?\r\nThanks",
"any updates on this ? ",
"Hi, \r\n\r\nI've started working on this. Forward pass in PyTorch is working, and giving me the same output tensors as the TF implementation on the same input data.\r\n\r\nWill open a PR soon",
"Hi @dhgarrette,\r\n\r\nI don't want to spam the CANINE PR with this question/discussion, so I'm asking it here in this issue π
\r\n\r\nSo I would like to use CANINE for token classification (I'm currently implementing it into Flair framwork...), and for that reason tokenized input is passed to the model. For token classification using e.g. BERT one would use the first subword as \"pooling strategy\". But when using CANINE and following the subword \"analogy\", using the embedding of the first - let's say - character is a good strategy (instead of e.g. `mean`) π€ "
] | 1,617 | 1,625 | 1,625 | COLLABORATOR | null | # 🌟 New model addition
## Model description
Google recently proposed a new **C**haracter **A**rchitecture with **N**o tokenization **I**n **N**eural **E**ncoders (CANINE). Not only is the title exciting:
> Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model's ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters.
Overview of the architecture:

Paper is available [here](https://arxiv.org/abs/2103.06874).
We heavily need this architecture in Transformers (RIP subword tokenization)!
The first author (Jonathan Clark) said on [Twitter](https://twitter.com/JonClarkSeattle/status/1377505048029134856) that the model and code will be released in April :partying_face:
## Open source status
* [x] the model implementation is available: [here](https://caninemodel.page.link/code)
* [x] the model weights are available: [here](https://caninemodel.page.link/code)
* [x] who are the authors: @jhclark-google, @dhgarrette, @jwieting (not sure)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11016/reactions",
"total_count": 8,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 5
} | https://api.github.com/repos/huggingface/transformers/issues/11016/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11015 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11015/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11015/comments | https://api.github.com/repos/huggingface/transformers/issues/11015/events | https://github.com/huggingface/transformers/pull/11015 | 848,392,439 | MDExOlB1bGxSZXF1ZXN0NjA3MzM3OTc2 | 11,015 | added new notebook and merge of trainer | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
* Adds a new Notebook for SageMaker
* Adjusts documentation for the latest merge of `SageMakerTrainer` and `Trainer` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11015/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11015",
"html_url": "https://github.com/huggingface/transformers/pull/11015",
"diff_url": "https://github.com/huggingface/transformers/pull/11015.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11015.patch",
"merged_at": 1617311627000
} |
https://api.github.com/repos/huggingface/transformers/issues/11014 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11014/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11014/comments | https://api.github.com/repos/huggingface/transformers/issues/11014/events | https://github.com/huggingface/transformers/issues/11014 | 848,375,119 | MDU6SXNzdWU4NDgzNzUxMTk= | 11,014 | OSError: Can't load config for '/content/wav2vec2-large-xlsr-asr-demo'. Make sure that: | {
"login": "Kowsher",
"id": 16461536,
"node_id": "MDQ6VXNlcjE2NDYxNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/16461536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kowsher",
"html_url": "https://github.com/Kowsher",
"followers_url": "https://api.github.com/users/Kowsher/followers",
"following_url": "https://api.github.com/users/Kowsher/following{/other_user}",
"gists_url": "https://api.github.com/users/Kowsher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kowsher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kowsher/subscriptions",
"organizations_url": "https://api.github.com/users/Kowsher/orgs",
"repos_url": "https://api.github.com/users/Kowsher/repos",
"events_url": "https://api.github.com/users/Kowsher/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kowsher/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you `ls` what's in `/content/wav2vec2-large-xlsr-asr-demo`?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | I'm using
pip install transformers==4.4.2
After completing the ASR training process, I cannot load the trained model from my local storage, even though the path is right; I can, however, load it from the Hugging Face hub.
model = Wav2Vec2ForCTC.from_pretrained("/content/wav2vec2-large-xlsr-asr-demo").to("cuda")
The error:
OSError: Can't load config for '/content/wav2vec2-large-xlsr-asr-demo'. Make sure that:
- '/content/wav2vec2-large-xlsr-asr-demo' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/content/wav2vec2-large-xlsr-asr-demo' is the correct path to a directory containing a config.json file | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11014/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11013 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11013/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11013/comments | https://api.github.com/repos/huggingface/transformers/issues/11013/events | https://github.com/huggingface/transformers/issues/11013 | 848,349,453 | MDU6SXNzdWU4NDgzNDk0NTM= | 11,013 | use `BaseModelOutput` as common interface for all different `BaseModelOutputWith*`? | {
"login": "JoanFM",
"id": 19825685,
"node_id": "MDQ6VXNlcjE5ODI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoanFM",
"html_url": "https://github.com/JoanFM",
"followers_url": "https://api.github.com/users/JoanFM/followers",
"following_url": "https://api.github.com/users/JoanFM/following{/other_user}",
"gists_url": "https://api.github.com/users/JoanFM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoanFM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoanFM/subscriptions",
"organizations_url": "https://api.github.com/users/JoanFM/orgs",
"repos_url": "https://api.github.com/users/JoanFM/repos",
"events_url": "https://api.github.com/users/JoanFM/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoanFM/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,617 | 1,617 | null | NONE | null | Hello team,
I have been taking a look at the different model output classes, and I wonder whether it would make sense to have all the `BaseModelOutputWith*` flavours inherit from `BaseModelOutput`, instead of only from `ModelOutput`.
https://github.com/huggingface/transformers/blob/c301c26370dfa48f6a6d0408b5bb9eb70ca831b3/src/transformers/modeling_outputs.py#L24
We are trying to build a wrapper around many of the public models hosted on Hugging Face, and it would be useful to know whether we can assume that all potential model `outputs` will contain `hidden_states`. Since they all currently inherit only from `ModelOutput`, this seems a little confusing.
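For illustration (editor's sketch), a defensive access pattern that does not assume the attribute is present or populated:
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
outputs = model(**tokenizer("hello world", return_tensors="pt"))
# hidden_states is only populated when output_hidden_states is enabled,
# and not every output class is guaranteed to define it
hidden_states = getattr(outputs, "hidden_states", None)
```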
Am I missing something? Is it not something that can be assumed? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11013/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11012 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11012/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11012/comments | https://api.github.com/repos/huggingface/transformers/issues/11012/events | https://github.com/huggingface/transformers/pull/11012 | 848,275,273 | MDExOlB1bGxSZXF1ZXN0NjA3MjM3OTQ4 | 11,012 | Add multi-class, multi-label and regression to transformers | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,620 | 1,620 | MEMBER | null | This PR adds support for single/multi column regression and single/multi label classification tasks to `SequenceClassification` models. The `problem_type` can be specified in the config: `regression`, `single_label_classification`, `multi_label_classification`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11012/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11012",
"html_url": "https://github.com/huggingface/transformers/pull/11012",
"diff_url": "https://github.com/huggingface/transformers/pull/11012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11012.patch",
"merged_at": 1620109420000
} |
https://api.github.com/repos/huggingface/transformers/issues/11011 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11011/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11011/comments | https://api.github.com/repos/huggingface/transformers/issues/11011/events | https://github.com/huggingface/transformers/issues/11011 | 848,073,466 | MDU6SXNzdWU4NDgwNzM0NjY= | 11,011 | a memory leak in evaluation | {
"login": "nooblyh",
"id": 44236710,
"node_id": "MDQ6VXNlcjQ0MjM2NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/44236710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nooblyh",
"html_url": "https://github.com/nooblyh",
"followers_url": "https://api.github.com/users/nooblyh/followers",
"following_url": "https://api.github.com/users/nooblyh/following{/other_user}",
"gists_url": "https://api.github.com/users/nooblyh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nooblyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nooblyh/subscriptions",
"organizations_url": "https://api.github.com/users/nooblyh/orgs",
"repos_url": "https://api.github.com/users/nooblyh/repos",
"events_url": "https://api.github.com/users/nooblyh/events{/privacy}",
"received_events_url": "https://api.github.com/users/nooblyh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Which model are you using? There is no reason the predictions for QQP should OOM even your GPU, unless the model is outputting more than the logits.",
"Thank you very much for your reply! My model config is as below:\r\n```JSON\r\n{\r\n \"architectures\": [\r\n \"AlbertForMaskedLM\"\r\n ],\r\n \"attention_probs_dropout_prob\": 0,\r\n \"bos_token_id\": 2,\r\n \"classifier_dropout_prob\": 0.1,\r\n \"down_scale_factor\": 1,\r\n \"embedding_size\": 128,\r\n \"eos_token_id\": 3,\r\n \"gap_size\": 0,\r\n \"hidden_act\": \"gelu_new\",\r\n \"hidden_dropout_prob\": 0,\r\n \"hidden_size\": 2048,\r\n \"initializer_range\": 0.02,\r\n \"inner_group_num\": 1,\r\n \"intermediate_size\": 3072,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"model_type\": \"albert\",\r\n \"net_structure_type\": 0,\r\n \"num_attention_heads\": 16,\r\n \"num_hidden_groups\": 1,\r\n \"num_hidden_layers\": 12,\r\n \"num_memory_blocks\": 0,\r\n \"pad_token_id\": 0,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 30000\r\n}\r\n```\r\n\r\nAnd I'm loading my model like this:\r\n```Python\r\nstate_dict = torch.load(os.path.join(model_args.model_name_or_path, \"checkpoint.pth\"))\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n pretrained_model_name_or_path=None,\r\n config=config,\r\n state_dict=state_dict,\r\n use_auth_token=True if model_args.use_auth_token else None\r\n)\r\n```\r\n\r\nAnd these are all the named parameters:\r\n```\r\nalbert.embeddings.position_ids torch.Size([1, 512])\r\nalbert.embeddings.word_embeddings.weight torch.Size([30000, 128])\r\nalbert.embeddings.position_embeddings.weight torch.Size([512, 128])\r\nalbert.embeddings.token_type_embeddings.weight torch.Size([2, 128])\r\nalbert.embeddings.LayerNorm.weight torch.Size([128])\r\nalbert.embeddings.LayerNorm.bias torch.Size([128])\r\nalbert.encoder.embedding_hidden_mapping_in.weight torch.Size([2048, 128])\r\nalbert.encoder.embedding_hidden_mapping_in.bias torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.weight torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.bias torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.weight torch.Size([2048, 2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.bias torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.weight torch.Size([2048, 2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.bias torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.weight torch.Size([2048, 2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.bias torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.weight torch.Size([2048, 2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.bias torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.weight torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.bias torch.Size([2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.ffn.weight torch.Size([3072, 2048])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.ffn.bias torch.Size([3072])\r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.weight torch.Size([2048, 3072])\r\nalbert.embeddings.LayerNorm.weight torch.Size([128]) \r\nalbert.embeddings.LayerNorm.bias torch.Size([128]) \r\nalbert.encoder.embedding_hidden_mapping_in.weight torch.Size([2048, 128]) \r\nalbert.encoder.embedding_hidden_mapping_in.bias 
torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.weight torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.full_layer_layer_norm.bias torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.weight torch.Size([2048, 2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.query.bias torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.weight torch.Size([2048, 2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.key.bias torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.weight torch.Size([2048, 2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.value.bias torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.weight torch.Size([2048, 2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.dense.bias torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.weight torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.attention.LayerNorm.bias torch.Size([2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.ffn.weight torch.Size([3072, 2048]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.ffn.bias torch.Size([3072]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.weight torch.Size([2048, 3072]) \r\nalbert.encoder.albert_layer_groups.0.albert_layers.0.ffn_output.bias torch.Size([2048]) \r\nalbert.pooler.weight torch.Size([2048, 2048]) \r\nalbert.pooler.bias torch.Size([2048]) \r\nclassifier.weight torch.Size([2, 2048]) \r\nclassifier.bias torch.Size([2]) \r\n```\r\nThe training and evaluation code is run_glue.py.",
"By the way, my model is pretrained(distilled) in a distributed manner(distributedDataParallel). I'm wondering if it is ok to run GLUE tasks this way? I will be grateful for any help you can provide. @sgugger ",
"I find these logs and I guess this is why my training and evaluation failed.\r\n```\r\n[INFO|trainer.py:472] 2021-04-04 23:14:56,386 >> The following columns in the training set don't have a corresponding argument in `AlbertForSequenceClassification.forward` and have been ignored: question1, question2, idx.\r\n[INFO|trainer.py:472] 2021-04-04 23:14:56,389 >> The following columns in the evaluation set don't have a corresponding argument in `AlbertForSequenceClassification.forward` and have been ignored: question1, question2, idx.\r\n```\r\nBut I am confused about what happened inside the trainer.",
"The fact the model has been trained in a distributed manner is not relevant and shouldn't impact this. The warning you get is also not related and normal if you're running the `run_glue` script: it's just informing you that the `Trainer` is dropping those columns after the preprocessing, since they is no model argument matching.\r\n\r\nI'm trying to reproduce but everything is working fine on my side. If you just use a randomly initialized ALBERT with this config do you have the same problem? (I can run evaluation without problem on my side for that)",
"Thank you for your reply. I double-check my config today and find that I am reusing the config from distillation and the output_hidden_states is set to true......I am very sorry for my carelessness and thank you so much for your time and attention.",
"Ah I understand better now :-) "
] | 1,617 | 1,617 | 1,617 | NONE | null | ## Environment info
- `transformers` version: 4.4.0.dev0
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...):
albert-base-v2, but with a hidden_size of 2048 and num_attention_heads of 16, distilled from albert-xlarge-v2.
The problem arises when using:
* [x] the official example scripts: (give details below)
examples/text-classification/run_glue.py
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
GLUE QQP task
## To reproduce
Steps to reproduce the behavior:
I want to evaluate my model on the GLUE QQP task. If I don't use eval_accumulation_step, my GPUs go OOM. But if I do use eval_accumulation_step, my memory usage grows up to the memory limit (>250GB) until the first process is killed. So I assumed that there might be a memory leak.
My running script is as below.
```
CUDA_VISIBLE_DEVICES=0 ~/.conda/envs/thesis-lyh/bin/python run_glue.py \
--model_name_or_path $MODEL_PATH \
--task_name $TASK_NAME \
--eval_accumulation_step 1 \
--do_eval \
--max_seq_length 128 \
--per_device_eval_batch_size 1 \
--output_dir output/glue/$TASK_NAME/$MODEL_NAME/
```
No matter what batch_size and accumulation_step are set to, the above problem still occurs.
But everything works fine with models hosted on the model hub, and with a smaller model I distilled in the same way.
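As the comments above note, the runaway memory in this case turned out to come from a config reused from distillation that still had `output_hidden_states` set to true, so every layer's hidden states were accumulated on CPU during evaluation. A minimal sketch of the fix (the local checkpoint path is illustrative):
```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# Hypothetical local checkpoint; the key point is resetting the flag
# inherited from the distillation config before running evaluation.
config = AutoConfig.from_pretrained("./my-distilled-albert")
config.output_hidden_states = False
model = AutoModelForSequenceClassification.from_pretrained(
    "./my-distilled-albert", config=config
)
```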
## Expected behavior
I have 250GB of RAM, which should be enough to hold the results.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11011/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11010 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11010/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11010/comments | https://api.github.com/repos/huggingface/transformers/issues/11010/events | https://github.com/huggingface/transformers/issues/11010 | 848,030,700 | MDU6SXNzdWU4NDgwMzA3MDA= | 11,010 | run_seq2seq.py meet bug in using huggingface datasets billsum | {
"login": "LeopoldACC",
"id": 44536699,
"node_id": "MDQ6VXNlcjQ0NTM2Njk5",
"avatar_url": "https://avatars.githubusercontent.com/u/44536699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeopoldACC",
"html_url": "https://github.com/LeopoldACC",
"followers_url": "https://api.github.com/users/LeopoldACC/followers",
"following_url": "https://api.github.com/users/LeopoldACC/following{/other_user}",
"gists_url": "https://api.github.com/users/LeopoldACC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeopoldACC/subscriptions",
"organizations_url": "https://api.github.com/users/LeopoldACC/orgs",
"repos_url": "https://api.github.com/users/LeopoldACC/repos",
"events_url": "https://api.github.com/users/LeopoldACC/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeopoldACC/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | run code below
```shell
python examples/seq2seq/run_seq2seq_tune.py --model_name_or_path /home2/zhenggo1/checkpoint/pegasus_billsum --do_eval --task summarization_billsum --dataset_name billsum --output_dir /home2/zhenggo1/checkpoint/pegasus_billsum --per_device_train_batch_size=8 --per_device_eval_batch_size=8 --predict_with_generate --tune --tuned_checkpoint="/home2/zhenggo1/checkpoint/pegasus_billsum" --max_source_length 1024 --max_target_length=256 --val_max_target_length=256 --do_calibration
```
The bug below, in my opinion, shows that the newest dataset-processing code doesn't match billsum, which ships no `validation` split:
```python
Traceback (most recent call last):
File "examples/seq2seq/run_seq2seq_tune.py", line 694, in <module>
main()
File "examples/seq2seq/run_seq2seq_tune.py", line 374, in main
column_names = datasets["validation"].column_names
KeyError: 'validation'
```
Loading the data via `dataset_name` and printing it shows the following:
```
DatasetDict({
train: Dataset({
features: ['text', 'summary', 'title'],
num_rows: 18949
})
test: Dataset({
features: ['text', 'summary', 'title'],
num_rows: 3269
})
ca_test: Dataset({
features: ['text', 'summary', 'title'],
num_rows: 1237
})
})
```
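Since billsum only exposes `train`, `test`, and `ca_test`, one possible workaround (a minimal sketch; the split fraction and seed are illustrative) is to carve a validation set out of the training split before the script looks up `datasets["validation"]`:
```python
from datasets import load_dataset

raw = load_dataset("billsum")
# billsum ships no "validation" split, so derive one from "train".
split = raw["train"].train_test_split(test_size=0.1, seed=42)
raw["train"] = split["train"]
raw["validation"] = split["test"]
print(raw)
```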
I want to try another way and load the dataset locally, but the oldest version of the dataset ships as the files below:
```
train.source
train.target
val.source
val.target
test.source
test.target
```
which could be processed by the oldest code:
```python
train_dataset = (
dataset_class(
tokenizer,
type_path="train",
data_dir=data_args.data_dir,
n_obs=data_args.n_train,
max_target_length=data_args.max_target_length,
max_source_length=data_args.max_source_length,
prefix=model.config.prefix or "",
)
if training_args.do_train
else None
)
```
but not by the newest code:
```python
if data_args.dataset_name is not None:
# Downloading and loading a dataset from the hub.
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
else:
data_files = {}
if data_args.train_file is not None:
data_files["train"] = data_args.train_file
extension = data_args.train_file.split(".")[-1]
if data_args.validation_file is not None:
data_files["validation"] = data_args.validation_file
extension = data_args.validation_file.split(".")[-1]
if data_args.test_file is not None:
data_files["test"] = data_args.test_file
extension = data_args.test_file.split(".")[-1]
datasets = load_dataset(extension, data_files=data_files)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11010/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11009 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11009/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11009/comments | https://api.github.com/repos/huggingface/transformers/issues/11009/events | https://github.com/huggingface/transformers/issues/11009 | 847,973,413 | MDU6SXNzdWU4NDc5NzM0MTM= | 11,009 | How to load weights from a private server? | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"My workaround was to manually download checkpoints using `cached_file` function to local, and instantiate a model from the download file using `from_pretrained`.",
"Yes that's indeed the preferred workaround. Thanks!"
] | 1,617 | 1,618 | 1,618 | CONTRIBUTOR | null | Hi, thank you for the great library!
I am trying to instantiate a model with weights hosted on my private server. Looking at the [`is_remote_url`](https://github.com/huggingface/transformers/blob/8780caa388c7b2aa937454ed96bcdd3f097f851d/src/transformers/modeling_utils.py#L1011) function, it seems that transformers supports loading from a private server, but it is a bit tricky.
```python
BertModel.from_pretrained('http://my-server/my-bert-cased/pytorch_model.bin') # cannot find config
BertModel.from_pretrained('http://my-server/my-bert-cased/config.json') # finds config, but cannot find model weights
BertModel.from_pretrained('http://my-server/my-bert-cased', config='http://my-server/my-bert-cased/config.json') # works!
```
Although the third one works, it is cumbersome, as I need to download the config from the private server to my local machine beforehand.
I would appreciate it if someone could share or point to a better way!
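For reference, a minimal sketch of the workaround discussed in the comments above -- download the files from the private server first, then load from the local directory (the server URL and directory name are illustrative):
```python
from pathlib import Path

import requests
from transformers import BertModel

local_dir = Path("my-bert-cased")
local_dir.mkdir(exist_ok=True)
# Fetch both the config and the weights before calling from_pretrained.
for name in ("config.json", "pytorch_model.bin"):
    resp = requests.get(f"http://my-server/my-bert-cased/{name}")
    resp.raise_for_status()
    (local_dir / name).write_bytes(resp.content)

model = BertModel.from_pretrained(str(local_dir))
```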
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11009/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11008 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11008/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11008/comments | https://api.github.com/repos/huggingface/transformers/issues/11008/events | https://github.com/huggingface/transformers/issues/11008 | 847,776,651 | MDU6SXNzdWU4NDc3NzY2NTE= | 11,008 | error: fine-tunes language model with added_tokens | {
"login": "nghuyong",
"id": 16462374,
"node_id": "MDQ6VXNlcjE2NDYyMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/16462374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nghuyong",
"html_url": "https://github.com/nghuyong",
"followers_url": "https://api.github.com/users/nghuyong/followers",
"following_url": "https://api.github.com/users/nghuyong/following{/other_user}",
"gists_url": "https://api.github.com/users/nghuyong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nghuyong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nghuyong/subscriptions",
"organizations_url": "https://api.github.com/users/nghuyong/orgs",
"repos_url": "https://api.github.com/users/nghuyong/repos",
"events_url": "https://api.github.com/users/nghuyong/events{/privacy}",
"received_events_url": "https://api.github.com/users/nghuyong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you share the command you are using to launch the script? I'm trying to reproduce but it works fine for me.\r\nAlso your error seems like a CUDA setup error, so is the script running properly without the change?",
"@sgugger \r\n\r\n```\r\nexport BASE_PATH=/data/huyong/code/socialbert\r\nexport CUDA_VISIBLE_DEVICES=1\r\npython run_mlm.py \\\r\n --config_name $BASE_PATH/pretrained_models/bert\\\r\n --model_type bert \\\r\n --max_seq_length 128 \\\r\n --preprocessing_num_workers 20 \\\r\n --model_name_or_path $BASE_PATH/pretrained_models/bert \\\r\n --train_file $BASE_PATH/data/mini.txt \\\r\n --line_by_line \\\r\n --do_train \\\r\n --save_total_limit 3 \\\r\n --per_device_train_batch_size 8 \\\r\n --max_train_samples 100000 \\\r\n --output_dir $BASE_PATH/checkpoint/bert\r\n```",
"Thanks, but no one will be able to help you if you're using a personal model you don't share, as we can't debug something we can't reproduce. Also, you did not tell us if the script was running fine before the change.",
"@sgugger \r\nThanks. \r\nActually, I don't use my personal model, and the model I use to continue pre-train is the [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext). And I manually download three files: `vocab.txt`,`config.json` and `pytorch_model.bin`, and run the script by specifying the model dir and get wrong. But when I directly use the model name like the following, and it works!\r\n```bash\r\nexport BASE_PATH=/data/huyong/code/socialbert\r\nexport CUDA_VISIBLE_DEVICES=1\r\npython run_mlm.py \\\r\n --config_name hfl/chinese-roberta-wwm-ext \\\r\n --model_name_or_path hfl/chinese-roberta-wwm-ext \\\r\n --model_type bert \\\r\n --max_seq_length 128 \\\r\n --preprocessing_num_workers 20 \\\r\n --train_file $BASE_PATH/data/mini.txt \\\r\n --line_by_line \\\r\n --do_train \\\r\n --save_total_limit 3 \\\r\n --per_device_train_batch_size 8 \\\r\n --max_train_samples 100000 \\\r\n --output_dir $BASE_PATH/checkpoint/bert\r\n```\r\nThanks a lot !"
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: version: 4.3.3
- Platform: Linux-4.15.0-29-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik, @LysandreJik
## Information
I fine-tune BERT on my own social media data, following the instructions in `examples/language-modeling/README.md`. **I follow the official run_mlm.py script; the only change is that I add some new tokens after the tokenizer is initialized, and then I get the CUDA runtime error.** If I don't add the new tokens, it works well.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Only add one line in `examples/language-modeling/run_mlm.py`
start from run_mlm.py L291:
https://github.com/huggingface/transformers/blob/838f83d84ccf57f968e0ace7f400e43b92643552/examples/language-modeling/run_mlm.py#L291
```Python
...
if model_args.tokenizer_name:
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
elif model_args.model_name_or_path:
tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs)
else:
raise ValueError(
"You are instantiating a new tokenizer from scratch. This is not supported by this script."
"You can do it from another script, save it, and load it from here, using --tokenizer_name."
)
# only add this line!
tokenizer.add_tokens(['[awsl]', '[happy]', '[doge]', ... , '[cry]'])
...
```
running log
```
[INFO|configuration_utils.py:485] 2021-04-01 10:49:28,166 >> Model config BertConfig {
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"directionality": "bidi",
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"position_embedding_type": "absolute",
"transformers_version": "4.3.3",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 21128
}
[INFO|modeling_utils.py:1025] 2021-04-01 10:49:28,167 >> loading weights file /data/huyong/code/socialbert/pretrained_models/roberta/pytorch_model.bin
[WARNING|modeling_utils.py:1135] 2021-04-01 10:49:31,389 >> Some weights of the model checkpoint at /data/huyong/code/socialbert/pretrained_models/roberta were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[INFO|modeling_utils.py:1152] 2021-04-01 10:49:31,389 >> All the weights of BertForMaskedLM were initialized from the model checkpoint at /data/huyong/code/socialbert/pretrained_models/roberta.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForMaskedLM for predictions without further training.
[INFO|trainer.py:837] 2021-04-01 10:49:31,469 >> ***** Running training *****
[INFO|trainer.py:838] 2021-04-01 10:49:31,469 >> Num examples = 100000
[INFO|trainer.py:839] 2021-04-01 10:49:31,469 >> Num Epochs = 3
[INFO|trainer.py:840] 2021-04-01 10:49:31,469 >> Instantaneous batch size per device = 8
[INFO|trainer.py:841] 2021-04-01 10:49:31,469 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:842] 2021-04-01 10:49:31,469 >> Gradient Accumulation steps = 1
[INFO|trainer.py:843] 2021-04-01 10:49:31,469 >> Total optimization steps = 37500
0%| | 0/37500 [00:00<?, ?it/s]
0%| | 1/37500 [00:00<1:48:39, 5.75it/s]/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [485,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/pytorch/aten/src/THC/THCTensorIndex.cu:361: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [485,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
... (the identical assertion repeats for every thread in blocks [485,0,0] through [488,0,0]; duplicate lines trimmed) ...
Traceback (most recent call last):
File "mlm.py", line 537, in <module>
main()
File "mlm.py", line 503, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/trainer.py", line 940, in train
tr_loss += self.training_step(model, inputs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/trainer.py", line 1304, in training_step
loss = self.compute_loss(model, inputs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/trainer.py", line 1334, in compute_loss
outputs = model(**inputs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py", line 1315, in forward
return_dict=return_dict,
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py", line 976, in forward
return_dict=return_dict,
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py", line 574, in forward
output_attentions,
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py", line 496, in forward
self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/modeling_utils.py", line 1787, in apply_chunking_to_forward
return forward_fn(*input_tensors)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py", line 507, in feed_forward_chunk
intermediate_output = self.intermediate(attention_output)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/transformers/models/bert/modeling_bert.py", line 410, in forward
hidden_states = self.dense(hidden_states)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/huyong/miniconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1372, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I found that `run_mlm.py` already has `model.resize_token_embeddings(len(tokenizer))`, so why do I still get the error? Thanks
<!-- A clear and concise description of what you would expect to happen. -->
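For reference, a minimal sketch of the expected order of operations when extending the vocabulary (the model name and tokens are illustrative). If the embedding matrix were not resized after `add_tokens`, the new token ids would index past the embedding table, which is exactly what the `srcIndex < srcSelectDimSize` assertion reports:
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext")
model = AutoModelForMaskedLM.from_pretrained("hfl/chinese-roberta-wwm-ext")

num_added = tokenizer.add_tokens(["[awsl]", "[happy]", "[doge]", "[cry]"])
model.resize_token_embeddings(len(tokenizer))  # must run after add_tokens

# Sanity check: the embedding table now covers every tokenizer id.
assert model.get_input_embeddings().num_embeddings == len(tokenizer)
```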
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11008/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/11008/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11007 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11007/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11007/comments | https://api.github.com/repos/huggingface/transformers/issues/11007/events | https://github.com/huggingface/transformers/issues/11007 | 847,769,869 | MDU6SXNzdWU4NDc3Njk4Njk= | 11,007 | about .py file | {
"login": "rabrabrab",
"id": 81627288,
"node_id": "MDQ6VXNlcjgxNjI3Mjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/81627288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabrabrab",
"html_url": "https://github.com/rabrabrab",
"followers_url": "https://api.github.com/users/rabrabrab/followers",
"following_url": "https://api.github.com/users/rabrabrab/following{/other_user}",
"gists_url": "https://api.github.com/users/rabrabrab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabrabrab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabrabrab/subscriptions",
"organizations_url": "https://api.github.com/users/rabrabrab/orgs",
"repos_url": "https://api.github.com/users/rabrabrab/repos",
"events_url": "https://api.github.com/users/rabrabrab/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabrabrab/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, could you provide the location of the wrong links? Without additional information we cannot help you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | I can't download the file "convert_tf_checkpoint_to_pytorch.py" or the three other .py files that you set hyperlinks on. If I click on a link, it gives me a 404. Where can I get them? Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11007/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11006 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11006/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11006/comments | https://api.github.com/repos/huggingface/transformers/issues/11006/events | https://github.com/huggingface/transformers/issues/11006 | 847,755,321 | MDU6SXNzdWU4NDc3NTUzMjE= | 11,006 | "Converting Tensorflow Checkpoints" meets ('Pointer shape torch.Size([312]) and array shape (128,) mismatched', torch.Size([312]), (128,)) | {
"login": "LivinLuo1993",
"id": 44887637,
"node_id": "MDQ6VXNlcjQ0ODg3NjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/44887637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LivinLuo1993",
"html_url": "https://github.com/LivinLuo1993",
"followers_url": "https://api.github.com/users/LivinLuo1993/followers",
"following_url": "https://api.github.com/users/LivinLuo1993/following{/other_user}",
"gists_url": "https://api.github.com/users/LivinLuo1993/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LivinLuo1993/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LivinLuo1993/subscriptions",
"organizations_url": "https://api.github.com/users/LivinLuo1993/orgs",
"repos_url": "https://api.github.com/users/LivinLuo1993/repos",
"events_url": "https://api.github.com/users/LivinLuo1993/events{/privacy}",
"received_events_url": "https://api.github.com/users/LivinLuo1993/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | when "Converting Tensorflow Checkpoints", I see this "Pointer shape {pointer.shape} and array shape {array.shape} mismatched"
AssertionError: ('Pointer shape torch.Size([312]) and array shape (128,) mismatched', torch.Size([312]), (128,)), and the pretrainmodel comes from https://github.com/ZhuiyiTechnology/pretrained-models
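A 312-vs-128 mismatch usually means the config passed to the converter does not describe the same architecture as the checkpoint (for instance, a factorized embedding of size 128 versus a hidden size of 312); note too that ALBERT-style checkpoints, which factorize their embeddings, need the ALBERT converter rather than the BERT one. A hedged sketch of the BERT conversion, with paths illustrative and the config required to match the checkpoint's actual dimensions:
```python
from transformers.models.bert.convert_bert_original_tf_checkpoint_to_pytorch import (
    convert_tf_checkpoint_to_pytorch,
)

convert_tf_checkpoint_to_pytorch(
    "./zhuiyi_pretrained/bert_model.ckpt",   # TF checkpoint prefix
    "./zhuiyi_pretrained/bert_config.json",  # must match the checkpoint's sizes
    "./pytorch_model.bin",
)
```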
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11006/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11005 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11005/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11005/comments | https://api.github.com/repos/huggingface/transformers/issues/11005/events | https://github.com/huggingface/transformers/issues/11005 | 847,400,200 | MDU6SXNzdWU4NDc0MDAyMDA= | 11,005 | ReduceLROnPlateau-like functionality? | {
"login": "tchang1997",
"id": 30159285,
"node_id": "MDQ6VXNlcjMwMTU5Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30159285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tchang1997",
"html_url": "https://github.com/tchang1997",
"followers_url": "https://api.github.com/users/tchang1997/followers",
"following_url": "https://api.github.com/users/tchang1997/following{/other_user}",
"gists_url": "https://api.github.com/users/tchang1997/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tchang1997/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tchang1997/subscriptions",
"organizations_url": "https://api.github.com/users/tchang1997/orgs",
"repos_url": "https://api.github.com/users/tchang1997/repos",
"events_url": "https://api.github.com/users/tchang1997/events{/privacy}",
"received_events_url": "https://api.github.com/users/tchang1997/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,617 | 1,617 | null | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
**Description:** Dynamic learning rate reduction upon metric saturation, as in `torch.optim.lr_scheduler.ReduceLROnPlateau`, integrated into the `Trainer` API.
Alternatively, if there's any way (even if hacky) to get dynamic learning rate reduction using the `Trainer` API as it is, that would be extremely helpful as well.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
LR schedules are a commonly used trick for ML optimization, and the `transformers` library already provides a significant number of baseline schedules (i.e. linear, cosine schedulers, warmup/no-warmup, restarts). However, these schedules are all static: updates to them occur at fixed steps in the optimization -- one can always tell what the learning rate at, say, step 1000 will be given these fixed schedules.
Reducing learning rate dynamically is also a common practical technique, usually applied when loss saturates (fails to improve after N iterations).
The difficulty is that dynamic learning rate reduction follows a non-fixed update schedule, meaning that working within the `LambdaLR` framework used by the other schedulers is less straightforward.
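For reference, a bare-bones sketch of the PyTorch scheduler this request mirrors, run outside the `Trainer` loop (the tiny model and random metric are toy stand-ins):
```python
import torch
from torch import nn

model = nn.Linear(4, 1)  # stand-in for a transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=3
)
for epoch in range(20):
    val_loss = torch.rand(1).item()  # stand-in for an eval metric
    scheduler.step(val_loss)         # LR is reduced once the metric plateaus
    print(epoch, optimizer.param_groups[0]["lr"])
```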
## Your contribution
I don't have a working implementation yet. At a high level, I tried to implement this myself as a `TrainerCallback`, modeled on both the `EarlyStoppingCallback` in the `transformers` library and the `ReduceLROnPlateau` implementation in PyTorch. I was able to modify the optimizer object; however, learning rate updates to the optimizer would get overwritten by the scheduler. In any case, I also don't know whether it's good style -- or even possible -- to modify the optimizer and scheduler this way from a callback, since the `control` object seems to be the only thing a `TrainerCallback` is meant to change.
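To make the idea concrete, below is a minimal sketch of the kind of callback I have in mind. Everything here is illustrative: the metric name (`eval_loss`) and the `factor`/`patience` defaults are assumptions, it only handles lower-is-better metrics, it presumes the callback handler passes the `optimizer` in `kwargs`, and -- per the problem described above -- the change only sticks if the LR scheduler is a no-op (e.g. a constant schedule).
```python
from transformers import TrainerCallback

class ReduceLROnPlateauCallback(TrainerCallback):
    """Sketch: multiply the LR by `factor` when `metric` stops improving."""

    def __init__(self, metric="eval_loss", factor=0.5, patience=2):
        self.metric = metric      # key looked up in the eval metrics dict
        self.factor = factor      # multiplicative LR decay
        self.patience = patience  # evaluations without improvement before decaying
        self.best = None
        self.bad_evals = 0

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        value = (metrics or {}).get(self.metric)
        if value is None:
            return
        if self.best is None or value < self.best:
            self.best = value
            self.bad_evals = 0
            return
        self.bad_evals += 1
        if self.bad_evals >= self.patience:
            optimizer = kwargs.get("optimizer")
            if optimizer is not None:
                for group in optimizer.param_groups:
                    # This is exactly the update a LambdaLR-style scheduler
                    # overwrites on its next step, hence the caveat above.
                    group["lr"] *= self.factor
            self.bad_evals = 0
```
It would then be registered via `Trainer(..., callbacks=[ReduceLROnPlateauCallback()])`.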
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11005/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11004 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11004/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11004/comments | https://api.github.com/repos/huggingface/transformers/issues/11004/events | https://github.com/huggingface/transformers/issues/11004 | 847,393,804 | MDU6SXNzdWU4NDczOTM4MDQ= | 11,004 | Getting `raise NotImplementedError` for base_model.get_input_embeddings() when upgrading from pytorch-transformers | {
"login": "gsrivas4",
"id": 23170843,
"node_id": "MDQ6VXNlcjIzMTcwODQz",
"avatar_url": "https://avatars.githubusercontent.com/u/23170843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gsrivas4",
"html_url": "https://github.com/gsrivas4",
"followers_url": "https://api.github.com/users/gsrivas4/followers",
"following_url": "https://api.github.com/users/gsrivas4/following{/other_user}",
"gists_url": "https://api.github.com/users/gsrivas4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gsrivas4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gsrivas4/subscriptions",
"organizations_url": "https://api.github.com/users/gsrivas4/orgs",
"repos_url": "https://api.github.com/users/gsrivas4/repos",
"events_url": "https://api.github.com/users/gsrivas4/events{/privacy}",
"received_events_url": "https://api.github.com/users/gsrivas4/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hello! Do you have a reproducible code example so that we can try to understand what's happening here? Thank you!",
"I have generated a simplified version of the original Oscar (https://github.com/microsoft/Oscar) codebase here - https://github.com/gsrivas4/Oscar_latest. The branch `old_transformers` -https://github.com/gsrivas4/Oscar_latest/tree/old_transformers uses an old version of hugging face without an issue. However, the branch `latest_transformers` - https://github.com/gsrivas4/Oscar_latest/tree/latest_transformer gets below error when I run the command `oscar/run_captioning.py --model_name_or_path pretrained_models/base-vg-labels/ep_67_588997 --do_train --do_lower_case --evaluate_during_training --add_od_labels --learning_rate 0.00003 --per_gpu_train_batch_size 64 --num_train_epochs 30 --save_steps 5000 --output_dir output/`:\r\n```\r\nTraceback (most recent call last):\r\n File \"oscar/run_captioning.py\", line 1010, in <module>\r\n main()\r\n File \"oscar/run_captioning.py\", line 966, in main\r\n from_tf=bool('.ckpt' in args.model_name_or_path), config=config)\r\n File \"/usr/local/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 1185, in from_pretrained\r\n model.tie_weights()\r\n File \"/usr/local/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 497, in tie_weights\r\n self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())\r\n File \"/usr/local/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 462, in get_input_embeddings\r\n return base_model.get_input_embeddings()\r\n File \"/usr/local/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 464, in get_input_embeddings\r\n raise NotImplementedError\r\nNotImplementedError\r\n``` \r\n\r\nTo replicate the experiment, follow the [README.md](https://github.com/gsrivas4/Oscar_latest/blob/old_transformers/README.md) file to use old version of transformers - https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e. Follow the [README.md](https://github.com/gsrivas4/Oscar_latest/blob/latest_transformer/README.md) to run the code with latest transformers. \r\n\r\nThe platform information is below:\r\nPlatform: x86_64 GNU/Linux\r\nPython version: 3.6.8\r\nPyTorch version (GPU?): 1.7.0+cu101 (GPU)\r\nTensorflow version (GPU?): 2.3.0 (GPU)\r\nUsing GPU in script?: yes\r\nUsing distributed or parallel set-up in script?: No\r\n\r\nLet me know if you have any issues generating the setup.\r\n",
"It seems your `BertForImageCaptioning` doesn't have a `get_input_embeddings()` method, and neither does your `CaptionPreTrainedModel`. \r\n\r\nYou should implement that method on either of those in order to be able to resize them, like it is done in the `BertModel` for example:\r\n\r\nhttps://github.com/huggingface/transformers/blob/6c25f5228e7fb48a520f63ee82dd9ce25b27d6df/src/transformers/models/bert/modeling_bert.py#L853-L854\r\n\r\nSorry for the inconvenience!\r\n",
"@LysandreJik I understand that I have to define the function `get_input_embeddings()` and I have also looked at the sample example where this function is defined - https://github.com/huggingface/transformers/blob/master/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py#L487-L488. It would be great if some description is given about the inputs and outputs of this function in a bit more detailed way. It would be also beneficial if the details about this function are documented in the migration document.\r\n ",
"@LysandreJik I could resolve the issue by adding definition for the function at following lines in my code - https://github.com/gsrivas4/Oscar_latest/blob/latest_transformer/oscar/modeling/modeling_bert.py#L190-L191. Thanks for the help.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | # π Migration
## Information
Getting `raise NotImplementedError` at these lines - https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L474-L477 - when I am trying to upgrade my code from pytorch-transformers to transformers.
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below): Not sure
* [ ] my own modified scripts: (give details below): Yes
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) No
* [ ] my own task or dataset: (give details below): No
## Details
I am using the Oscar repo (https://github.com/microsoft/Oscar), which uses an older version of Hugging Face pytorch-transformers (https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e). I am trying to upgrade the repo to use the latest version of transformers (https://github.com/huggingface/transformers). However, I am getting the error below:
```
Traceback (most recent call last):
File "oscar/run_captioning_airsplay.py", line 1019, in <module>
main()
File "oscar/run_captioning_airsplay.py", line 966, in main
from_tf=bool('.ckpt' in args.model_name_or_path), config=config)
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/src/transformers/modeling_utils.py", line 1188, in from_pretrained
model.tie_weights()
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/src/transformers/modeling_utils.py", line 504, in tie_weights
self._tie_or_clone_weights(output_embeddings, self.get_input_embeddings())
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/src/transformers/modeling_utils.py", line 469, in get_input_embeddings
return base_model.get_input_embeddings()
File "/home/default/ephemeral_drive/work/image_captioning/Oscar_edited/transformers/src/transformers/modeling_utils.py", line 471, in get_input_embeddings
raise NotImplementedError
NotImplementedError
```
The error occurs at this block in the transformers code - https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L474-L477. My code runs fine when I use an older version of Hugging Face transformers - https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e, possibly because pytorch-transformers did not have a requirement that `set_input_embeddings()` should be defined for base_model. The base model that I am using is a custom-defined model `BertForImageCaptioning` (https://github.com/microsoft/Oscar/blob/df79152b708c3c46f2dc93324776a27406ccc634/oscar/modeling/modeling_bert.py#L604), which has a custom-defined parent class `CaptionPreTrainedModel` (https://github.com/microsoft/Oscar/blob/df79152b708c3c46f2dc93324776a27406ccc634/oscar/modeling/modeling_utils.py#L21), which has a parent class `BertPreTrainedModel`.
I have not seen any mention of how to deal with this issue in the migration documents from pytorch-transformers or from transformers 3.x (https://huggingface.co/transformers/migration.html#migrating-from-transformers-v3-x-to-v4-x).
I have looked into examples to check how to define the function, but this did not give enough detail to define the function on my side - https://github.com/huggingface/transformers/blob/master/examples/research_projects/movement-pruning/emmental/modeling_bert_masked.py#L487-L488.
How should I define the function `get_input_embeddings()` for my use case, and what are the guidelines for doing so? Are there any examples explaining the process of defining the function?
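For future readers, the pattern used by `BertModel` in the library, adapted to this model, would look roughly like the following. This is a sketch only: it assumes the captioning model keeps the usual `self.bert.embeddings.word_embeddings` attribute layout, and the names may differ in the Oscar code.
```python
class BertForImageCaptioning(CaptionPreTrainedModel):
    # ... existing __init__ / forward ...

    def get_input_embeddings(self):
        # Return the nn.Embedding that maps input token ids to vectors.
        return self.bert.embeddings.word_embeddings

    def set_input_embeddings(self, value):
        # Replace it, so tie_weights() and resize_token_embeddings() work.
        self.bert.embeddings.word_embeddings = value
```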
## Environment info
- `transformers` version: https://github.com/huggingface/transformers
- Platform: x86_64 GNU/Linux
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.0+cu101 (GPU)
- Tensorflow version (GPU?): 2.3.0 (GPU)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: No
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): https://github.com/huggingface/transformers/tree/067923d3267325f525f4e46f357360c191ba562e
## Checklist
- [x] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [x] I checked if a related official extension example runs on my machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11004/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11003 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11003/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11003/comments | https://api.github.com/repos/huggingface/transformers/issues/11003/events | https://github.com/huggingface/transformers/issues/11003 | 847,294,653 | MDU6SXNzdWU4NDcyOTQ2NTM= | 11,003 | conda install transformers (not working) behaving differently from pip install transformers (working) for CentOS 7.9 | {
"login": "harrisonbay",
"id": 18337728,
"node_id": "MDQ6VXNlcjE4MzM3NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18337728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harrisonbay",
"html_url": "https://github.com/harrisonbay",
"followers_url": "https://api.github.com/users/harrisonbay/followers",
"following_url": "https://api.github.com/users/harrisonbay/following{/other_user}",
"gists_url": "https://api.github.com/users/harrisonbay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harrisonbay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harrisonbay/subscriptions",
"organizations_url": "https://api.github.com/users/harrisonbay/orgs",
"repos_url": "https://api.github.com/users/harrisonbay/repos",
"events_url": "https://api.github.com/users/harrisonbay/events{/privacy}",
"received_events_url": "https://api.github.com/users/harrisonbay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! From what I'm seeing, the error comes from the `tokenizers` library instead:\r\n```\r\n[...]\r\n File \"/homes/gws/hcybay/miniconda3/envs/test2/lib/python3.8/site-packages/transformers-4.4.2-py3.8.egg/transformers/tokenization_utils_fast.py\", line 25, in <module>\r\n File \"/homes/gws/hcybay/miniconda3/envs/test2/lib/python3.8/site-packages/tokenizers/__init__.py\", line 79, in <module>\r\n from .tokenizers import (\r\nImportError: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /homes/gws/hcybay/miniconda3/envs/test2/lib/python3.8/site-packages/tokenizers/tokenizers.cpython-38-x86_64-linux-gnu.so)\r\n```\r\n\r\nDo you mind opening an issue there? They'll probably be able to help out better.",
"Sure--sorry, didn't know which to open it in",
"Looks like I definitely should've searched the issues there first... https://github.com/huggingface/tokenizers/issues/585"
] | 1,617 | 1,617 | 1,617 | NONE | null | A fresh environment where I `conda install pytorch torchvision torchaudio -c pytorch` then `conda install transformers` produces a glibc2.18 error on CentOS 7.9 upon import with `python -c "from transformers import AutoTokenizer"`. I suspect this is a similar error to #2980, i.e., CentOS 7.9 might just be incompatible. However, a different fresh environment where I `pip install torch torchvision torchaudio` then `pip install transformers` does not produce any error upon import with `python -c "from transformers import AutoTokenizer"`.
## Environment info (pip-installed)
- `transformers` version: 4.4.2
- Platform: Linux-4.19.182-1.el7.retpoline.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: <fill in>
## Environment info (conda-installed)
In fact, this command doesn't even work. See attached `cli_error_trace.txt`.
### Who can help
I'm not sure if I did this right, since this seems to be more of a lower-level issue than an implementation issue.
- huggingface/transformers/blob/master/src/transformers/models/auto/tokenization_auto.py: @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
N/A
## To reproduce
This is all done on CentOS 7.9.
##### Steps to reproduce the good, pip-installed behavior:
1. conda create --name test python=3.8
2. conda activate test
3. pip install torch torchvision torchaudio
4. pip install transformers
5. python -c "from transformers import AutoTokenizer"
##### Steps to reproduce the bad, conda-installed behavior:
1. conda create --name test2 python=3.8
2. conda activate test2
3. conda install pytorch torchvision torchaudio -c pytorch
4. conda install -c huggingface transformers
5. python -c "from transformers import AutoTokenizer"
Additionally, I have attached the `environment.yml` files for both environments and also the trace for the `transformers-cli env` command and the trace for the import error (both for the `conda install`-ed environment). The traces look pretty similar, and it seems the issue is with the dependencies of tokenizers. The .yml files have an appended .txt extension since apparently GitHub doesn't support the .yml extension for uploaded files.
[environment_pip.yml.txt](https://github.com/huggingface/transformers/files/6239328/environment_pip.yml.txt)
[environment_conda.yml.txt](https://github.com/huggingface/transformers/files/6239327/environment_conda.yml.txt)
[cli_error_trace.txt](https://github.com/huggingface/transformers/files/6239326/cli_error_trace.txt)
[import_error_trace.txt](https://github.com/huggingface/transformers/files/6239329/import_error_trace.txt)
## Expected behavior
I would expect `conda install`-ing and `pip install`-ing to both work as intended. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11003/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11002 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11002/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11002/comments | https://api.github.com/repos/huggingface/transformers/issues/11002/events | https://github.com/huggingface/transformers/issues/11002 | 847,229,369 | MDU6SXNzdWU4NDcyMjkzNjk= | 11,002 | KeyError: 'gpt_neo' with EleutherAI/gpt-neo-1.3B | {
"login": "fatihbeyhan",
"id": 48058209,
"node_id": "MDQ6VXNlcjQ4MDU4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/48058209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fatihbeyhan",
"html_url": "https://github.com/fatihbeyhan",
"followers_url": "https://api.github.com/users/fatihbeyhan/followers",
"following_url": "https://api.github.com/users/fatihbeyhan/following{/other_user}",
"gists_url": "https://api.github.com/users/fatihbeyhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fatihbeyhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fatihbeyhan/subscriptions",
"organizations_url": "https://api.github.com/users/fatihbeyhan/orgs",
"repos_url": "https://api.github.com/users/fatihbeyhan/repos",
"events_url": "https://api.github.com/users/fatihbeyhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/fatihbeyhan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello! GPT Neo is available on the master branch, while you're installing the version v4.2.2.\r\n\r\nYou should change `pip install transformers` to `pip install git+https://github.com/huggingface/transformers` and reload your kernel",
"Doing so results in this error: \r\n\r\n```\r\n>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\jerkm\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transformers\\pipelines\\__init__.py\", line 540, in pipeline\r\n framework, model = infer_framework_load_model(\r\n File \"C:\\Users\\jerkm\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\transformers\\pipelines\\base.py\", line 235, in infer_framework_load_model\r\n raise ValueError(f\"Could not load model {model} with any of the following classes: {class_tuple}.\")\r\nValueError: Could not load model EleutherAI/gpt-neo-2.7B with any of the following classes: (<class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForCausalLM'>,).\r\n```",
"I think this was resolved when I installed pytorch"
] | 1,617 | 1,642 | 1,617 | NONE | null | I am trying out the new GPT Neo (GPT-3-style) checkpoint; however, I am getting an error both locally and on Google Colab.
Google Colab:
```
!pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
```
The error I get is:
```
KeyError Traceback (most recent call last)
<ipython-input-5-333740565b3a> in <module>()
3 from transformers import AutoTokenizer, AutoModelForCausalLM
4
----> 5 tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
6 model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
1 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
387 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
388 if "model_type" in config_dict:
--> 389 config_class = CONFIG_MAPPING[config_dict["model_type"]]
390 return config_class.from_dict(config_dict, **kwargs)
391 else:
KeyError: 'gpt_neo'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11002/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/11001 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11001/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11001/comments | https://api.github.com/repos/huggingface/transformers/issues/11001/events | https://github.com/huggingface/transformers/pull/11001 | 847,097,736 | MDExOlB1bGxSZXF1ZXN0NjA2MTM2NzQ2 | 11,001 | Add `examples/language_modeling/run_mlm_no_trainer.py` | {
"login": "hemildesai",
"id": 8195444,
"node_id": "MDQ6VXNlcjgxOTU0NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemildesai",
"html_url": "https://github.com/hemildesai",
"followers_url": "https://api.github.com/users/hemildesai/followers",
"following_url": "https://api.github.com/users/hemildesai/following{/other_user}",
"gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions",
"organizations_url": "https://api.github.com/users/hemildesai/orgs",
"repos_url": "https://api.github.com/users/hemildesai/repos",
"events_url": "https://api.github.com/users/hemildesai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemildesai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks again!",
"How to distributed training?~~~ In no trainer mlm",
"The same way as any other scripts: `python -m torch.distributed.launch --nproc_per_node xxx run_mlm_no_trainer.py --script_args`.",
"hello@sgugger , when I used multi-gpu, I got this error message:\r\n(basic_dl) root@PM00011093:/data/zhaoyichen/workplace/transformers-master/examples# python -m torch.distributed.launch \\\r\n> --nproc_per_node 2 pytorch/language-modeling/run_mlm_no_trainer.py \\\r\n> --dataset_name wikitext \\\r\n> --dataset_config_name wikitext-2-raw-v1 \\\r\n> --model_name_or_path roberta-base \\\r\n> --output_dir /tmp/test-mlm\r\n*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n*****************************************\r\nusage: run_mlm_no_trainer.py [-h] [--dataset_name DATASET_NAME]\r\n [--dataset_config_name DATASET_CONFIG_NAME]\r\n [--train_file TRAIN_FILE]\r\n [--validation_file VALIDATION_FILE]\r\n [--validation_split_percentage VALIDATION_SPLIT_PERCENTAGE]\r\n [--pad_to_max_length] --model_name_or_path\r\n MODEL_NAME_OR_PATH [--config_name CONFIG_NAME]\r\n [--tokenizer_name TOKENIZER_NAME]\r\n [--use_slow_tokenizer]\r\n [--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE]\r\n [--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE]\r\n [--learning_rate LEARNING_RATE]\r\n [--weight_decay WEIGHT_DECAY]\r\n [--num_train_epochs NUM_TRAIN_EPOCHS]\r\n [--max_train_steps MAX_TRAIN_STEPS]\r\n [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]\r\n [--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup}]\r\n [--num_warmup_steps NUM_WARMUP_STEPS]\r\n [--output_dir OUTPUT_DIR] [--seed SEED]\r\n [--model_type {clip,bigbird_pegasus,deit,luke,gpt_neo,big_bird,speech_to_text,vit,wav2vec2,m2m_100,convbert,led,blenderbot-small,retribert,mt5,t5,pegasus,marian,mbart,blenderbot,distilbert,albert,camembert,xlm-roberta,bart,longformer,roberta,layoutlm,squeezebert,bert,openai-gpt,gpt2,megatron-bert,mobilebert,transfo-xl,xlnet,flaubert,fsmt,xlm,ctrl,electra,reformer,funnel,lxmert,bert-generation,deberta,deberta-v2,dpr,xlm-prophetnet,prophetnet,mpnet,tapas,ibert}]\r\n [--max_seq_length MAX_SEQ_LENGTH]\r\n [--line_by_line LINE_BY_LINE]\r\n [--preprocessing_num_workers PREPROCESSING_NUM_WORKERS]\r\n [--overwrite_cache OVERWRITE_CACHE]\r\n [--mlm_probability MLM_PROBABILITY]\r\nrun_mlm_no_trainer.py: error: unrecognized arguments: --local_rank=0\r\nusage: run_mlm_no_trainer.py [-h] [--dataset_name DATASET_NAME]\r\n [--dataset_config_name DATASET_CONFIG_NAME]\r\n [--train_file TRAIN_FILE]\r\n [--validation_file VALIDATION_FILE]\r\n [--validation_split_percentage VALIDATION_SPLIT_PERCENTAGE]\r\n [--pad_to_max_length] --model_name_or_path\r\n MODEL_NAME_OR_PATH [--config_name CONFIG_NAME]\r\n [--tokenizer_name TOKENIZER_NAME]\r\n [--use_slow_tokenizer]\r\n [--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE]\r\n [--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE]\r\n [--learning_rate LEARNING_RATE]\r\n [--weight_decay WEIGHT_DECAY]\r\n [--num_train_epochs NUM_TRAIN_EPOCHS]\r\n [--max_train_steps MAX_TRAIN_STEPS]\r\n [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]\r\n [--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup}]\r\n [--num_warmup_steps NUM_WARMUP_STEPS]\r\n [--output_dir OUTPUT_DIR] [--seed SEED]\r\n [--model_type 
{clip,bigbird_pegasus,deit,luke,gpt_neo,big_bird,speech_to_text,vit,wav2vec2,m2m_100,convbert,led,blenderbot-small,retribert,mt5,t5,pegasus,marian,mbart,blenderbot,distilbert,albert,camembert,xlm-roberta,bart,longformer,roberta,layoutlm,squeezebert,bert,openai-gpt,gpt2,megatron-bert,mobilebert,transfo-xl,xlnet,flaubert,fsmt,xlm,ctrl,electra,reformer,funnel,lxmert,bert-generation,deberta,deberta-v2,dpr,xlm-prophetnet,prophetnet,mpnet,tapas,ibert}]\r\n [--max_seq_length MAX_SEQ_LENGTH]\r\n [--line_by_line LINE_BY_LINE]\r\n [--preprocessing_num_workers PREPROCESSING_NUM_WORKERS]\r\n [--overwrite_cache OVERWRITE_CACHE]\r\n [--mlm_probability MLM_PROBABILITY]\r\nrun_mlm_no_trainer.py: error: unrecognized arguments: --local_rank=1\r\nTraceback (most recent call last):\r\n File \"/root/anaconda3/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/root/anaconda3/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/data/zhaoyichen/pyvenv/basic_dl/lib/python3.6/site-packages/torch/distributed/launch.py\", line 261, in <module>\r\n main()\r\n File \"/data/zhaoyichen/pyvenv/basic_dl/lib/python3.6/site-packages/torch/distributed/launch.py\", line 257, in main\r\n cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/data/zhaoyichen/pyvenv/basic_dl/bin/python', '-u', 'pytorch/language-modeling/run_mlm_no_trainer.py', '--local_rank=1', '--dataset_name', 'wikitext', '--dataset_config_name', 'wikitext-2-raw-v1', '--model_name_or_path', 'roberta-base', '--output_dir', '/tmp/test-mlm']' returned non-zero exit status 2.\r\n\r\n",
"You need to launch it with `--use_env` when using the PyTorch launcher (or use `accelerate launch`)."
] | 1,617 | 1,621 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
This PR adds an example of finetuning a Masked Language Model (without using `Trainer`) to show the functionalities of the new accelerate library.
## Who can review?
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11001/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11001",
"html_url": "https://github.com/huggingface/transformers/pull/11001",
"diff_url": "https://github.com/huggingface/transformers/pull/11001.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11001.patch",
"merged_at": 1617230986000
} |
https://api.github.com/repos/huggingface/transformers/issues/11000 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/11000/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/11000/comments | https://api.github.com/repos/huggingface/transformers/issues/11000/events | https://github.com/huggingface/transformers/pull/11000 | 846,943,225 | MDExOlB1bGxSZXF1ZXN0NjA1OTk1MDA0 | 11,000 | In the group by length documentation length is misspelled as legnth | {
"login": "JohnnyC08",
"id": 10440346,
"node_id": "MDQ6VXNlcjEwNDQwMzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/10440346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnnyC08",
"html_url": "https://github.com/JohnnyC08",
"followers_url": "https://api.github.com/users/JohnnyC08/followers",
"following_url": "https://api.github.com/users/JohnnyC08/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnnyC08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnnyC08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnnyC08/subscriptions",
"organizations_url": "https://api.github.com/users/JohnnyC08/orgs",
"repos_url": "https://api.github.com/users/JohnnyC08/repos",
"events_url": "https://api.github.com/users/JohnnyC08/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnnyC08/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | In the group by length documentation length is misspelled as legnth
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/11000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/11000/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/11000",
"html_url": "https://github.com/huggingface/transformers/pull/11000",
"diff_url": "https://github.com/huggingface/transformers/pull/11000.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/11000.patch",
"merged_at": 1617229687000
} |
https://api.github.com/repos/huggingface/transformers/issues/10999 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10999/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10999/comments | https://api.github.com/repos/huggingface/transformers/issues/10999/events | https://github.com/huggingface/transformers/issues/10999 | 846,934,106 | MDU6SXNzdWU4NDY5MzQxMDY= | 10,999 | ROUGE Multiple References | {
"login": "dptam",
"id": 9755416,
"node_id": "MDQ6VXNlcjk3NTU0MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9755416?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dptam",
"html_url": "https://github.com/dptam",
"followers_url": "https://api.github.com/users/dptam/followers",
"following_url": "https://api.github.com/users/dptam/following{/other_user}",
"gists_url": "https://api.github.com/users/dptam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dptam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dptam/subscriptions",
"organizations_url": "https://api.github.com/users/dptam/orgs",
"repos_url": "https://api.github.com/users/dptam/repos",
"events_url": "https://api.github.com/users/dptam/events{/privacy}",
"received_events_url": "https://api.github.com/users/dptam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | It appears the current ROUGE metric computes the score with 1 reference per candidate. I was wondering if there is a way to compute ROUGE with multiple references per candidate? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10999/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10998 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10998/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10998/comments | https://api.github.com/repos/huggingface/transformers/issues/10998/events | https://github.com/huggingface/transformers/issues/10998 | 846,881,541 | MDU6SXNzdWU4NDY4ODE1NDE= | 10,998 | Get following error with EncoderDecoder model: TypeError: forward() got an unexpected keyword argument 'use_cache' | {
"login": "mandareln",
"id": 34381470,
"node_id": "MDQ6VXNlcjM0MzgxNDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34381470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mandareln",
"html_url": "https://github.com/mandareln",
"followers_url": "https://api.github.com/users/mandareln/followers",
"following_url": "https://api.github.com/users/mandareln/following{/other_user}",
"gists_url": "https://api.github.com/users/mandareln/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mandareln/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mandareln/subscriptions",
"organizations_url": "https://api.github.com/users/mandareln/orgs",
"repos_url": "https://api.github.com/users/mandareln/repos",
"events_url": "https://api.github.com/users/mandareln/events{/privacy}",
"received_events_url": "https://api.github.com/users/mandareln/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @mandareln \r\n\r\nYou should use `BertLMHeadModel` class if you want to use bert as decoder, here you are using `BertForMaskedLM` which is the reason for this error as it does not have the `use_cache` argument.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,621 | 1,621 | NONE | null | Hi
I am trying to create an EncoderDecoder model where I want to use a pre-trained encoder model and initialise the decoder from scratch. The code snippet follows.
```python
from transformers import AutoModel, BertConfig, BertForMaskedLM, EncoderDecoderModel

encoder = AutoModel.from_pretrained('bert-base-uncased')
decoder_config = BertConfig(vocab_size=vocabsize,
                            max_position_embeddings=max_length,
                            num_attention_heads=num_attention_heads,
                            num_hidden_layers=num_hidden_layers,
                            hidden_size=hidden_size,
                            type_vocab_size=1,
                            is_decoder=True)
decoder = BertForMaskedLM(config=decoder_config)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```
The model gets built without any errors, but when I try to make a forward pass, I get the error:
`TypeError: forward() got an unexpected keyword argument 'use_cache'.`
The following is the dummy forward pass:
```python
outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids)
```
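As suggested in the comments on this issue, the fix is to build the decoder from a class that has a causal-LM head, since `BertForMaskedLM.forward()` does not accept `use_cache`. A minimal sketch of the corrected construction (reusing `encoder`, `decoder_config`, and the imports from above):
```python
from transformers import BertLMHeadModel

# BertLMHeadModel is the BERT variant meant to be used as a decoder;
# its forward() accepts use_cache, unlike BertForMaskedLM.
decoder = BertLMHeadModel(config=decoder_config)
model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
```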
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10998/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10997 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10997/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10997/comments | https://api.github.com/repos/huggingface/transformers/issues/10997/events | https://github.com/huggingface/transformers/pull/10997 | 846,859,033 | MDExOlB1bGxSZXF1ZXN0NjA1OTIyMjEy | 10,997 | [Docs] Add blog to BigBird docs | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @sgugger "
] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10997/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10997",
"html_url": "https://github.com/huggingface/transformers/pull/10997",
"diff_url": "https://github.com/huggingface/transformers/pull/10997.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10997.patch",
"merged_at": 1617204960000
} |
https://api.github.com/repos/huggingface/transformers/issues/10996 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10996/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10996/comments | https://api.github.com/repos/huggingface/transformers/issues/10996/events | https://github.com/huggingface/transformers/issues/10996 | 846,768,753 | MDU6SXNzdWU4NDY3Njg3NTM= | 10,996 | GPT Neo, Print Most Probable Next Word: String Indices Must Be Integers | {
"login": "BigSalmon2",
"id": 61605789,
"node_id": "MDQ6VXNlcjYxNjA1Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/61605789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BigSalmon2",
"html_url": "https://github.com/BigSalmon2",
"followers_url": "https://api.github.com/users/BigSalmon2/followers",
"following_url": "https://api.github.com/users/BigSalmon2/following{/other_user}",
"gists_url": "https://api.github.com/users/BigSalmon2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BigSalmon2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BigSalmon2/subscriptions",
"organizations_url": "https://api.github.com/users/BigSalmon2/orgs",
"repos_url": "https://api.github.com/users/BigSalmon2/repos",
"events_url": "https://api.github.com/users/BigSalmon2/events{/privacy}",
"received_events_url": "https://api.github.com/users/BigSalmon2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're doing something wrong here:\r\n```py\r\nlogits, past = model(myinput, past_key_values = past)\r\n```\r\n\r\nThe model returns a dict. Your `logits` and `past` are the keys of the dicts.\r\n\r\nIf you want the values, then either do:\r\n```py\r\noutput = model(myinput, past_key_values = past)\r\nlogits = output.logits\r\npast = output.past_key_values\r\n```\r\nor\r\n```py\r\nlogits, past = model(myinput, past_key_values = past, return_dict=False)\r\n```\r\n\r\nThis code must have worked with versions <=3. Please read the migration guide relative to switching to version 4 [here](https://huggingface.co/transformers/migration.html#switching-the-return-dict-argument-to-true-by-default)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null |
This code is supposed to generate the next most probable word. However, the following problem arises.
```
!pip install git+https://github.com/huggingface/transformers.git
import torch
from transformers import GPTNeoForCausalLM, AutoTokenizer
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
prompt = """In the"""
prompt = prompt.strip()
text = tokenizer.encode(prompt)
myinput, past = torch.tensor([text]), None
logits, past = model(myinput, past_key_values = past)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(10)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
for i in range(10):
m = (best_words[i])
print(m)
```
`TypeError: string indices must be integers` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10996/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10995 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10995/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10995/comments | https://api.github.com/repos/huggingface/transformers/issues/10995/events | https://github.com/huggingface/transformers/pull/10995 | 846,720,261 | MDExOlB1bGxSZXF1ZXN0NjA1Nzk2MDY5 | 10,995 | [Notebook] add BigBird trivia qa notebook | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10995",
"html_url": "https://github.com/huggingface/transformers/pull/10995",
"diff_url": "https://github.com/huggingface/transformers/pull/10995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10995.patch",
"merged_at": 1617199257000
} |
https://api.github.com/repos/huggingface/transformers/issues/10994 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10994/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10994/comments | https://api.github.com/repos/huggingface/transformers/issues/10994/events | https://github.com/huggingface/transformers/pull/10994 | 846,528,133 | MDExOlB1bGxSZXF1ZXN0NjA1NjE4MDI2 | 10,994 | Fix the checkpoint for I-BERT | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | The I-BERT checkpoint was not configured correctly in the `_CHECKPOINT_FOR_DOC`
Fixes https://github.com/huggingface/transformers/issues/10990 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10994/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10994",
"html_url": "https://github.com/huggingface/transformers/pull/10994",
"diff_url": "https://github.com/huggingface/transformers/pull/10994.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10994.patch",
"merged_at": 1617192172000
} |
https://api.github.com/repos/huggingface/transformers/issues/10993 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10993/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10993/comments | https://api.github.com/repos/huggingface/transformers/issues/10993/events | https://github.com/huggingface/transformers/pull/10993 | 846,519,849 | MDExOlB1bGxSZXF1ZXN0NjA1NjEwMzI4 | 10,993 | [GPT Neo] fix example in config | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
Fix example in doc
Thanks a lot for spotting this @NielsRogge
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10993/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10993",
"html_url": "https://github.com/huggingface/transformers/pull/10993",
"diff_url": "https://github.com/huggingface/transformers/pull/10993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10993.patch",
"merged_at": 1617192537000
} |
https://api.github.com/repos/huggingface/transformers/issues/10992 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10992/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10992/comments | https://api.github.com/repos/huggingface/transformers/issues/10992/events | https://github.com/huggingface/transformers/pull/10992 | 846,515,474 | MDExOlB1bGxSZXF1ZXN0NjA1NjA2MTk0 | 10,992 | GPT Neo configuration needs to be set to use GPT2 tokenizer | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | The tokenizer wasn't correctly set and ended up making ~200 slow tests fail. The run in question is here: https://github.com/huggingface/transformers/runs/2232656252?check_suite_focus=true
This PR fixes that! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10992/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10992/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10992",
"html_url": "https://github.com/huggingface/transformers/pull/10992",
"diff_url": "https://github.com/huggingface/transformers/pull/10992.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10992.patch",
"merged_at": 1617192200000
} |
https://api.github.com/repos/huggingface/transformers/issues/10991 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10991/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10991/comments | https://api.github.com/repos/huggingface/transformers/issues/10991/events | https://github.com/huggingface/transformers/pull/10991 | 846,506,078 | MDExOlB1bGxSZXF1ZXN0NjA1NTk3NDQw | 10,991 | Add BigBirdPegasus | {
"login": "thevasudevgupta",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thevasudevgupta",
"html_url": "https://github.com/thevasudevgupta",
"followers_url": "https://api.github.com/users/thevasudevgupta/followers",
"following_url": "https://api.github.com/users/thevasudevgupta/following{/other_user}",
"gists_url": "https://api.github.com/users/thevasudevgupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thevasudevgupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thevasudevgupta/subscriptions",
"organizations_url": "https://api.github.com/users/thevasudevgupta/orgs",
"repos_url": "https://api.github.com/users/thevasudevgupta/repos",
"events_url": "https://api.github.com/users/thevasudevgupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/thevasudevgupta/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For running conversion script for `BigBirdPegasus`:\r\n\r\n```shell\r\npython3 src/transformers/models/bigbird_pegasus/convert_bigbird_pegasus_tf_to_pytorch.py --tf_ckpt_path src/tf_ckpt/bigbird-pegasus-large-arxiv/model.ckpt-300000 --save_dir src/google/bigbird-pegasus-large-arxiv\r\n```\r\n\r\nFor running conversion script for bigbird-roberta `EncoderDecoderModel`:\r\n\r\n```shell\r\npython3 src/transformers/models/bigbird_pegasus/convert_bigbird_roberta_tf_to_pytorch.py --tf_ckpt_path src/tf_ckpt/bigbird-roberta-arxiv/model.ckpt-300000 --save_dir src/google/bigbird-roberta-arxiv\r\n```",
"@LysandreJik, yes we are planning to add this [notebook](https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb) with a few modifications.",
"@patrickvonplaten, Test failing on CircleCi: `tests/test_modeling_bigbird_pegasus.py::BigBirdPegasusStandaloneDecoderModelTest::test_decoder_model_attn_mask_past` is passing for me locally.\r\n\r\nEverything else is fixed!!"
] | 1,617 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
This PR adds Google's BigBird-Pegasus, extending #10183.
The following checkpoints will be added:
- [x] [bigbird-pegasus-large-pubmed](https://huggingface.co/google/bigbird-pegasus-large-pubmed)
- [x] [bigbird-pegasus-large-arxiv](https://huggingface.co/google/bigbird-pegasus-large-arxiv)
- [x] [bigbird-pegasus-large-bigpatent](https://huggingface.co/google/bigbird-pegasus-large-bigpatent)
It is verified that the uploaded models work correctly; see:
- BigBird Pegasus Arxiv: https://colab.research.google.com/drive/1ntBBkiDgccbKwKmOECB8VWEFeFmZebLN?usp=sharing
- BigBird Pegasus BigPatent: https://colab.research.google.com/drive/1RKI0BG3JUy4Hn8VdIzNLE5QduwtaiXYZ?usp=sharing
- BigBird Pegasus Pubmed: https://colab.research.google.com/drive/1LebnFVp4unqZWRx5gez1hVyqR9cibIoH?usp=sharing
Here is a notebook showing how well BigBirdPegasus works on long-document summarization: https://colab.research.google.com/github/vasudevgupta7/bigbird/blob/main/notebooks/bigbird_pegasus_evaluation.ipynb
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10991/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 6,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10991/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10991",
"html_url": "https://github.com/huggingface/transformers/pull/10991",
"diff_url": "https://github.com/huggingface/transformers/pull/10991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10991.patch",
"merged_at": 1620372463000
} |
https://api.github.com/repos/huggingface/transformers/issues/10990 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10990/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10990/comments | https://api.github.com/repos/huggingface/transformers/issues/10990/events | https://github.com/huggingface/transformers/issues/10990 | 846,506,062 | MDU6SXNzdWU4NDY1MDYwNjI= | 10,990 | Can't find ibert-roberta-base model | {
"login": "shon-otmazgin",
"id": 17564565,
"node_id": "MDQ6VXNlcjE3NTY0NTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/17564565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shon-otmazgin",
"html_url": "https://github.com/shon-otmazgin",
"followers_url": "https://api.github.com/users/shon-otmazgin/followers",
"following_url": "https://api.github.com/users/shon-otmazgin/following{/other_user}",
"gists_url": "https://api.github.com/users/shon-otmazgin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shon-otmazgin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shon-otmazgin/subscriptions",
"organizations_url": "https://api.github.com/users/shon-otmazgin/orgs",
"repos_url": "https://api.github.com/users/shon-otmazgin/repos",
"events_url": "https://api.github.com/users/shon-otmazgin/events{/privacy}",
"received_events_url": "https://api.github.com/users/shon-otmazgin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @shon-otmazgin, here's the model: https://huggingface.co/kssteven/ibert-roberta-base\r\n\r\nThe documentation is unfortunately wrong. I'm updating it.",
"Fixing it in https://github.com/huggingface/transformers/pull/10994",
"Hello @LysandreJik, \r\nWe should specify `kssteven/ibert-roberta-base` in `from_pretrained` function?",
"Yes, that's right! That's the checkpoint you're looking for.\r\n\r\nThe docs are now updated on `master`, and the next release (next few days) will have them. ",
"Thanks @LysandreJik\r\n\r\nAfter reinstalling from source:\r\n\r\npython\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/sotmazgi/PycharmProjects/s2e-coref/ibert_test.py\", line 3, in <module>\r\n tokenizer = RobertaTokenizer.from_pretrained('kssteven/ibert-roberta-base')\r\n File \"/home/sotmazgi/PycharmProjects/s2e-coref/venv/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1705, in from_pretrained\r\n resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs\r\n File \"/home/sotmazgi/PycharmProjects/s2e-coref/venv/lib/python3.6/site-packages/transformers/tokenization_utils_base.py\", line 1776, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/home/sotmazgi/PycharmProjects/s2e-coref/venv/lib/python3.6/site-packages/transformers/models/roberta/tokenization_roberta.py\", line 171, in __init__\r\n **kwargs,\r\n File \"/home/sotmazgi/PycharmProjects/s2e-coref/venv/lib/python3.6/site-packages/transformers/models/gpt2/tokenization_gpt2.py\", line 179, in __init__\r\n with open(vocab_file, encoding=\"utf-8\") as vocab_handle:\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n```",
"Ah, it seems that the I-BERT authors have not uploaded some slow tokenizer files. Can you try it with a `RobertaTokenizerFast` instead of a `RobertaTokenizer` and let me know if it works for you?",
"Yes thank you"
] | 1,617 | 1,617 | 1,617 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Linux-4.15.0-1109-azure-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.13
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Models:
- ibert: @kssteven418
Documentation:
- @sgugger
## Information
Model I am using: I-BERT
The problem arises when using:
* [X] the official example scripts: (give details below)
## To reproduce
From [documentation](https://huggingface.co/transformers/model_doc/ibert.html):
```python
from transformers import RobertaTokenizer, IBertModel
import torch
tokenizer = RobertaTokenizer.from_pretrained('ibert-roberta-base')
model = IBertModel.from_pretrained('ibert-roberta-base')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
Steps to reproduce the behavior:
1. install pytorch and transformers
2. run the code example from docs
```python
Traceback (most recent call last):
File "/home/sotmazgi/PycharmProjects/s2e-coref/ibert_test.py", line 3, in <module>
tokenizer = RobertaTokenizer.from_pretrained('ibert-roberta-base')
File "/home/sotmazgi/PycharmProjects/s2e-coref/venv/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1693, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load tokenizer for 'ibert-roberta-base'. Make sure that:
- 'ibert-roberta-base' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'ibert-roberta-base' is the correct path to a directory containing relevant tokenizer files
```
## Expected behavior
Get the embedding for the example sentence.
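For completeness, a minimal sketch of the working call per this thread's resolution (the `kssteven/ibert-roberta-base` checkpoint and the fast tokenizer both come from the comments):
```python
from transformers import RobertaTokenizerFast, IBertModel

tokenizer = RobertaTokenizerFast.from_pretrained("kssteven/ibert-roberta-base")
model = IBertModel.from_pretrained("kssteven/ibert-roberta-base")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
last_hidden_states = model(**inputs).last_hidden_state
```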
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10990/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10989 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10989/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10989/comments | https://api.github.com/repos/huggingface/transformers/issues/10989/events | https://github.com/huggingface/transformers/pull/10989 | 846,367,862 | MDExOlB1bGxSZXF1ZXN0NjA1NDY4Nzkw | 10,989 | Fixed some typos and removed legacy url | {
"login": "WybeKoper",
"id": 40920213,
"node_id": "MDQ6VXNlcjQwOTIwMjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/40920213?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WybeKoper",
"html_url": "https://github.com/WybeKoper",
"followers_url": "https://api.github.com/users/WybeKoper/followers",
"following_url": "https://api.github.com/users/WybeKoper/following{/other_user}",
"gists_url": "https://api.github.com/users/WybeKoper/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WybeKoper/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WybeKoper/subscriptions",
"organizations_url": "https://api.github.com/users/WybeKoper/orgs",
"repos_url": "https://api.github.com/users/WybeKoper/repos",
"events_url": "https://api.github.com/users/WybeKoper/events{/privacy}",
"received_events_url": "https://api.github.com/users/WybeKoper/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for doing this!"
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
Removed a legacy URL to a Colab notebook in examples/multiple-choice/README.md
Fixed some typos.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
@patil-suraj
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10989",
"html_url": "https://github.com/huggingface/transformers/pull/10989",
"diff_url": "https://github.com/huggingface/transformers/pull/10989.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10989.patch",
"merged_at": 1617189795000
} |
https://api.github.com/repos/huggingface/transformers/issues/10988 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10988/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10988/comments | https://api.github.com/repos/huggingface/transformers/issues/10988/events | https://github.com/huggingface/transformers/issues/10988 | 846,343,138 | MDU6SXNzdWU4NDYzNDMxMzg= | 10,988 | unable to use multiple GPUs with HF integration of DeepSpeed on Jupyter notebooks | {
"login": "saichandrapandraju",
"id": 41769919,
"node_id": "MDQ6VXNlcjQxNzY5OTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41769919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saichandrapandraju",
"html_url": "https://github.com/saichandrapandraju",
"followers_url": "https://api.github.com/users/saichandrapandraju/followers",
"following_url": "https://api.github.com/users/saichandrapandraju/following{/other_user}",
"gists_url": "https://api.github.com/users/saichandrapandraju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saichandrapandraju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saichandrapandraju/subscriptions",
"organizations_url": "https://api.github.com/users/saichandrapandraju/orgs",
"repos_url": "https://api.github.com/users/saichandrapandraju/repos",
"events_url": "https://api.github.com/users/saichandrapandraju/events{/privacy}",
"received_events_url": "https://api.github.com/users/saichandrapandraju/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"That's correct. \r\n\r\nYou need a separate process for each gpu under DeepSpeed for communications to work. I will update the docs to make this clear.\r\n\r\nIf you want to use multiple gpus you must use the launcher. So you can still use the notebook to set things up, but the training must happen in external process, e.g. see:\r\n\r\nhttps://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb\r\n\r\nbut edit the launcher line to use `deepspeed --num_gpus 2`\r\n\r\nI will close this for now as it's a clear \"this is not possible due to the DeepSpeed design\", but if you have some further questions please don't hesitate to follow up."
] | 1,617 | 1,617 | 1,617 | NONE | null | Hi ,
I'm using HF integration of DeepSpeed in my Jupyter Notebook by setting following env variables as suggested [here](https://huggingface.co/transformers/main_classes/trainer.html#deployment-in-notebooks)
```
import os

os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '9889'
os.environ['RANK'] = "0"
os.environ['LOCAL_RANK'] = "0"
os.environ['WORLD_SIZE'] = "1"
os.environ['NCCL_SOCKET_IFNAME'] = 'lo'  # because of my kubeflow setup
```
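A minimal sketch (not part of the original report) of what these variables imply: they are the standard `torch.distributed` environment variables, so with `WORLD_SIZE=1` only a single process is initialized:
```python
# Hypothetical sanity check, assuming the environment variables above are set.
import torch.distributed as dist

# With the default env:// init method, init_process_group reads MASTER_ADDR,
# MASTER_PORT, RANK and WORLD_SIZE from the environment.
dist.init_process_group(backend="nccl")
print(dist.get_world_size())  # -> 1, i.e. a single process
```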
With this setup I'm unable to utilize both GPUs that I have, and here's the log info before training starts -
```
[2021-03-31 09:49:02,510] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.13+7fcc891, git-hash=7fcc891, git-branch=master
[2021-03-31 09:49:02,540] [INFO] [engine.py:80:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1
[2021-03-31 09:49:05,758] [INFO] [engine.py:608:_configure_optimizer] Using DeepSpeed Optimizer param name adamw as basic optimizer
[2021-03-31 09:49:05,760] [INFO] [engine.py:612:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2021-03-31 09:49:05,761] [INFO] [logging.py:60:log_dist] [Rank 0] Creating fp16 ZeRO stage 2 optimizer
[2021-03-31 09:49:05,764] [INFO] [stage2.py:130:__init__] Reduce bucket size 150000000.0
[2021-03-31 09:49:05,765] [INFO] [stage2.py:131:__init__] Allgather bucket size 150000000.0
[2021-03-31 09:49:05,766] [INFO] [stage2.py:132:__init__] CPU Offload: True
[2021-03-31 09:49:12,585] [INFO] [stage2.py:399:__init__] optimizer state initialized
[2021-03-31 09:49:12,588] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw
[2021-03-31 09:49:12,589] [INFO] [engine.py:445:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupLR
[2021-03-31 09:49:12,590] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupLR object at 0x7fef53ed1d68>
[2021-03-31 09:49:12,591] [INFO] [logging.py:60:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[[0.8, 0.999]]
[2021-03-31 09:49:12,593] [INFO] [config.py:737:print] DeepSpeedEngine configuration:
[2021-03-31 09:49:12,594] [INFO] [config.py:741:print] activation_checkpointing_config {
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"partition_activations": false,
"profile": false,
"synchronize_checkpoint_boundary": false
}
[2021-03-31 09:49:12,594] [INFO] [config.py:741:print] allreduce_always_fp32 ........ False
[2021-03-31 09:49:12,595] [INFO] [config.py:741:print] amp_enabled .................. False
[2021-03-31 09:49:12,596] [INFO] [config.py:741:print] amp_params ................... False
[2021-03-31 09:49:12,596] [INFO] [config.py:741:print] checkpoint_tag_validation_enabled True
[2021-03-31 09:49:12,597] [INFO] [config.py:741:print] checkpoint_tag_validation_fail False
[2021-03-31 09:49:12,600] [INFO] [config.py:741:print] disable_allgather ............ False
[2021-03-31 09:49:12,601] [INFO] [config.py:741:print] dump_state ................... False
[2021-03-31 09:49:12,602] [INFO] [config.py:741:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}
[2021-03-31 09:49:12,603] [INFO] [config.py:741:print] elasticity_enabled ........... False
[2021-03-31 09:49:12,603] [INFO] [config.py:741:print] flops_profiler_config ........ {
"detailed": true,
"enabled": false,
"module_depth": -1,
"profile_step": 1,
"top_modules": 3
}
[2021-03-31 09:49:12,604] [INFO] [config.py:741:print] fp16_enabled ................. True
[2021-03-31 09:49:12,605] [INFO] [config.py:741:print] global_rank .................. 0
[2021-03-31 09:49:12,605] [INFO] [config.py:741:print] gradient_accumulation_steps .. 1
[2021-03-31 09:49:12,606] [INFO] [config.py:741:print] gradient_clipping ............ 1.0
[2021-03-31 09:49:12,607] [INFO] [config.py:741:print] gradient_predivide_factor .... 1.0
[2021-03-31 09:49:12,607] [INFO] [config.py:741:print] initial_dynamic_scale ........ 4294967296
[2021-03-31 09:49:12,608] [INFO] [config.py:741:print] loss_scale ................... 0
[2021-03-31 09:49:12,609] [INFO] [config.py:741:print] memory_breakdown ............. False
[2021-03-31 09:49:12,610] [INFO] [config.py:741:print] optimizer_legacy_fusion ...... False
[2021-03-31 09:49:12,610] [INFO] [config.py:741:print] optimizer_name ............... adamw
[2021-03-31 09:49:12,611] [INFO] [config.py:741:print] optimizer_params ............. {'lr': 3e-05, 'betas': [0.8, 0.999], 'eps': 1e-08, 'weight_decay': 3e-07}
[2021-03-31 09:49:12,612] [INFO] [config.py:741:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-03-31 09:49:12,612] [INFO] [config.py:741:print] pld_enabled .................. False
[2021-03-31 09:49:12,613] [INFO] [config.py:741:print] pld_params ................... False
[2021-03-31 09:49:12,614] [INFO] [config.py:741:print] prescale_gradients ........... False
[2021-03-31 09:49:12,616] [INFO] [config.py:741:print] scheduler_name ............... WarmupLR
[2021-03-31 09:49:12,616] [INFO] [config.py:741:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 500}
[2021-03-31 09:49:12,617] [INFO] [config.py:741:print] sparse_attention ............. None
[2021-03-31 09:49:12,618] [INFO] [config.py:741:print] sparse_gradients_enabled ..... False
[2021-03-31 09:49:12,618] [INFO] [config.py:741:print] steps_per_print .............. 2000
[2021-03-31 09:49:12,619] [INFO] [config.py:741:print] tensorboard_enabled .......... False
[2021-03-31 09:49:12,620] [INFO] [config.py:741:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-03-31 09:49:12,620] [INFO] [config.py:741:print] tensorboard_output_path ......
[2021-03-31 09:49:12,621] [INFO] [config.py:741:print] train_batch_size ............. 4
[2021-03-31 09:49:12,622] [INFO] [config.py:741:print] train_micro_batch_size_per_gpu 4
[2021-03-31 09:49:12,622] [INFO] [config.py:741:print] wall_clock_breakdown ......... False
[2021-03-31 09:49:12,623] [INFO] [config.py:741:print] world_size ................... 1
[2021-03-31 09:49:12,624] [INFO] [config.py:741:print] zero_allow_untested_optimizer False
[2021-03-31 09:49:12,625] [INFO] [config.py:741:print] zero_config .................. {
"allgather_bucket_size": 150000000.0,
"allgather_partitions": true,
"contiguous_gradients": true,
"cpu_offload": true,
"cpu_offload_params": false,
"cpu_offload_use_pin_memory": false,
"elastic_checkpoint": true,
"gather_fp16_weights_on_model_save": false,
"load_from_fp32_weights": true,
"max_live_parameters": 1000000000,
"max_reuse_distance": 1000000000,
"overlap_comm": true,
"param_persistence_threshold": 100000,
"prefetch_bucket_size": 50000000,
"reduce_bucket_size": 150000000.0,
"reduce_scatter": true,
"stage": 2,
"sub_group_size": 1000000000000
}
[2021-03-31 09:49:12,625] [INFO] [config.py:741:print] zero_enabled ................. True
[2021-03-31 09:49:12,626] [INFO] [config.py:741:print] zero_optimization_stage ...... 2
[2021-03-31 09:49:12,628] [INFO] [config.py:748:print] json = {
"fp16":{
"enabled":true,
"hysteresis":2,
"loss_scale":0,
"loss_scale_window":1000,
"min_loss_scale":1
},
"gradient_accumulation_steps":1,
"gradient_clipping":1.0,
"optimizer":{
"params":{
"betas":[
0.8,
0.999
],
"eps":1e-08,
"lr":3e-05,
"weight_decay":3e-07
},
"type":"AdamW"
},
"scheduler":{
"params":{
"warmup_max_lr":3e-05,
"warmup_min_lr":0,
"warmup_num_steps":500
},
"type":"WarmupLR"
},
"steps_per_print":2000,
"train_micro_batch_size_per_gpu":4,
"wall_clock_breakdown":false,
"zero_optimization":{
"allgather_bucket_size":150000000.0,
"allgather_partitions":true,
"contiguous_gradients":true,
"cpu_offload":true,
"overlap_comm":true,
"reduce_bucket_size":150000000.0,
"reduce_scatter":true,
"stage":2
}
}
```
But both GPUs were used when I converted my notebook to a Python script and ran it with the command `!NCCL_SOCKET_IFNAME=lo deepspeed Deberta_V2_XXLarge.py --deepspeed ds_config.json`; here's the log -
```
[2021-03-31 09:26:36,687] [WARNING] [runner.py:117:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2021-03-31 09:26:36,714] [INFO] [runner.py:358:main] cmd = /usr/bin/python3 -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 Deberta_V2_XXLarge.py --deepspeed ds_config.json
[2021-03-31 09:26:38,357] [INFO] [launch.py:73:main] 0 NCCL_SOCKET_IFNAME lo
[2021-03-31 09:26:38,357] [INFO] [launch.py:80:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2021-03-31 09:26:38,357] [INFO] [launch.py:89:main] nnodes=1, num_local_procs=2, node_rank=0
[2021-03-31 09:26:38,357] [INFO] [launch.py:101:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2021-03-31 09:26:38,358] [INFO] [launch.py:102:main] dist_world_size=2
[2021-03-31 09:26:38,358] [INFO] [launch.py:105:main] Setting CUDA_VISIBLE_DEVICES=0,1
2021-03-31 09:26:40.269001: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-03-31 09:26:40.269004: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
[2021-03-31 09:27:27,272] [INFO] [distributed.py:47:init_distributed] Initializing torch distributed with backend: nccl
[2021-03-31 09:27:31,495] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.3.13+7fcc891, git-hash=7fcc891, git-branch=master
[2021-03-31 09:27:32,428] [INFO] [engine.py:80:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2
[2021-03-31 09:27:32,834] [INFO] [engine.py:80:_initialize_parameter_parallel_groups] data_parallel_size: 2, parameter_parallel_size: 2
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.000030, betas=(0.800000, 0.999000), weight_decay=0.000000, adam_w=1
[2021-03-31 09:27:36,438] [INFO] [engine.py:608:_configure_optimizer] Using DeepSpeed Optimizer param name adamw as basic optimizer
[2021-03-31 09:27:36,438] [INFO] [engine.py:612:_configure_optimizer] DeepSpeed Basic Optimizer = DeepSpeedCPUAdam
Checking ZeRO support for optimizer=DeepSpeedCPUAdam type=<class 'deepspeed.ops.adam.cpu_adam.DeepSpeedCPUAdam'>
[2021-03-31 09:27:36,439] [INFO] [logging.py:60:log_dist] [Rank 0] Creating fp16 ZeRO stage 2 optimizer
Adam Optimizer #0 is created with AVX512 arithmetic capability.
Config: alpha=0.000030, betas=(0.800000, 0.999000), weight_decay=0.000000, adam_w=1
[2021-03-31 09:27:36,441] [INFO] [stage2.py:130:__init__] Reduce bucket size 150000000.0
[2021-03-31 09:27:36,441] [INFO] [stage2.py:131:__init__] Allgather bucket size 150000000.0
[2021-03-31 09:27:36,441] [INFO] [stage2.py:132:__init__] CPU Offload: True
[2021-03-31 09:27:40,524] [INFO] [stage2.py:399:__init__] optimizer state initialized
[2021-03-31 09:27:40,530] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw
[2021-03-31 09:27:40,533] [INFO] [engine.py:445:_configure_lr_scheduler] DeepSpeed using configured LR scheduler = WarmupLR
[2021-03-31 09:27:40,534] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed LR Scheduler = <deepspeed.runtime.lr_schedules.WarmupLR object at 0x7f105c6a0b00>
[2021-03-31 09:27:40,535] [INFO] [logging.py:60:log_dist] [Rank 0] step=0, skipped=0, lr=[3e-05], mom=[[0.8, 0.999]]
[2021-03-31 09:27:40,536] [INFO] [config.py:737:print] DeepSpeedEngine configuration:
[2021-03-31 09:27:40,537] [INFO] [config.py:741:print] activation_checkpointing_config {
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,
"partition_activations": false,
"profile": false,
"synchronize_checkpoint_boundary": false
}
[2021-03-31 09:27:40,537] [INFO] [config.py:741:print] allreduce_always_fp32 ........ False
[2021-03-31 09:27:40,538] [INFO] [config.py:741:print] amp_enabled .................. False
[2021-03-31 09:27:40,538] [INFO] [config.py:741:print] amp_params ................... False
[2021-03-31 09:27:40,538] [INFO] [config.py:741:print] checkpoint_tag_validation_enabled True
[2021-03-31 09:27:40,538] [INFO] [config.py:741:print] checkpoint_tag_validation_fail False
[2021-03-31 09:27:40,538] [INFO] [config.py:741:print] disable_allgather ............ False
[2021-03-31 09:27:40,538] [INFO] [config.py:741:print] dump_state ................... False
[2021-03-31 09:27:40,538] [INFO] [config.py:741:print] dynamic_loss_scale_args ...... {'init_scale': 4294967296, 'scale_window': 1000, 'delayed_shift': 2, 'min_scale': 1}
[2021-03-31 09:27:40,539] [INFO] [config.py:741:print] elasticity_enabled ........... False
[2021-03-31 09:27:40,539] [INFO] [config.py:741:print] flops_profiler_config ........ {
"detailed": true,
"enabled": false,
"module_depth": -1,
"profile_step": 1,
"top_modules": 3
}
[2021-03-31 09:27:40,540] [INFO] [config.py:741:print] fp16_enabled ................. True
[2021-03-31 09:27:40,540] [INFO] [config.py:741:print] global_rank .................. 0
[2021-03-31 09:27:40,540] [INFO] [config.py:741:print] gradient_accumulation_steps .. 1
[2021-03-31 09:27:40,540] [INFO] [config.py:741:print] gradient_clipping ............ 1.0
[2021-03-31 09:27:40,540] [INFO] [config.py:741:print] gradient_predivide_factor .... 1.0
[2021-03-31 09:27:40,540] [INFO] [config.py:741:print] initial_dynamic_scale ........ 4294967296
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] loss_scale ................... 0
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] memory_breakdown ............. False
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] optimizer_legacy_fusion ...... False
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] optimizer_name ............... adamw
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] optimizer_params ............. {'lr': 3e-05, 'betas': [0.8, 0.999], 'eps': 1e-08, 'weight_decay': 3e-07}
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] pld_enabled .................. False
[2021-03-31 09:27:40,541] [INFO] [config.py:741:print] pld_params ................... False
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] prescale_gradients ........... False
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] scheduler_name ............... WarmupLR
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] scheduler_params ............. {'warmup_min_lr': 0, 'warmup_max_lr': 3e-05, 'warmup_num_steps': 500}
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] sparse_attention ............. None
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] sparse_gradients_enabled ..... False
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] steps_per_print .............. 2000
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] tensorboard_enabled .......... False
[2021-03-31 09:27:40,542] [INFO] [config.py:741:print] tensorboard_job_name ......... DeepSpeedJobName
[2021-03-31 09:27:40,543] [INFO] [config.py:741:print] tensorboard_output_path ......
[2021-03-31 09:27:40,543] [INFO] [config.py:741:print] train_batch_size ............. 8
[2021-03-31 09:27:40,543] [INFO] [config.py:741:print] train_micro_batch_size_per_gpu 4
[2021-03-31 09:27:40,543] [INFO] [config.py:741:print] wall_clock_breakdown ......... False
[2021-03-31 09:27:40,543] [INFO] [config.py:741:print] world_size ................... 2
[2021-03-31 09:27:40,543] [INFO] [config.py:741:print] zero_allow_untested_optimizer False
[2021-03-31 09:27:40,544] [INFO] [config.py:741:print] zero_config .................. {
"allgather_bucket_size": 150000000.0,
"allgather_partitions": true,
"contiguous_gradients": true,
"cpu_offload": true,
"cpu_offload_params": false,
"cpu_offload_use_pin_memory": false,
"elastic_checkpoint": true,
"gather_fp16_weights_on_model_save": false,
"load_from_fp32_weights": true,
"max_live_parameters": 1000000000,
"max_reuse_distance": 1000000000,
"overlap_comm": true,
"param_persistence_threshold": 100000,
"prefetch_bucket_size": 50000000,
"reduce_bucket_size": 150000000.0,
"reduce_scatter": true,
"stage": 2,
"sub_group_size": 1000000000000
}
[2021-03-31 09:27:40,544] [INFO] [config.py:741:print] zero_enabled ................. True
[2021-03-31 09:27:40,545] [INFO] [config.py:741:print] zero_optimization_stage ...... 2
[2021-03-31 09:27:40,546] [INFO] [config.py:748:print] json = {
"fp16":{
"enabled":true,
"hysteresis":2,
"loss_scale":0,
"loss_scale_window":1000,
"min_loss_scale":1
},
"gradient_accumulation_steps":1,
"gradient_clipping":1.0,
"optimizer":{
"params":{
"betas":[
0.8,
0.999
],
"eps":1e-08,
"lr":3e-05,
"weight_decay":3e-07
},
"type":"AdamW"
},
"scheduler":{
"params":{
"warmup_max_lr":3e-05,
"warmup_min_lr":0,
"warmup_num_steps":500
},
"type":"WarmupLR"
},
"steps_per_print":2000,
"train_micro_batch_size_per_gpu":4,
"wall_clock_breakdown":false,
"zero_optimization":{
"allgather_bucket_size":150000000.0,
"allgather_partitions":true,
"contiguous_gradients":true,
"cpu_offload":true,
"overlap_comm":true,
"reduce_bucket_size":150000000.0,
"reduce_scatter":true,
"stage":2
}
}
```
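For reference, a sketch of the multi-GPU launcher form suggested in the comments (the script and config names are the ones from this report):
```shell
# One process per GPU, which DeepSpeed requires for multi-GPU training.
NCCL_SOCKET_IFNAME=lo deepspeed --num_gpus 2 Deberta_V2_XXLarge.py --deepspeed ds_config.json
```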
Please suggest whether I need to make any changes.
Versions I'm using -
```
torch-1.7.1+cu101
transformers-4.4.2
deepspeed-0.3.13
```
**Who can help**
@LysandreJik
@stas00
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10988/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10987 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10987/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10987/comments | https://api.github.com/repos/huggingface/transformers/issues/10987/events | https://github.com/huggingface/transformers/pull/10987 | 846,301,243 | MDExOlB1bGxSZXF1ZXN0NjA1NDA2OTU5 | 10,987 | Sagemaker test fix | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
Fixed the test documentation `makefile` command and the PyTorch DDP test for when #10975 is merged. Added a different validation function for `sagemaker-data-parallel`. Can be merged already.
"url": "https://api.github.com/repos/huggingface/transformers/issues/10987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10987/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10987",
"html_url": "https://github.com/huggingface/transformers/pull/10987",
"diff_url": "https://github.com/huggingface/transformers/pull/10987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10987.patch",
"merged_at": 1617191062000
} |
https://api.github.com/repos/huggingface/transformers/issues/10986 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10986/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10986/comments | https://api.github.com/repos/huggingface/transformers/issues/10986/events | https://github.com/huggingface/transformers/issues/10986 | 846,156,982 | MDU6SXNzdWU4NDYxNTY5ODI= | 10,986 | BART : Cannot run trainer.evaluate() after trainer.train() | {
"login": "Avditvs",
"id": 32792728,
"node_id": "MDQ6VXNlcjMyNzkyNzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/32792728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Avditvs",
"html_url": "https://github.com/Avditvs",
"followers_url": "https://api.github.com/users/Avditvs/followers",
"following_url": "https://api.github.com/users/Avditvs/following{/other_user}",
"gists_url": "https://api.github.com/users/Avditvs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Avditvs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Avditvs/subscriptions",
"organizations_url": "https://api.github.com/users/Avditvs/orgs",
"repos_url": "https://api.github.com/users/Avditvs/repos",
"events_url": "https://api.github.com/users/Avditvs/events{/privacy}",
"received_events_url": "https://api.github.com/users/Avditvs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is not an issue with Transformers but with using Apex with the \"O3\" opt-level. This changes your model during the training and results in the error you're seeing. The best is to re-instantiate a clean `Trainer` for evaluation after you're done with training.",
"Thank you for your reply, I just tried it and it actually only works with Apex \"01\" opt-level !"
] | 1,617 | 1,617 | 1,617 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
trainer : @sgugger
bart : @patrickvonplaten
## Information
Model I am using : bart, barthez, mbart
I am working on text summarization.
The problem arises when using my own modified script, inspired by the official Seq2Seq example. I am using the Seq2SeqTrainer class.
I am unable to run trainer.evaluate(...) after trainer.train(...) as well as evaluating the model during training after x epochs or steps.
## To reproduce
I you want to try it out, here is a link to [my notebook](https://colab.research.google.com/drive/1CqxxM0nOdJRpre_SwLOajl_s9hlKTT9e?usp=sharing)
Steps to reproduce the behavior:
1. Download model, tokenizer, and dataset from hub
2. Run trainer.evaluate(...) (works)
3. Run trainer.train(...) (runs fine)
4. Run trainer.evaluate(...) (returns the error below)
```
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1160 elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
1161 encoder_outputs = BaseModelOutput(
-> 1162 last_hidden_state=encoder_outputs[0],
1163 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1164 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
KeyError: 0
```
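A hedged sketch of the workaround from the comments: re-instantiate a clean `Seq2SeqTrainer` for evaluation, since the Apex "O3" opt-level mutates the model during training. The argument and dataset names below are assumptions, not taken from the notebook:
```python
# Hypothetical workaround per the maintainer's suggestion (names assumed).
from transformers import Seq2SeqTrainer

eval_trainer = Seq2SeqTrainer(
    model=trainer.model,        # the fine-tuned model from the training run
    args=training_args,         # assumed: the same Seq2SeqTrainingArguments
    eval_dataset=eval_dataset,  # assumed: the evaluation split used earlier
    tokenizer=tokenizer,
)
metrics = eval_trainer.evaluate()
```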
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10986/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10985 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10985/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10985/comments | https://api.github.com/repos/huggingface/transformers/issues/10985/events | https://github.com/huggingface/transformers/pull/10985 | 846,083,976 | MDExOlB1bGxSZXF1ZXN0NjA1MjA2ODUy | 10,985 | [WIP] GPT Neo cleanup | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As explained in https://github.com/huggingface/transformers/issues/11076#issuecomment-814218202, the loss did decrease over time on this small sample so it looks like there are no regressions w.r.t training.\r\n\r\n~Merge when ready @patil-suraj.~ :point_down: ",
"It seems I'm not passing the slow tests locally, `test_gpt_neo_sample` fails with:\r\n\r\n```\r\nAssertionError: 'Today is a nice day and a wonderful time to be in Rome, though the sun wonβ' != 'Today is a nice day and if you donβt get the memo here is what you can'\r\n- Today is a nice day and a wonderful time to be in Rome, though the sun wonβ\r\n+ Today is a nice day and if you donβt get the memo here is what you can\r\n```",
"As seen with @patil-suraj, this is due to a wrongly initialized seed; and the other tests ensure that we have a correct attention mask and generation. Merging!"
] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
This PR refactors the `GPTNeoLocalSelfAttention` layer and adds more tests for it.
This PR:
- adds the `AttentionMixin` class which contains the shared utilities for both global and local attention. The class is meant to be used as a mixin and makes it easy to test it.
- the `look_around` method is now replaced by the `AttentionMixin._look_back` method, which is vectorized and can give up to a 300x speed-up compared to the old `look_around` (see the sketch below)
- `GPTNeoLocalSelfAttention._create_attention_mask` is now simplified and also gives a nice speed-up, as it uses `_look_back`. I've added more detailed comments to explain the mask creation logic.
- I've added multiple shape checks in the `AttentionMixin` to make it as robust as possible.
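A minimal sketch of the vectorized look-back idea (hypothetical code for illustration, not the PR's implementation): each fixed-size block is concatenated with the previous block, so local attention can attend within that window without a Python-level loop:
```python
import torch

def look_back(hidden, block_len):
    # hidden: (batch, seq_len, dim); seq_len assumed divisible by block_len
    batch, seq_len, dim = hidden.shape
    blocks = hidden.view(batch, seq_len // block_len, block_len, dim)
    # Shift blocks one position to the right, zero-padding the first block.
    prev = torch.nn.functional.pad(blocks, (0, 0, 0, 0, 1, 0))[:, :-1]
    # Each block now sees [previous block, current block].
    return torch.cat([prev, blocks], dim=2)  # (batch, n_blocks, 2 * block_len, dim)

x = torch.randn(1, 8, 4)
print(look_back(x, block_len=2).shape)  # torch.Size([1, 4, 4, 4])
```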
I didn't do thorough benchmarking, but I'm observing around a 3.9x speed-up when generating sequences of length 1024.
Verified that all slow tests are passing.
"url": "https://api.github.com/repos/huggingface/transformers/issues/10985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10985/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10985",
"html_url": "https://github.com/huggingface/transformers/pull/10985",
"diff_url": "https://github.com/huggingface/transformers/pull/10985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10985.patch",
"merged_at": 1617726255000
} |
https://api.github.com/repos/huggingface/transformers/issues/10984 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10984/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10984/comments | https://api.github.com/repos/huggingface/transformers/issues/10984/events | https://github.com/huggingface/transformers/issues/10984 | 846,072,367 | MDU6SXNzdWU4NDYwNzIzNjc= | 10,984 | AttributeError due to multi-processing using PyTorchBenchmark | {
"login": "simonschoe",
"id": 53626067,
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonschoe",
"html_url": "https://github.com/simonschoe",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Same issue for me. \r\n\r\nWhen running\r\n```python\r\npython run_benchmark.py --no_speed --models a-ware/roberta-large-squad-classification --sequence_lengths 32 --batch_sizes 32\r\n```\r\nI get:\r\n```\r\nAttributeError: Can't pickle local object 'separate_process_wrapper_fn.<locals>.multi_process_func.<locals>.wrapper_func'\r\n```",
"Anyone from the `transformers` team got a solution for this?",
"Can you try setting `multi_process=False`?",
"@patrickvonplaten If I remember correctly the error disappeared when setting `multi_process` to false. However, I figured I should set it to true in order to obtain performance estimates which are as close as possible to reality?",
"For PyTorch it's totally fine to set `multi_process=False` -> it's only in TF where the memory consumption is a bit off then",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This issue is still present in the latest version (4.11.3). I ran the example benchmark shown [here](https://huggingface.co/transformers/benchmarks.html) in TensorFlow (2.6.0) and got the same error:\r\n\r\n> AttributeError: Can't pickle local object 'separate_process_wrapper_fn.<locals>.multi_process_func.<locals>.wrapper_func\r\n"
] | 1,617 | 1,634 | 1,624 | NONE | null | Hi there,
likely my fault, but I can't find a proper solution yet. I tried to follow the structure of `examples/benchmarking/run_benchmark.py`.
My Code:
```python
from transformers import AutoConfig, PyTorchBenchmark, PyTorchBenchmarkArguments
def main():
config = AutoConfig.from_pretrained('roberta-base')
# define args
args = PyTorchBenchmarkArguments(
models=['roberta-base'],
inference=False,
training=True,
speed=True,
memory=True,
save_to_csv=True,
train_memory_csv_file=f'models/filmo-large/train_memory_benchmark.csv',
train_time_csv_file=f'models/filmo-large/train_time_benchmark.csv',
env_info_csv_file=f'models/filmo-large/env.csv',
sequence_lengths=[64, 128, 256, 512],
batch_sizes=[8, 16],
fp16=True,
multi_process=True,
)
# create benchmark
benchmark = PyTorchBenchmark(
configs=[config],
args=args,
)
# run benchmark
benchmark.run()
if __name__ == '__main__':
main()
```
The error it yields:
```python
1 / 1
Traceback (most recent call last):
File "c:/Users/.../lm-train-benchmark.py", line 47, in <module>
main()
File "c:/Users/.../lm-train-benchmark.py", line 43, in main
benchmark.run()
File "C:\Users\...\.venv\lib\site-packages\transformers\benchmark\benchmark_utils.py", line 715, in run
memory, train_summary = self.train_memory(model_name, batch_size, sequence_length)
File "C:\Users\...\.venv\lib\site-packages\transformers\benchmark\benchmark_utils.py", line 679, in train_memory
return separate_process_wrapper_fn(self._train_memory, self.args.do_multi_processing)(*args, **kwargs)
File "C:\Users\...\.venv\lib\site-packages\transformers\benchmark\benchmark_utils.py", line 101, in multi_process_func
p.start()
File "C:\Python\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:\Python\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Python\lib\multiprocessing\context.py", line 327, in _Popen
return Popen(process_obj)
reduction.dump(process_obj, to_child)
File "C:\Python\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'separate_process_wrapper_fn.<locals>.multi_process_func.<locals>.wrapper_func'
PS C:\Users\...> Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:\Python\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
```
I already tried rearranging the order and placing individual components at the top level, without success. I am grateful for any advice 🙂
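(Per the comments earlier in this thread, the error disappears when multi-processing is disabled. A minimal sketch of that workaround, with the rest of the arguments unchanged:)
```python
from transformers import PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=['roberta-base'],
    training=True,
    memory=True,
    multi_process=False,  # avoids pickling the locally defined wrapper under Windows "spawn"
)
```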
Simon | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10984/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10984/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10983 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10983/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10983/comments | https://api.github.com/repos/huggingface/transformers/issues/10983/events | https://github.com/huggingface/transformers/issues/10983 | 845,029,865 | MDU6SXNzdWU4NDUwMjk4NjU= | 10,983 | FineTune XLSR-Wav2Vec2 on New Language WER still 1 | {
"login": "edwin-19",
"id": 13368628,
"node_id": "MDQ6VXNlcjEzMzY4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/13368628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edwin-19",
"html_url": "https://github.com/edwin-19",
"followers_url": "https://api.github.com/users/edwin-19/followers",
"following_url": "https://api.github.com/users/edwin-19/following{/other_user}",
"gists_url": "https://api.github.com/users/edwin-19/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edwin-19/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edwin-19/subscriptions",
"organizations_url": "https://api.github.com/users/edwin-19/orgs",
"repos_url": "https://api.github.com/users/edwin-19/repos",
"events_url": "https://api.github.com/users/edwin-19/events{/privacy}",
"received_events_url": "https://api.github.com/users/edwin-19/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | Hi, can I ask whether anyone else is facing issues fine-tuning Wav2Vec2 for languages not in the Common Voice dataset?
I am trying to fine-tune on a language that is not in the Common Voice dataset, but I get a WER of 1 no matter how many steps I fine-tune for. I have a similar issue here: #10884
You can check my code in the repo here (note: I made some small changes to the original training notebook provided by the Hugging Face team, to fit my setup):
https://github.com/edwin-19/wave2vec2-hf-sprint/blob/master/Train.ipynb | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10983/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10982 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10982/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10982/comments | https://api.github.com/repos/huggingface/transformers/issues/10982/events | https://github.com/huggingface/transformers/pull/10982 | 845,927,667 | MDExOlB1bGxSZXF1ZXN0NjA1MDYwOTQ0 | 10,982 | Update setup.py | {
"login": "maryjovita",
"id": 62052700,
"node_id": "MDQ6VXNlcjYyMDUyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/62052700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maryjovita",
"html_url": "https://github.com/maryjovita",
"followers_url": "https://api.github.com/users/maryjovita/followers",
"following_url": "https://api.github.com/users/maryjovita/following{/other_user}",
"gists_url": "https://api.github.com/users/maryjovita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maryjovita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maryjovita/subscriptions",
"organizations_url": "https://api.github.com/users/maryjovita/orgs",
"repos_url": "https://api.github.com/users/maryjovita/repos",
"events_url": "https://api.github.com/users/maryjovita/events{/privacy}",
"received_events_url": "https://api.github.com/users/maryjovita/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"for error fixation"
] | 1,617 | 1,617 | 1,617 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10982/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10982",
"html_url": "https://github.com/huggingface/transformers/pull/10982",
"diff_url": "https://github.com/huggingface/transformers/pull/10982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10982.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10981 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10981/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10981/comments | https://api.github.com/repos/huggingface/transformers/issues/10981/events | https://github.com/huggingface/transformers/pull/10981 | 845,910,702 | MDExOlB1bGxSZXF1ZXN0NjA1MDQ0OTcy | 10,981 | support passing path to a `config` variable in AutoClass | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
It enables loading weights from a private server like the following.
```python
AutoModel.from_pretrained('https://myserver/model/pytorch_model.bin', config='path/to/local/config.json')
```
**For the moment, loading weights from a private server in this way is supported in `PretrainedModel` but not in `AutoModel`.**
```python
BertModel.from_pretrained('https://myserver/model/pytorch_model.bin', config='path/to/local/config.json') # supported
AutoModel.from_pretrained('https://myserver/model/pytorch_model.bin', config='path/to/local/config.json') # not supported
```
To fix this issue, I copied the code handling `config_path` from `PretrainedModel` into `AutoModel`.
This feature was requested in #10961.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10981/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10981",
"html_url": "https://github.com/huggingface/transformers/pull/10981",
"diff_url": "https://github.com/huggingface/transformers/pull/10981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10981.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10980 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10980/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10980/comments | https://api.github.com/repos/huggingface/transformers/issues/10980/events | https://github.com/huggingface/transformers/pull/10980 | 845,684,749 | MDExOlB1bGxSZXF1ZXN0NjA0ODMyNDE5 | 10,980 | Enforce string-formatting with f-strings | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | COLLABORATOR | null | # What does this PR do?
This PR removes any strings formatted with `.format` or `%` to use f-strings exclusively (unless there is a very good reason to use the other syntax, or the file is in a research_project/legacy folder).
The mix of three syntaxes does not make any sense and we all agree in the team that f-strings are more readable. Now that Python 3.5 is officially dead, there is no reason not to switch fully to f-strings.
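For illustration, the kind of rewrite this implies (the values below are placeholders):
```python
path, epoch = "./model", 3

# before: two competing styles in the codebase
print("Loading model from {}".format(path))
print("Epoch %d" % epoch)

# after: f-strings everywhere
print(f"Loading model from {path}")
print(f"Epoch {epoch}")
```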
cc @stas00 as we had a conversation about that.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10980/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10980/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10980",
"html_url": "https://github.com/huggingface/transformers/pull/10980",
"diff_url": "https://github.com/huggingface/transformers/pull/10980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10980.patch",
"merged_at": 1617199227000
} |
https://api.github.com/repos/huggingface/transformers/issues/10979 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10979/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10979/comments | https://api.github.com/repos/huggingface/transformers/issues/10979/events | https://github.com/huggingface/transformers/issues/10979 | 845,542,678 | MDU6SXNzdWU4NDU1NDI2Nzg= | 10,979 | Tagged Model Version Not Working | {
"login": "rkunani",
"id": 35608129,
"node_id": "MDQ6VXNlcjM1NjA4MTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/35608129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rkunani",
"html_url": "https://github.com/rkunani",
"followers_url": "https://api.github.com/users/rkunani/followers",
"following_url": "https://api.github.com/users/rkunani/following{/other_user}",
"gists_url": "https://api.github.com/users/rkunani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rkunani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rkunani/subscriptions",
"organizations_url": "https://api.github.com/users/rkunani/orgs",
"repos_url": "https://api.github.com/users/rkunani/repos",
"events_url": "https://api.github.com/users/rkunani/events{/privacy}",
"received_events_url": "https://api.github.com/users/rkunani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think you're using it wrong indeed. As said in the [docs](https://huggingface.co/transformers/model_sharing.html#model-versioning), you can either specify a tag name, branch name, or commit hash.",
"To reiterate over what @NielsRogge already said, this is the revision of the *model*, not the repository. You can check the model commits here: https://huggingface.co/roberta-large/commits/main\r\n\r\nRevisions also include branches and tags, but this particular model only has a single branch and no tag.",
"Ah, that makes sense. Thanks for the prompt reply!"
] | 1,617 | 1,617 | 1,617 | NONE | null | I am trying to download a specific version of the `roberta-large` model using the `revision` parameter of `from_pretrained()` as shown below:
```
from transformers import RobertaForSequenceClassification
model_type = "roberta-large"
v1 = "v3.5.0"
model = RobertaForSequenceClassification.from_pretrained(model_type, num_labels=2, revision=v1)
```
This code gives me the following 404 error:
```
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/roberta-large/resolve/v3.5.0/config.json
```
Am I using the `revision` parameter incorrectly?
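(From the replies earlier in this thread: `revision` names a branch, tag, or commit hash of the *model repo* on the Hub, not a library version. A working sketch:)
```python
from transformers import RobertaForSequenceClassification

# "main" is roberta-large's only branch (it has no tags); a commit hash from
# https://huggingface.co/roberta-large/commits/main would also work
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=2, revision="main"
)
```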
Extra notes:
- I am using transformers v4.1.1
- Running the same code above with `v1 = "main"` works just fine | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10979/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10978 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10978/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10978/comments | https://api.github.com/repos/huggingface/transformers/issues/10978/events | https://github.com/huggingface/transformers/issues/10978 | 845,509,257 | MDU6SXNzdWU4NDU1MDkyNTc= | 10,978 | Add GPT Neo models to Write With Transformer | {
"login": "zxv",
"id": 366474,
"node_id": "MDQ6VXNlcjM2NjQ3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/366474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zxv",
"html_url": "https://github.com/zxv",
"followers_url": "https://api.github.com/users/zxv/followers",
"following_url": "https://api.github.com/users/zxv/following{/other_user}",
"gists_url": "https://api.github.com/users/zxv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zxv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zxv/subscriptions",
"organizations_url": "https://api.github.com/users/zxv/orgs",
"repos_url": "https://api.github.com/users/zxv/repos",
"events_url": "https://api.github.com/users/zxv/events{/privacy}",
"received_events_url": "https://api.github.com/users/zxv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Very much needed!"
] | 1,617 | 1,622 | null | NONE | null | # 🚀 Feature request
Would it be possible to get the newly-added GPT Neo models usable on Write With Transformer?
## Motivation
It would be helpful to use the new models in the Write With Transformer app since it supports newlines.
CC @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10978/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10978/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10977 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10977/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10977/comments | https://api.github.com/repos/huggingface/transformers/issues/10977/events | https://github.com/huggingface/transformers/pull/10977 | 845,373,902 | MDExOlB1bGxSZXF1ZXN0NjA0NTQzNzcz | 10,977 | [Flax] Add other BERT classes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2934977194,
"node_id": "MDU6TGFiZWwyOTM0OTc3MTk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flax",
"name": "Flax",
"color": "4862AD",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"> It is so similar to the PyTorch implementation it seems a script could take care of the implementation by copying the PyTorch one and replacing a few strings!\r\n\r\n@marcvanzee and I were also wondering about this in general -- is there a 80/20 solution that requires user input in some cases? It would have to not introduce silent errors (e.g. a model that seems to run the same but differs in some hard-to-find way).\r\n"
] | 1,617 | 1,619 | 1,617 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds the other BERT model classes for Flax.
Also, the following checkpoints have been uploaded for Flax:
- https://huggingface.co/bert-base-cased
- https://huggingface.co/bert-large-cased
- https://huggingface.co/bert-base-uncased
- https://huggingface.co/bert-large-uncased
- https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10977/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10977",
"html_url": "https://github.com/huggingface/transformers/pull/10977",
"diff_url": "https://github.com/huggingface/transformers/pull/10977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10977.patch",
"merged_at": 1617173159000
} |
https://api.github.com/repos/huggingface/transformers/issues/10976 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10976/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10976/comments | https://api.github.com/repos/huggingface/transformers/issues/10976/events | https://github.com/huggingface/transformers/issues/10976 | 845,280,414 | MDU6SXNzdWU4NDUyODA0MTQ= | 10,976 | Transformers QA Online Demo is not working | {
"login": "gavishpoddar",
"id": 18366222,
"node_id": "MDQ6VXNlcjE4MzY2MjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/18366222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gavishpoddar",
"html_url": "https://github.com/gavishpoddar",
"followers_url": "https://api.github.com/users/gavishpoddar/followers",
"following_url": "https://api.github.com/users/gavishpoddar/following{/other_user}",
"gists_url": "https://api.github.com/users/gavishpoddar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gavishpoddar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gavishpoddar/subscriptions",
"organizations_url": "https://api.github.com/users/gavishpoddar/orgs",
"repos_url": "https://api.github.com/users/gavishpoddar/repos",
"events_url": "https://api.github.com/users/gavishpoddar/events{/privacy}",
"received_events_url": "https://api.github.com/users/gavishpoddar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I re-launched the app.\r\n\r\nThe app itself doesn't have much information to help recreate the system though (and is not designed for heavy use :) )\r\n\r\nI'd recommend reading through the blog post instead: https://yjernite.github.io/lfqa.html",
"Thank you so much @yjernite "
] | 1,617 | 1,617 | 1,617 | NONE | null |
Transformers QA Online Demo is not working: https://huggingface.co/qa/
I am trying to recreate ELI5 but I am unable to find enough information. @yjernite, can you please help?
Please let me know if I can help.
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10976/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10975 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10975/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10975/comments | https://api.github.com/repos/huggingface/transformers/issues/10975/events | https://github.com/huggingface/transformers/pull/10975 | 845,176,019 | MDExOlB1bGxSZXF1ZXN0NjA0MzU4Mjgw | 10,975 | Merge trainers | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I tested run all our `sagemaker/tests` and a few additional `model_parallel` tests. β
\r\nI also tested everything with the upcoming `pytorch 1.7.1` image with the new `smp 1.3.0` (their model parallelism library)β
\r\n\r\nWhat is still open for me is how does this behaves with `Seq2SeqTrainer`. Can users now use the `Seq2SeqTrainer` for model parallelism too? Data parallelism works already. Tested with `BART`\r\n\r\nAfter we have merged the `SageMakerTrainer` with the `Trainer` I would update the docs for sagemaker/model parallelism and the tests in tests/sagemaker. \r\n"
] | 1,617 | 1,617 | 1,617 | COLLABORATOR | null | # What does this PR do?
This PR merges the specific `SageMakerTrainer` into the main `Trainer` to make all the scripts work directly with model parallelism. In passing, a few internal breaking changes:
- `is_sagemaker_distributed_available` is renamed to `is_sagemaker_dp_enabled` since it's about data parallelism and not specifically distributed training, and it's True when the user has activated it, not when it's merely "available" (see the sketch after this list)
- in the ParallelMode enum, the case `SAGEMAKER_DISTRIBUTED` is renamed as well (but it wasn't used anywhere).
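(Illustrative only: the helper is internal, and the import path below is an assumption about where it lives.)
```python
from transformers.file_utils import is_sagemaker_dp_enabled  # was: is_sagemaker_distributed_available

if is_sagemaker_dp_enabled():
    pass  # SageMaker data parallelism was explicitly activated by the user
```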
Both only concern the internals of the library, and no public API is broken. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10975",
"html_url": "https://github.com/huggingface/transformers/pull/10975",
"diff_url": "https://github.com/huggingface/transformers/pull/10975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10975.patch",
"merged_at": 1617199290000
} |
https://api.github.com/repos/huggingface/transformers/issues/10974 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10974/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10974/comments | https://api.github.com/repos/huggingface/transformers/issues/10974/events | https://github.com/huggingface/transformers/issues/10974 | 845,059,025 | MDU6SXNzdWU4NDUwNTkwMjU= | 10,974 | Reproducing DistilRoBERTa | {
"login": "DavidHarrison",
"id": 2935011,
"node_id": "MDQ6VXNlcjI5MzUwMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2935011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavidHarrison",
"html_url": "https://github.com/DavidHarrison",
"followers_url": "https://api.github.com/users/DavidHarrison/followers",
"following_url": "https://api.github.com/users/DavidHarrison/following{/other_user}",
"gists_url": "https://api.github.com/users/DavidHarrison/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavidHarrison/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavidHarrison/subscriptions",
"organizations_url": "https://api.github.com/users/DavidHarrison/orgs",
"repos_url": "https://api.github.com/users/DavidHarrison/repos",
"events_url": "https://api.github.com/users/DavidHarrison/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavidHarrison/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\nYou can ping @VictorSanh as he'll be the most helpful regarding distillation.\r\n\r\nThanks!",
"Apologies, I've posted to the forum [here](https://discuss.huggingface.co/t/reproducing-distilroberta/5217?u=davidharrison).\r\n\r\nThanks!"
] | 1,617 | 1,617 | 1,617 | NONE | null | I've been trying to retrain DistilRoBERTa from the information given [here](https://huggingface.co/distilroberta-base) along with the example code/documentation [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation).
I'm a bit unclear on the exact configuration used to train the DistilRoBERTa model. I have been assuming it uses the same configuration as the DistilBERT model with minor changes, though some things, such as the loss coefficients are still a bit ambiguous.
**Would it be possible to share the exact command/configuration to train DistilRoBERTa?**
I've been able to replicate DistilRoBERTa to similar evaluation MLM perplexity but there still seems to be a small but statistically significant difference, I can share the full config if it's helpful.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10974/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10973 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10973/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10973/comments | https://api.github.com/repos/huggingface/transformers/issues/10973/events | https://github.com/huggingface/transformers/pull/10973 | 845,041,256 | MDExOlB1bGxSZXF1ZXN0NjA0MjMzMDY1 | 10,973 | accelerate scripts for question answering and qa with beam search | {
"login": "theainerd",
"id": 15798640,
"node_id": "MDQ6VXNlcjE1Nzk4NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/15798640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theainerd",
"html_url": "https://github.com/theainerd",
"followers_url": "https://api.github.com/users/theainerd/followers",
"following_url": "https://api.github.com/users/theainerd/following{/other_user}",
"gists_url": "https://api.github.com/users/theainerd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theainerd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theainerd/subscriptions",
"organizations_url": "https://api.github.com/users/theainerd/orgs",
"repos_url": "https://api.github.com/users/theainerd/repos",
"events_url": "https://api.github.com/users/theainerd/events{/privacy}",
"received_events_url": "https://api.github.com/users/theainerd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
This PR adds example scripts for question answering and question answering with beam search using the Accelerate library.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR. @sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10973",
"html_url": "https://github.com/huggingface/transformers/pull/10973",
"diff_url": "https://github.com/huggingface/transformers/pull/10973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10973.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10972 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10972/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10972/comments | https://api.github.com/repos/huggingface/transformers/issues/10972/events | https://github.com/huggingface/transformers/pull/10972 | 844,958,495 | MDExOlB1bGxSZXF1ZXN0NjA0MTU2MjU1 | 10,972 | Add more metadata to the user agent | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | COLLABORATOR | null | # What does this PR do?
This PR adds a bit more metadata to the user agent to allow us to have more statistics on the usage. More precisely, it registers:
- the type of the file asked on the hub ("config", "tokenizer", "model" or "model_card")
- the framework used for the model ("pytorch", "tensorflow" or "flax"). Note that this is the framework actually used, even in the case of a conversion (so if you download the PyTorch checkpoint but use it to instantiate a Flax model, it will be "flax")
- for a tokenizer, whether it's fast or slow (like from the framework it checks the class used at the end, not the files downloaded)
- whether the Auto API was used or not
- if the instantiation came from a given pipeline or not
- if the instantiation came from the CI or not (by using a specific env variable)
There is no personal data collected, but if a user wants to deactivate this behavior, the `DISABLE_TELEMETRY` env variable can be set to any truthy value and none of this will be shared.
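A minimal opt-out sketch (assuming `"1"` counts as a truthy value for the check):
```python
import os

os.environ["DISABLE_TELEMETRY"] = "1"  # must be set before the download call is made

from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-uncased")  # no extra metadata is sent
```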
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10972/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10972",
"html_url": "https://github.com/huggingface/transformers/pull/10972",
"diff_url": "https://github.com/huggingface/transformers/pull/10972.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10972.patch",
"merged_at": 1617197767000
} |
https://api.github.com/repos/huggingface/transformers/issues/10971 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10971/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10971/comments | https://api.github.com/repos/huggingface/transformers/issues/10971/events | https://github.com/huggingface/transformers/pull/10971 | 844,939,033 | MDExOlB1bGxSZXF1ZXN0NjA0MTM4MTA1 | 10,971 | added py7zr | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks a lot for adding this!"
] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
This PR adds `py7zr` so that `samsum` can be used as a dataset.
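A minimal repro sketch, since loading the dataset is enough to trigger the dependency check:
```python
from datasets import load_dataset

# samsum is distributed as a .7z archive, so its loading script declares py7zr as a dependency
dataset = load_dataset("samsum")
```
Without `py7zr` installed, that call fails with: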
```python
[1,14]<stdout>: use_auth_token=use_auth_token,
[1,14]<stdout>: File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 448, in prepare_module
[1,14]<stdout>: f"To be able to use this {module_type}, you need to install the following dependencies"
[1,14]<stdout>:ImportError: To be able to use this dataset, you need to install the following dependencies['py7zr'] using 'pip install py7zr' for instance'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10971/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10971",
"html_url": "https://github.com/huggingface/transformers/pull/10971",
"diff_url": "https://github.com/huggingface/transformers/pull/10971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10971.patch",
"merged_at": 1617126432000
} |
https://api.github.com/repos/huggingface/transformers/issues/10970 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10970/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10970/comments | https://api.github.com/repos/huggingface/transformers/issues/10970/events | https://github.com/huggingface/transformers/pull/10970 | 844,915,975 | MDExOlB1bGxSZXF1ZXN0NjA0MTE2NzM2 | 10,970 | Fixed a bug where the `pipeline.framework` would actually contain a fully qualified model. | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
We simply forgot to change this call when the following landed:
https://github.com/huggingface/transformers/pull/10888
It's odd that tests didn't catch that. Should we add some?
(It's a pretty edge-case scenario, but it does run within the API.)
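A hedged sketch of the kind of regression test being asked about (assumes the default sentiment-analysis checkpoint is available):
```python
from transformers import pipeline

def test_framework_is_a_short_identifier():
    pipe = pipeline("sentiment-analysis")
    # Before this fix, `pipe.framework` could end up holding a fully
    # qualified model name instead of the framework identifier.
    assert pipe.framework in ("pt", "tf")
```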
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10970/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10970",
"html_url": "https://github.com/huggingface/transformers/pull/10970",
"diff_url": "https://github.com/huggingface/transformers/pull/10970.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10970.patch",
"merged_at": 1617125195000
} |
https://api.github.com/repos/huggingface/transformers/issues/10969 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10969/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10969/comments | https://api.github.com/repos/huggingface/transformers/issues/10969/events | https://github.com/huggingface/transformers/pull/10969 | 844,869,002 | MDExOlB1bGxSZXF1ZXN0NjA0MDczMzU3 | 10,969 | [GPT Neo] defaults for max length and sampling | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
Update defaults: `max_length=50` and `do_sample=True`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10969/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10969",
"html_url": "https://github.com/huggingface/transformers/pull/10969",
"diff_url": "https://github.com/huggingface/transformers/pull/10969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10969.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10968 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10968/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10968/comments | https://api.github.com/repos/huggingface/transformers/issues/10968/events | https://github.com/huggingface/transformers/pull/10968 | 844,693,560 | MDExOlB1bGxSZXF1ZXN0NjAzOTEyNjYw | 10,968 | GPT Neo few fixes | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
- update checkpoint names
- auto model
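A quick sanity check for the Auto API change (sketch; assumes `EleutherAI/gpt-neo-1.3B` is one of the updated checkpoint names):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")  # dispatches to the GPT Neo class
```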
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10968/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10968",
"html_url": "https://github.com/huggingface/transformers/pull/10968",
"diff_url": "https://github.com/huggingface/transformers/pull/10968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10968.patch",
"merged_at": 1617117355000
} |
https://api.github.com/repos/huggingface/transformers/issues/10967 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10967/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10967/comments | https://api.github.com/repos/huggingface/transformers/issues/10967/events | https://github.com/huggingface/transformers/pull/10967 | 844,587,182 | MDExOlB1bGxSZXF1ZXN0NjAzODE3MTgz | 10,967 | [BigBird] Fix big bird gpu test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
`torch.randint(...)` does not seem to be reproducible across versions and devices
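For illustration, a minimal sketch of why such a test can be flaky (hedged: the exact draws depend on PyTorch's RNG implementation):
```python
import torch

torch.manual_seed(0)
sampled = torch.randint(0, 100, (3,))  # seeded, yet draws may differ across PyTorch versions/devices

fixed = torch.tensor([12, 47, 71])  # an explicitly fixed tensor stays deterministic everywhere
```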
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10967/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10967",
"html_url": "https://github.com/huggingface/transformers/pull/10967",
"diff_url": "https://github.com/huggingface/transformers/pull/10967.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10967.patch",
"merged_at": 1617113029000
} |
https://api.github.com/repos/huggingface/transformers/issues/10966 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10966/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10966/comments | https://api.github.com/repos/huggingface/transformers/issues/10966/events | https://github.com/huggingface/transformers/pull/10966 | 844,577,613 | MDExOlB1bGxSZXF1ZXN0NjAzODA4ODAy | 10,966 | improved sagemaker documentation for git_config and examples | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
This PR improves the Amazon SageMaker documentation to make it clearer how `git_config` works with `examples/`. Related to #10957. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10966",
"html_url": "https://github.com/huggingface/transformers/pull/10966",
"diff_url": "https://github.com/huggingface/transformers/pull/10966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10966.patch",
"merged_at": 1617120052000
} |
https://api.github.com/repos/huggingface/transformers/issues/10965 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10965/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10965/comments | https://api.github.com/repos/huggingface/transformers/issues/10965/events | https://github.com/huggingface/transformers/issues/10965 | 844,556,461 | MDU6SXNzdWU4NDQ1NTY0NjE= | 10,965 | Gradient checkpointing in Wav2Vec2 | {
"login": "Getmany1",
"id": 26164540,
"node_id": "MDQ6VXNlcjI2MTY0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/26164540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Getmany1",
"html_url": "https://github.com/Getmany1",
"followers_url": "https://api.github.com/users/Getmany1/followers",
"following_url": "https://api.github.com/users/Getmany1/following{/other_user}",
"gists_url": "https://api.github.com/users/Getmany1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Getmany1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Getmany1/subscriptions",
"organizations_url": "https://api.github.com/users/Getmany1/orgs",
"repos_url": "https://api.github.com/users/Getmany1/repos",
"events_url": "https://api.github.com/users/Getmany1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Getmany1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! You can check out this related issue: https://github.com/huggingface/transformers/issues/10366",
"Thanks @LysandreJik! Yeah, one solution could be modify the `run_asr.py` script and add segmenting long speech samples by manually splitting each `batch[\"speech\"]` into smaller chunks before passing to the _wav2vec2 processor_ and converting to input values, then passing one chunk at a time to the model and after that merging the outputs into a single transcription and calculating the loss. Just wondering if there could be any other ways / built-in features to split a batch of size 1 into smaller mini-batches.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Hi @Getmany1 I also meet this errorοΌBut my audio file is no longer than 15s. How to solve the error? and how to pass one chunk at a time to the model.\r\n"
] | 1,617 | 1,632 | 1,620 | NONE | null | Hi,
Has anyone managed to fine-tune a **Wav2Vec2** model on long audio recordings that cannot fit into a GPU even with `batch_size=1`? I tried setting `gradient_checkpointing=true`, but it didn't help to solve the _CUDA Out of Memory Error_. Could it mean that gradient checkpointing does not work properly with **Wav2Vec2** models, or are there other tricks that need to be added to the fine-tuning script in addition to gradient checkpointing? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10965/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10964 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10964/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10964/comments | https://api.github.com/repos/huggingface/transformers/issues/10964/events | https://github.com/huggingface/transformers/issues/10964 | 844,409,264 | MDU6SXNzdWU4NDQ0MDkyNjQ= | 10,964 | pkg_resources' working_set caching breaks transformers import on google colab | {
"login": "konstin",
"id": 6826232,
"node_id": "MDQ6VXNlcjY4MjYyMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6826232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/konstin",
"html_url": "https://github.com/konstin",
"followers_url": "https://api.github.com/users/konstin/followers",
"following_url": "https://api.github.com/users/konstin/following{/other_user}",
"gists_url": "https://api.github.com/users/konstin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/konstin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/konstin/subscriptions",
"organizations_url": "https://api.github.com/users/konstin/orgs",
"repos_url": "https://api.github.com/users/konstin/repos",
"events_url": "https://api.github.com/users/konstin/events{/privacy}",
"received_events_url": "https://api.github.com/users/konstin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Thank you for this awesome report, @konstin, and identifying the cause of the problem and the solution.\r\n\r\nThis looks like a bug in `pkg_resources`. It should update its cache after install. \r\n\r\nThe other workaround that seems to work is to just do:\r\n```\r\n!pip install -U \"transformers<5.0.0,>=4.0.0\" \"tqdm<5.0.0,>=4.45.0\"\r\n!pip install -U \"transformers<5.0.0,>=4.0.0\" \"tqdm<5.0.0,>=4.45.0\"\r\n```\r\nSo the 2nd one updates the cache.\r\n\r\nYou may want to report this bug to `pkg_resources`.\r\n\r\nSince `transformers` has started using `importlib_metadata` extensively as of recent I think your proposed solution sounds great, so yes please - the proposed PR sounds perfect.\r\n\r\nThank you."
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: google colab
- Python version: 3.7.10
- PyTorch version (GPU?): n/a
- Tensorflow version (GPU?): n/a
- Using GPU in script?: n/a
- Using distributed or parallel set-up in script?: n/a
### Who can help
CC @stas00 as this was implemented in #8645
## To reproduce
You can find a complete example in [this google colab](https://colab.research.google.com/drive/1WT7GSd4uk9TLle9q9ftRNFy4BLZWCvZa?usp=sharing), also exported to [this gist](https://gist.github.com/konstin/e42d8f428fa11ba389e31be69cdc5646).
To reproduce, first [create a new google colab notebook](https://colab.research.google.com/#create=true). Let's install recent transformers and tqdm versions in it:
```
!pip install -U pip
!pip install -U "transformers<5.0.0,>=4.0.0" "tqdm<5.0.0,>=4.45.0"
```
This currently installs transformers 4.4.2 and tqdm 4.59.0.
Surprisingly, now running `import transformers` fails. We get an error in pkg_resources, which is looking for the .dist-info of tqdm 4.41.1, when the installed version is 4.59.0:
```
[...]
/usr/local/lib/python3.7/dist-packages/pkg_resources/__init__.py in _get(self, path)
1609
1610 def _get(self, path):
-> 1611 with open(path, 'rb') as stream:
1612 return stream.read()
1613
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/lib/python3.7/dist-packages/tqdm-4.41.1.dist-info/METADATA'
```
The cause is that pkg_resources uses the cached [WorkingSet](https://setuptools.readthedocs.io/en/latest/pkg_resources.html#workingset-objects), which contains the state before the pip install. We can confirm this by recreating pkg_resources' cache manually:
```python
import pkg_resources
pkg_resources.working_set = pkg_resources.WorkingSet()
```
Afterwards, importing transformers works.
The above example is the minimized version of our real [notebooks examples](https://github.com/sacdallago/bio_embeddings/tree/develop/notebooks):
```python
!pip install -U pip
!pip install -U bio_embeddings[all]
from bio_embeddings.embed import SeqVecEmbedder # This line fails with the tqdm .dist-info not found error
```
## Expected behavior
transformers should use the actual installed versions for checking compatibility instead of pkg_resources' cache. This could be achieved e.g. by using [importlib_metadata](https://github.com/python/importlib_metadata) instead of pkg_resources [here](https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/src/transformers/utils/versions.py#L80), or by recreating pkg_resources' cache with `pkg_resources.working_set = pkg_resources.WorkingSet()` before checking versions.
I've used the following snippet to check that importlib_metadata works, which prints `4.41.1` and `4.59.0`:
```python
import pkg_resources
import importlib_metadata
print(pkg_resources.get_distribution("tqdm").version)
print(importlib_metadata.version("tqdm"))
```
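As a rough sketch, an `importlib_metadata`-based check could look like this (hypothetical helper name; the real `versions.py` handles more cases):
```python
import importlib_metadata
from packaging import version

def require_min_version(pkg: str, wanted: str) -> None:
    # importlib_metadata reads the installed .dist-info directly, so packages
    # upgraded after the interpreter started are still seen correctly.
    got = importlib_metadata.version(pkg)
    if version.parse(got) < version.parse(wanted):
        raise ImportError(f"{pkg}>={wanted} is required, but version {got} is installed")
```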
I can prepare a pull request for either solution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10964/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10963 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10963/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10963/comments | https://api.github.com/repos/huggingface/transformers/issues/10963/events | https://github.com/huggingface/transformers/issues/10963 | 844,388,116 | MDU6SXNzdWU4NDQzODgxMTY= | 10,963 | compute perplexity using a custom metric function | {
"login": "TheAzouz",
"id": 44611898,
"node_id": "MDQ6VXNlcjQ0NjExODk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44611898?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheAzouz",
"html_url": "https://github.com/TheAzouz",
"followers_url": "https://api.github.com/users/TheAzouz/followers",
"following_url": "https://api.github.com/users/TheAzouz/following{/other_user}",
"gists_url": "https://api.github.com/users/TheAzouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheAzouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheAzouz/subscriptions",
"organizations_url": "https://api.github.com/users/TheAzouz/orgs",
"repos_url": "https://api.github.com/users/TheAzouz/repos",
"events_url": "https://api.github.com/users/TheAzouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheAzouz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discusss.huggingface.co) instead?\r\n\r\ncc @sgugger \r\n\r\nThanks!",
"The predictions are the logits of your model, so in the case of a language model, it will be a big array `num_samples x seq_length x vocab_size`. Your labels will be a big array of `num_samples x seq_length` with the tokens corresponding to something not masked at -100 (index that is ignored).",
"First, I would like to apologize for writing to you here, I wasn't aware that there is a dedicated forum.\r\nThank you @sgugger, I appreciate your help, my function is working now."
] | 1,617 | 1,617 | 1,617 | NONE | null | Hello,
I am trying to replicate the "On the Cross-lingual Transferability of Monolingual Representations" paper from Artetxe et al. and I am using the code you're providing, specifically [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py).
I wanted to log the perplexity to TensorBoard during the evaluation step. I found out that the best option is to add a custom compute_metrics function to the trainer that uses the evaluation results (predictions and targets) to compute perplexity. However, I didn't manage to do that, because I couldn't understand what the predictions output represents.
I am really new to NLP and your help is very much appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10963/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10962 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10962/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10962/comments | https://api.github.com/repos/huggingface/transformers/issues/10962/events | https://github.com/huggingface/transformers/pull/10962 | 844,341,148 | MDExOlB1bGxSZXF1ZXN0NjAzNTk4NDIx | 10,962 | fix md file to avoid evaluation crash | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten It seems the finetune notebook is on your own github repository, not on HuggingFace's transformers.",
"@ydshieh yes this is correct - if it's ok for you feel free to open a PR there :-) "
] | 1,617 | 1,651 | 1,617 | COLLABORATOR | null | # What does this PR do?
Fix the crash due to memory usage in the instructions for model evaluation in `FINE_TUNE_XLSR_WAV2VEC2.md`.
The original version, `test_dataset["speech"][:2]`, loads the whole speech array into memory, which is too large.
Changing it to `test_dataset[:2]["speech"]` makes it run smoothly and much faster.
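A small illustration of the difference (sketch with a toy `datasets.Dataset`; the real `test_dataset` holds full speech arrays):
```python
from datasets import Dataset

test_dataset = Dataset.from_dict({"speech": [[0.0] * 16000] * 100})
fast = test_dataset[:2]["speech"]  # decodes only the first two rows
slow = test_dataset["speech"][:2]  # materializes all 100 rows first, then slices
```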
## Before submitting
- [ ] This PR improves the docs
## Who can review?
@patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10962/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10962",
"html_url": "https://github.com/huggingface/transformers/pull/10962",
"diff_url": "https://github.com/huggingface/transformers/pull/10962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10962.patch",
"merged_at": 1617128783000
} |
https://api.github.com/repos/huggingface/transformers/issues/10961 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10961/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10961/comments | https://api.github.com/repos/huggingface/transformers/issues/10961/events | https://github.com/huggingface/transformers/issues/10961 | 844,294,932 | MDU6SXNzdWU4NDQyOTQ5MzI= | 10,961 | Supporting `config_path` for `AutoModel` | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @hwijeen! Would the `AutoModel.from_config()` method work for you?",
"Thank you @LysandreJik for your quick reply! I checked out the `AutoModel.from_config()` method. It is a convenient method but does not fit my use case, as **it does NOT load weights**.",
"Hi @LysandreJik , I update this issue to clarify. I would appreciate if you could give it a pass! I am sorry to nudge you twice but I think this issue and related PR(#10981) could be useful for those who try to load pretrained weights from private server!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | CONTRIBUTOR | null | # 🚀 Feature request
Creating a model like this:
```python
AutoModel.from_pretrained('https://myserver/model/pytorch_model.bin', config='path/to/local/config.json')
```
## Motivation
For the moment, instantiating a model like the above is possible with `PretrainedModel` but not with `AutoModel`, i.e.
```python
BertModel.from_pretrained('https://myserver/model/pytorch_model.bin', config='path/to/local/config.json') # possible
AutoModel.from_pretrained('https://myserver/model/pytorch_model.bin', config='path/to/local/config.json') # not possible
```
To elaborate, the optional `config` argument passed to the **`AutoModel.from_pretrained`** method must be a `PretrainedConfig`, while it can be either a `PretrainedConfig` or 'a string or path valid as input to `PretrainedConfig.from_pretrained`' in the case of **`PretrainedModel.from_pretrained`**.
The difference comes from the lack of [this line](https://github.com/huggingface/transformers/blob/8780caa388c7b2aa937454ed96bcdd3f097f851d/src/transformers/modeling_utils.py#L974) in `AutoModel`.
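In the meantime, a workaround is to resolve the config yourself and pass the resulting object (a sketch reusing the hypothetical server URL from the example above):
```python
from transformers import AutoConfig, AutoModel

# AutoModel already accepts a PretrainedConfig object; only the string form is missing.
config = AutoConfig.from_pretrained("path/to/local/config.json")
model = AutoModel.from_pretrained("https://myserver/model/pytorch_model.bin", config=config)
```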
## Your contribution
I would like to submit a PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10961/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10961/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10960 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10960/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10960/comments | https://api.github.com/repos/huggingface/transformers/issues/10960/events | https://github.com/huggingface/transformers/issues/10960 | 844,191,886 | MDU6SXNzdWU4NDQxOTE4ODY= | 10,960 | What is the score of trainer.predict()? | {
"login": "Yuukp",
"id": 48003204,
"node_id": "MDQ6VXNlcjQ4MDAzMjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/48003204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yuukp",
"html_url": "https://github.com/Yuukp",
"followers_url": "https://api.github.com/users/Yuukp/followers",
"following_url": "https://api.github.com/users/Yuukp/following{/other_user}",
"gists_url": "https://api.github.com/users/Yuukp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yuukp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yuukp/subscriptions",
"organizations_url": "https://api.github.com/users/Yuukp/orgs",
"repos_url": "https://api.github.com/users/Yuukp/repos",
"events_url": "https://api.github.com/users/Yuukp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yuukp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"These are the logits from your model, check:- https://github.com/huggingface/transformers/blob/8780caa388c7b2aa937454ed96bcdd3f097f851d/src/transformers/trainer.py#L1852",
"I see! Thank you so much!!"
] | 1,617 | 1,617 | 1,617 | NONE | null | I want to know the meaning of the output of trainer.predict().
example:
`PredictionOutput(predictions=array([[-2.2704859, 2.442343 ]], dtype=float32), label_ids=array([1]), metrics={'eval_loss': 0.008939245715737343, 'eval_runtime': 0.0215, 'eval_samples_per_second': 46.56})`
What is this score? -> `predictions=array([[-2.2704859, 2.442343]])`
I use it for Sequence Classification.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10960/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10959 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10959/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10959/comments | https://api.github.com/repos/huggingface/transformers/issues/10959/events | https://github.com/huggingface/transformers/pull/10959 | 844,128,755 | MDExOlB1bGxSZXF1ZXN0NjAzNDE4Nzkz | 10,959 | Fix summarization notebook link | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | This PRs fixes the link to the new summarization notebook | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10959/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10959",
"html_url": "https://github.com/huggingface/transformers/pull/10959",
"diff_url": "https://github.com/huggingface/transformers/pull/10959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10959.patch",
"merged_at": 1617107338000
} |
https://api.github.com/repos/huggingface/transformers/issues/10958 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10958/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10958/comments | https://api.github.com/repos/huggingface/transformers/issues/10958/events | https://github.com/huggingface/transformers/issues/10958 | 844,104,069 | MDU6SXNzdWU4NDQxMDQwNjk= | 10,958 | Returning Confidence Score For Extractive QA Task When Using Non-Pipeline Approach | {
"login": "UmerTariq1",
"id": 32323864,
"node_id": "MDQ6VXNlcjMyMzIzODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32323864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/UmerTariq1",
"html_url": "https://github.com/UmerTariq1",
"followers_url": "https://api.github.com/users/UmerTariq1/followers",
"following_url": "https://api.github.com/users/UmerTariq1/following{/other_user}",
"gists_url": "https://api.github.com/users/UmerTariq1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/UmerTariq1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/UmerTariq1/subscriptions",
"organizations_url": "https://api.github.com/users/UmerTariq1/orgs",
"repos_url": "https://api.github.com/users/UmerTariq1/repos",
"events_url": "https://api.github.com/users/UmerTariq1/events{/privacy}",
"received_events_url": "https://api.github.com/users/UmerTariq1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Correct me if I am wrong but I think the reason why it's not included in the model output is because it's an utility function and not a direct output of the model.\r\n\r\nIf I am not wrong, calculating the confidence score in a non-pipeline method is straight forward, like how it's done below \r\nhttps://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/examples/pytorch/question-answering/utils_qa.py#L151\r\n\r\nFrom the example you shared, adding couple of lines should give us the score.\r\n````\r\nfrom transformers import AutoTokenizer, AutoModelForQuestionAnswering\r\nimport torch\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-large-uncased-whole-word-masking-finetuned-squad\")\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(\"bert-large-uncased-whole-word-masking-finetuned-squad\")\r\ntext = r\"\"\"\r\nπ€ Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose\r\narchitectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNetβ¦) for Natural Language Understanding (NLU) and Natural\r\nLanguage Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between\r\nTensorFlow 2.0 and PyTorch.\r\n\"\"\"\r\nquestions = [\r\n \"How many pretrained models are available in π€ Transformers?\",\r\n \"What does π€ Transformers provide?\",\r\n \"π€ Transformers provides interoperability between which frameworks?\",\r\n]\r\nfor question in questions:\r\n inputs = tokenizer(question, text, add_special_tokens=True, return_tensors=\"pt\")\r\n input_ids = inputs[\"input_ids\"].tolist()[0]\r\n outputs = model(**inputs)\r\n answer_start_scores = outputs.start_logits\r\n answer_end_scores = outputs.end_logits\r\n answer_start = torch.argmax(\r\n answer_start_scores\r\n ) # Get the most likely beginning of answer with the argmax of the score\r\n answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score\r\n answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))\r\n # Compute the Score using start_logits and end_logits\r\n score = outputs.start_logits[0][answer_start] + outputs.end_logits[0][answer_end-1]\r\n print(f\"Question: {question}\")\r\n print(f\"Answer: {answer}\")\r\n print(f\"Confidence Score: {score}\")\r\n````\r\n\r\nThis also gives us the flexibility to design the confidence score. For example it might also be interesting to boost the confidence score based on intent similarity and entity/NP intersection.\r\n\r\nHope this helps!",
"\r\nThanks for replying to this @lawliet19189 . This is good insight. \r\nI do agree with @UmerTariq1 that there should be an option to return a basic confidence score without jumping through hoops. At the very least it would save us some time to document that it doesn't exist and a method (like this) that can be used. That way you don't spend a lot of time searching the documentation for a parameter or method to get them.\r\n\r\nSo I vote for this feature."
] | 1,617 | 1,641 | null | NONE | null | # 🚀 Feature request
HF's extractive QA pipeline provides an excellent interface to start with. It returns 4 values, including a **probability / confidence score**. Unfortunately, the same is not the case when using the non-pipeline approach, i.e. using the model and tokenizer directly for question answering.
[Both methods are mentioned here, The pipeline one and the other](https://huggingface.co/transformers/task_summary.html#extractive-question-answering)
## Motivation
The confidence score will help a lot in various tasks. For example, when I am developing a complete pipeline for QA consisting of recall, a retriever, and some other models for entity matching etc., I need to calculate the score of each model and then rank the final list of documents based on the weighted sum of the scores from each model. I believe this is a very common practice among NLP practitioners, and not just for the QA task. The point is that confidence scores are usually a pretty standard requirement for each model output, because we have to take further actions based on the score.
## Your contribution
I want to, but unfortunately I am not at the level where I can understand the code. I have gone through the code and I believe it's the "decode" function in the "QuestionAnsweringPipeline" class which has the code that generates the probability scores. If you can just provide an interface for it, or provide docs for how to calculate this score using the model-and-tokenizer approach, then that would be great too. And if you do decide to do this, then please also add this addition to the docs in the link mentioned at the top.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10958/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10957 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10957/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10957/comments | https://api.github.com/repos/huggingface/transformers/issues/10957/events | https://github.com/huggingface/transformers/issues/10957 | 843,899,645 | MDU6SXNzdWU4NDM4OTk2NDU= | 10,957 | check_version not valid | {
"login": "gwc4github",
"id": 3164663,
"node_id": "MDQ6VXNlcjMxNjQ2NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3164663?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gwc4github",
"html_url": "https://github.com/gwc4github",
"followers_url": "https://api.github.com/users/gwc4github/followers",
"following_url": "https://api.github.com/users/gwc4github/following{/other_user}",
"gists_url": "https://api.github.com/users/gwc4github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gwc4github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gwc4github/subscriptions",
"organizations_url": "https://api.github.com/users/gwc4github/orgs",
"repos_url": "https://api.github.com/users/gwc4github/repos",
"events_url": "https://api.github.com/users/gwc4github/events{/privacy}",
"received_events_url": "https://api.github.com/users/gwc4github/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, what's the problem?",
"> Hi, what's the problem?\r\n\r\nSorry, I accidentally saved the ticket before I was done. Then I had to leave my desk for a while. I have updated the ticket with all the info now.\r\n\r\nThanks!",
"Ah, I see! This is because you're using a script which comes from the `master` branch. Version `v4.5.0dev0` is the development version of the v4.5.0 version, which is the current `master`. \r\n\r\nThe scripts you use from the GitHub repository are always synced with `master`, so please be sure to use the source installation of the `master` branch of `transformers` alongside it.\r\n\r\nIf you want to use a script compatible with version `v4.4.2`, I would suggest taking the script from the tag `v4.4.2`, as this one will work with that version: \r\n\r\nhttps://github.com/huggingface/transformers/blob/9f43a425fe89cfc0e9b9aa7abd7dd44bcaccd79a/examples/token-classification/run_ner.py#L43-L55",
"Hello @gwc4github,\r\n\r\nHappy to see that you are already using the new Hugging Face Deep Learning Container and the Sagemaker-sdk. Regarding your issue. If you want to use the `examples` script you have to configure the `git_config` like that.\r\n\r\n```python\r\n# configure git settings\r\ngit_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'} \r\n```\r\nWhile branch `v4.4.1` is referring to the `transformers_version` in the HuggingFace estimator.\r\n\r\nFor your example, the estimator would look like \r\n\r\n```python\r\n# git configuration to download our fine-tuning script\r\ngit_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'}\r\n\r\n# hyperparameters, which are passed into the training job\r\nhyperparameters={'epochs': 1,\r\n 'train_batch_size': 32,\r\n 'model_name':'distilbert-base-uncased'\r\n }\r\n\r\nhuggingface_estimator = HuggingFace(entry_point='run_ner.py', # script\r\n source_dir='./examples/token-classification', # relative path to example\r\n base_job_name='huggingface-sdk-extension',\r\n git_config=git_config,\r\n instance_type='ml.p3.2xlarge',\r\n instance_count=1,\r\n transformers_version='4.4',\r\n tensorflow_version='2.4',\r\n py_version='py37',\r\n role=role,\r\n hyperparameters = hyperparameters)\r\n```\r\n\r\nYou can find more information about using `git_config` [here](https://huggingface.co/transformers/sagemaker.html#git-repository)\r\n",
"Thanks Lysandre and Phil.\r\nI didn't follow what Lysandre was explaining enough to do anything with it yet, but I did the following with Phil's information.\r\nI added the git_config line and then had to add a dataset_name as well. So no my code is as follows below. However, when I run it I get the error that I have also included *after* the code. I have also attached the full cell output.\r\n\r\n```\r\nfrom sagemaker.huggingface import HuggingFace\r\n\r\n# hyperparameters, which are passed into the training job\r\nhyperparameters={'epochs': 1,\r\n 'train_batch_size': 32,\r\n 'model_name':'bert-base-uncased',\r\n 'output_dir':'/opt/ml/model',\r\n 'dataset_name':'conll2003'\r\n }\r\n\r\n# git configuration to download our fine-tuning script\r\ngit_config = {'repo': 'https://github.com/huggingface/transformers.git','branch': 'v4.4.2'}\r\n\r\n\r\nhuggingface_estimator = HuggingFace(entry_point='run_ner.py', # script\r\n source_dir='./examples/token-classification', # relative path to example\r\n base_job_name='huggingface-sdk-extension',\r\n git_config=git_config,\r\n instance_type='ml.p3.2xlarge',\r\n instance_count=1,\r\n transformers_version='4.4',\r\n tensorflow_version='2.4',\r\n py_version='py37',\r\n role=role,\r\n hyperparameters = hyperparameters)\r\n\r\n\r\n```\r\n\r\nERROR:\r\n```\r\nInvoking script with the following command:\r\n\r\n/usr/local/bin/python3.7 run_ner.py --dataset_name conll2003 --epochs 1 --model_name bert-base-uncased --output_dir /opt/ml/model --train_batch_size 32\r\n\r\n\r\n2021-03-30 21:48:35.696262: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.\r\n2021-03-30 21:48:35.696429: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. The timeline writer thread will not be started, future recorded events will be dropped.\r\n2021-03-30 21:48:35.740003: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.\r\nTraceback (most recent call last):\r\n File \"run_ner.py\", line 501, in <module>\r\n main()\r\n File \"run_ner.py\", line 181, in main\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n File \"/usr/local/lib/python3.7/site-packages/transformers/hf_argparser.py\", line 196, in parse_args_into_dataclasses\r\n raise ValueError(f\"Some specified arguments are not used by the HfArgumentParser: {remaining_args}\")\r\nValueError: Some specified arguments are not used by the HfArgumentParser: ['--epochs', '1', '--train_batch_size', '32']\r\n\r\n2021-03-30 21:48:37,576 sagemaker-training-toolkit ERROR ExecuteUserScriptError:\r\nCommand \"/usr/local/bin/python3.7 run_ner.py --dataset_name conll2003 --epochs 1 --model_name bert-base-uncased --output_dir /opt/ml/model --train_batch_size 32\"\r\n2021-03-30 21:48:35.696262: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.\r\n2021-03-30 21:48:35.696429: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:105] SageMaker Profiler is not enabled. 
The timeline writer thread will not be started, future recorded events will be dropped.\r\n2021-03-30 21:48:35.740003: W tensorflow/core/profiler/internal/smprofiler_timeline.cc:460] Initializing the SageMaker Profiler.\r\nTraceback (most recent call last):\r\n File \"run_ner.py\", line 501, in <module>\r\n main()\r\n File \"run_ner.py\", line 181, in main\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n File \"/usr/local/lib/python3.7/site-packages/transformers/hf_argparser.py\", line 196, in parse_args_into_dataclasses\r\n raise ValueError(f\"Some specified arguments are not used by the HfArgumentParser: {remaining_args}\")\r\nValueError: Some specified arguments are not used by the HfArgumentParser: ['--epochs', '1', '--train_batch_size', '32']\r\n\r\n2021-03-30 21:48:45 Uploading - Uploading generated training model\r\n2021-03-30 21:49:27 Failed - Training job failed\r\n\r\n```\r\n[errLogs2021.03.30.txt](https://github.com/huggingface/transformers/files/6232459/errLogs2021.03.30.txt)\r\n",
"Hey @gwc4github,\r\nas the error is saying \r\n```\r\nValueError: Some specified arguments are not used by the HfArgumentParser: ['--epochs', '1', '--train_batch_size', '32']\r\n```\r\nyou pass in the wrong `hyperparameters`. If you take a look at the `run_ner.py` script and how it parses the arguments, you will notice it. The script parses the `parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))`. The `ModelArguments` and `DataTrainingArguments` are defined directly in the script and the `TrainingArguments` can you find [here](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments).\r\n\r\nThe hyperparameters you use `epochs`, `train_batch_size` and `model_name` have been only defined in the script for the [example](https://github.com/huggingface/notebooks/blob/4c909862282d551958629bec59c5712c010e4420/sagemaker/02_getting_started_tensorflow/scripts/train.py#L16) \r\n\r\nWhen you use the existing examples you stay need to provide the arguments as they are defined in the script in your example it would be\r\n```python\r\nhyperparameters={'num_train_epochs': 1,\r\n 'per_device_train_batch_size': 32,\r\n 'model_name_or_path':'bert-base-uncased',\r\n 'output_dir':'/opt/ml/model',\r\n 'dataset_name':'conll2003'\r\n }\r\n```\r\n\r\n**Additionally:** I noticed that you want to use the `Tensorflow` and the `Tensorflow` based DLC with `run_ner.py`. The `run_ner.py` only works with `Pytorch` so you have to replace `tensorflow_version` with `pytorch_version` and change the `py_version`. Please take a look at the [documentation here](https://huggingface.co/transformers/sagemaker.html). All your problems are addressed there.\r\n",
"Thanks @philschmid and team. This did fix that problem and I understand completely what you are saying. I have gotten a lot further.\r\nThere are some new issues but they are unrelated to this original problem so I will open new tickets as needed. This ticket can be closed. THANKS again for your quick and detailed help!!!\r\n\r\nGregg"
] | 1,617 | 1,617 | 1,617 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.2
- Platform: AWS Sagemaker
- Python version: 3.6
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.4.1 (kernel: conda_tensorflow2_p36)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@LysandreJik, @sgugger, @patil-suraj
Models: Bert
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [X ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run sample Sagemaker notebook: ./notebooks/sagemaker/02_getting_started_tensorflow/
2. Change the entry_point from "train.py" to "run_ner.py"
3. Copy run_ner.py from the examples for ner: .\transformers\examples\token-classification\run_ner.py
4. Execute notebook
5. Line 47 executes a `check_min_version("4.5.0.dev0")` call, but that version does not exist; the newest version I see is 4.4.2. This results in the following error message (a possible workaround is sketched after the screenshot):
Lines 47-48:
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
check_min_version("4.5.0.dev0")

<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I am trying to get a baseline Sagemaker notebook working that fine-tunes Bert for token classification. (Using CoNLL-2003 or another dataset.) This should use the new Sagemaker Deep Learning Containers. This is a first step for the project where we will next use custom data to fine-tune the model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10957/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10956 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10956/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10956/comments | https://api.github.com/repos/huggingface/transformers/issues/10956/events | https://github.com/huggingface/transformers/pull/10956 | 843,882,762 | MDExOlB1bGxSZXF1ZXN0NjAzMjA3Nzg0 | 10,956 | [T5/MT5] resolve inf/nan under amp (mixed precision) | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"> But the 1.2GB download is somewhat big even for @slow tests.\r\n\r\nThe downloads are cached on a shared disk across slow self-hosted runners, so that's not an issue!",
"Before I approached this problem, I did a bit of a study on the bfloat16 vs float16 properties. This is not fully complete, but you can see most of the useful data here: https://github.com/stas00/ml-ways/blob/master/numbers/bfloat16-vs-float16-study.ipynb\r\n\r\nComments/requests/suggestions are welcome though. It's a bit on a terse-side.",
"I spent some more time staring at the numbers, as I think @patrickvonplaten mentioned in one of the related threads, something trained in `bfloat16` isn't going to work with `float16`. You can see why by looking at this debug output:\r\n\r\n```\r\nmin=-2.77e+05 max= 2.75e+05 var= 5.45e+07 mean= 5.16e+01 (T5Stack loop start)\r\nmin=-2.77e+05 max= 2.75e+05 var= 5.45e+07 mean= 5.16e+01 (T5Block)\r\nmin=-2.77e+05 max= 2.75e+05 var= 5.45e+07 mean= 5.16e+01 (T5LayerNorm)\r\nmin= 1.31e+06 max= 6.90e+08 var= 9.52e+15 mean= 5.45e+07 (T5LayerNorm variance)\r\nmin=-1.46e+01 max= 1.46e+01 var= 1.00e+00 mean=-2.69e-03 (T5LayerNorm hidden_states)\r\nmin=-1.46e+01 max= 1.46e+01 var= 1.00e+00 mean=-2.69e-03 (T5LayerNorm hidden_states before return)\r\nmin=-2.76e+05 max= 2.74e+05 var= 5.41e+07 mean= 4.83e+01 (T5Block after T5LayerSelfAttention)\r\nmin=-2.76e+05 max= 2.74e+05 var= 5.41e+07 mean= 4.83e+01 (T5LayerNorm)\r\nmin= 1.38e+06 max= 6.86e+08 var= 9.37e+15 mean= 5.41e+07 (T5LayerNorm variance)\r\nmin=-1.45e+01 max= 1.46e+01 var= 1.00e+00 mean=-2.98e-03 (T5LayerNorm hidden_states)\r\nmin=-1.45e+01 max= 1.46e+01 var= 1.00e+00 mean=-2.98e-03 (T5LayerNorm hidden_states before return)\r\nmin=-2.76e+05 max= 2.73e+05 var= 5.40e+07 mean= 3.93e+01 (T5Block before T5LayerFF)\r\nmin=-2.76e+05 max= 2.73e+05 var= 5.40e+07 mean= 3.93e+01 (T5LayerFF: 1)\r\nmin=-2.76e+05 max= 2.73e+05 var= 5.40e+07 mean= 3.93e+01 (T5LayerNorm)\r\nmin= 1.61e+06 max= 6.84e+08 var= 9.28e+15 mean= 5.40e+07 (T5LayerNorm variance)\r\nmin=-1.44e+01 max= 1.46e+01 var= 1.00e+00 mean=-5.14e-03 (T5LayerNorm hidden_states)\r\nmin=-1.44e+01 max= 1.46e+01 var= 1.00e+00 mean=-5.14e-03 (T5LayerNorm hidden_states before return)\r\nmin=-2.47e+00 max= 3.03e+00 var= 4.43e-02 mean=-8.23e-05 (T5LayerFF: 2)\r\nmin=-1.70e-01 max= 4.95e+01 var= 6.34e-01 mean= 3.00e-01 (gelu 1)\r\nmin=-3.70e+02 max= 3.93e+02 var= 3.79e+02 mean= 2.79e-01 (gelu 2)\r\nmin=-4.71e+03 max= 3.67e+03 var= 1.89e+03 mean=-3.80e-01 (gelu 3)\r\nmin=-5.23e+03 max= 4.08e+03 var= 2.21e+03 mean=-4.75e-01 (gelu 4)\r\nmin=-7.11e+04 max= 5.32e+04 var= 8.27e+06 mean=-1.36e+02 (gelu 5)\r\nmin=-7.11e+04 max= 5.32e+04 var= 8.27e+06 mean=-1.36e+02 (T5LayerFF: 3)\r\nmin=-2.61e+05 max= 2.68e+05 var= 4.41e+07 mean=-1.04e+02 (T5LayerFF: 5)\r\nmin=-2.61e+05 max= 2.68e+05 var= 4.41e+07 mean=-1.04e+02 (T5Block after T5LayerFF)\r\nmin=-2.61e+05 max= 2.68e+05 var= 4.41e+07 mean=-1.04e+02 (T5Stack loop end)\r\nmin=-2.61e+05 max= 2.68e+05 var= 4.41e+07 mean=-1.04e+02 (T5LayerNorm)\r\nmin= 2.99e+06 max= 6.12e+08 var= 5.65e+15 mean= 4.41e+07 (T5LayerNorm variance)\r\nmin=-1.45e+01 max= 1.62e+01 var= 1.00e+00 mean=-2.27e-02 (T5LayerNorm hidden_states)\r\nmin=-1.45e+01 max= 1.62e+01 var= 1.00e+00 mean=-2.27e-02 (T5LayerNorm hidden_states before return)\r\n```\r\n\r\nBecause `bfloat16` lacks precision - it trained itself to compensate for this by switching to the range of large numbers. If you look at the numbers above you can see that many of them are a way beyond fp16 range, which can only do `+-64K`. \r\n\r\nSo if I understand the nature of this problem correctly expecting this to work is a bit of fantasy. But of course, let's try to do our best to come as close to the solution as possible.\r\n\r\nI found that it's enough to cancel autocast just for `self.DenseReluDense` for the simple case to not produce NaN. ",
"@yuvalkirstain, let's switch the discussion to the actual PR\r\n\r\nwrt your newly discovered overflow.\r\n\r\nPlease try to add this penalizing for large logits:\r\n```\r\n@@ -1578,6 +1618,15 @@ class T5ForConditionalGeneration(T5PreTrainedModel):\r\n loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))\r\n # TODO(thom): Add z_loss https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L666\r\n\r\n+ # z_loss\r\n+ log_z = lm_logits.view(-1).logsumexp(-1)\r\n+ z_loss = 7e-5\r\n+ loss_extra = z_loss*log_z.square()\r\n+ #z_loss = 1e-5\r\n+ #loss_extra = z_loss*log_z.pow(3)\r\n+ #print(f\"loss={loss}, loss_extra={loss_extra}\")\r\n+ loss += loss_extra\r\n```\r\n\r\nMay need some tuning for `z_loss` factor for best convergence. The recommended one is 1e-4, so I've experimented with a few. Also tried the `pow(3)` instead of `pow(2)`.\r\n\r\nIt seem that the network gets the hint within just a 100 steps - `loss_extra` drops down very quickly.\r\n\r\nPerhaps this was the missing piece?\r\n",
"Here is the output of the proposed [overflow/underflow detector](https://github.com/huggingface/transformers/pull/11274) in progress tool for mt5. This is prior to any modifications proposed in this PR. So one can see the progression as the weights and activations change from forward to forward.\r\n\r\n```\r\nrm -rf output_dir; CUDA_VISIBLE_DEVICES=0 USE_TF=0 PYTHONPATH=src \\\r\npython examples/pytorch/translation/run_translation.py --model_name_or_path google/mt5-small --do_train \\\r\n--source_lang en --target_lang ro --dataset_name \\\r\nwmt16 --dataset_config_name ro-en --output_dir output_dir --per_device_train_batch_size=4 --logging_step 2 --save_steps 0 \\\r\n--fp16 --max_train_samples 10 --save_total_limit 0 --save_strategy no --debug underflow_overflow\r\n```\r\n\r\n```\r\nDetected inf/nan during batch_number=0\r\nLast 21 forward frames:\r\nabs min abs max metadata\r\n encoder.block.1.layer.1.DenseReluDense.dropout Dropout\r\n0.00e+00 2.57e+02 input[0]\r\n0.00e+00 2.85e+02 output\r\n encoder.block.1.layer.1.DenseReluDense.wo Linear\r\n4.80e-06 8.62e+00 weight\r\n0.00e+00 2.85e+02 input[0]\r\n8.50e-05 1.53e+03 output\r\n encoder.block.1.layer.1.DenseReluDense T5DenseGatedGeluDense\r\n0.00e+00 2.04e+00 input[0]\r\n8.50e-05 1.53e+03 output\r\n encoder.block.1.layer.1.dropout Dropout\r\n8.50e-05 1.53e+03 input[0]\r\n0.00e+00 1.70e+03 output\r\n encoder.block.1.layer.1 T5LayerFF\r\n0.00e+00 1.50e+03 input[0]\r\n6.78e-04 3.15e+03 output\r\n encoder.block.1 T5Block\r\n0.00e+00 1.40e+03 input[0]\r\n6.78e-04 3.15e+03 output[0]\r\n None output[1]\r\n2.25e-01 1.00e+04 output[2]\r\n encoder.block.2.layer.0.layer_norm T5LayerNorm\r\n6.54e-02 2.75e-01 weight\r\n6.78e-04 3.15e+03 input[0]\r\n5.75e-06 2.12e+00 output\r\n encoder.block.2.layer.0.SelfAttention.q Linear\r\n3.75e-08 3.40e-01 weight\r\n5.75e-06 2.12e+00 input[0]\r\n2.21e-06 1.20e+00 output\r\n encoder.block.2.layer.0.SelfAttention.k Linear\r\n4.84e-08 2.62e+00 weight\r\n5.75e-06 2.12e+00 input[0]\r\n5.47e-05 1.40e+01 output\r\n encoder.block.2.layer.0.SelfAttention.v Linear\r\n7.21e-06 2.59e+00 weight\r\n5.75e-06 2.12e+00 input[0]\r\n1.20e-04 7.56e+00 output\r\n encoder.block.2.layer.0.SelfAttention.o Linear\r\n6.65e-06 1.44e+01 weight\r\n0.00e+00 5.30e+00 input[0]\r\n5.20e-04 2.66e+02 output\r\n encoder.block.2.layer.0.SelfAttention T5Attention\r\n5.75e-06 2.12e+00 input[0]\r\n5.20e-04 2.66e+02 output[0]\r\n None output[1]\r\n2.25e-01 1.00e+04 output[2]\r\n encoder.block.2.layer.0.dropout Dropout\r\n5.20e-04 2.66e+02 input[0]\r\n0.00e+00 2.96e+02 output\r\n encoder.block.2.layer.0 T5LayerSelfAttention\r\n6.78e-04 3.15e+03 input[0]\r\n2.65e-04 3.42e+03 output[0]\r\n None output[1]\r\n2.25e-01 1.00e+04 output[2]\r\n encoder.block.2.layer.1.layer_norm T5LayerNorm\r\n8.69e-02 4.18e-01 weight\r\n2.65e-04 3.42e+03 input[0]\r\n1.79e-06 4.65e+00 output\r\n encoder.block.2.layer.1.DenseReluDense.wi_0 Linear\r\n2.17e-07 4.50e+00 weight\r\n1.79e-06 4.65e+00 input[0]\r\n2.68e-06 3.70e+01 output\r\n encoder.block.2.layer.1.DenseReluDense.wi_1 Linear\r\n8.08e-07 2.66e+01 weight\r\n1.79e-06 4.65e+00 input[0]\r\n1.27e-04 2.37e+02 output\r\n encoder.block.2.layer.1.DenseReluDense.dropout Dropout\r\n0.00e+00 8.76e+03 input[0]\r\n0.00e+00 9.74e+03 output\r\n encoder.block.2.layer.1.DenseReluDense.wo Linear\r\n1.01e-06 6.44e+00 weight\r\n0.00e+00 9.74e+03 input[0]\r\n3.18e-04 6.27e+04 output\r\n encoder.block.2.layer.1.DenseReluDense T5DenseGatedGeluDense\r\n1.79e-06 4.65e+00 input[0]\r\n3.18e-04 6.27e+04 output\r\n encoder.block.2.layer.1.dropout 
Dropout\r\n3.18e-04 6.27e+04 input[0]\r\n0.00e+00 inf output\r\n```",
"Hi there, I'm wondering what the current status of this is, as my team would benefit from a fix to fp16 issue with large T5 models. And is there anything we could do to help to move the PR along? \r\n\r\nIn the mean time, it should be sufficient to simply disable autocast for the DenseReluDense, correct?",
"> Hi there, I'm wondering what the current status of this is, as my team would benefit from a fix to fp16 issue with large T5 models. And is there anything we could do to help to move the PR along?\r\n\r\n@yuvalkirstain, who is one of the original reporters mentioned elsewhere that he still had an issue during the long training, so I was waiting for him to provide more details.\r\n\r\n> In the mean time, it should be sufficient to simply disable autocast for the DenseReluDense, correct?\r\n\r\nIf you're not using deepspeed, then yes, that is all that is needed. At least for the tests I have done. But they weren't long.\r\n\r\nPerhaps you could test this PR and report back if it solves your problem?\r\n\r\nI'm not sure if I should remove the clamping or not.\r\n\r\nI cleaned up the PR to remove all the debug noise, so it's very simple now.",
"Hi, I ran some experiments and it appears to me that this branch does fix the inf/nan issue for both T5-large and T5-3b--I trained both models for 10,000 steps on a language modeling task and never had the NaN loss issue I was having before. However, as far as I can tell the fix comes at a large cost in time and memory usage. \r\n\r\nUsing t5-large on an A6000 card (48 GB), I found:\r\n\r\n- no fp16: 25.00 GB, 3.06 iters/s\r\n- fp16 without the fix from this branch: 15.01 GB, 4.10 iters/s [but loss was `NaN`]\r\n- fp16 with the fix from this branch: 23.99 GB, 2.90 iters/s\r\n\r\n(collected using the `torch.autograd.profiler` tool)\r\n\r\nIn other words, fp16 with this fix uses about 1.6x more memory than before.\r\n\r\nDisclaimer: the experiments I ran were using an LM task that's internal to my team, so you won't be able to replicate it exactly. But I wanted to report back anyway since it's been a few days. However, in the next few days I'd like to repeat these experiments using one of the HF example scripts so that you can verify by running the exact same code.",
"That's a fantastic feedback, @dblakely - Thank you! Looking forward to seeing the stats on non-custom code.\r\n\r\nIt's interesting that you get even slower results than full fp32. But since you're no A6000 you're probably running on tf32 automatically if you're on the recent pytorch, that would explain it.\r\n\r\nAnd I trust you're not using the overflow detector which would add to the slowdown a bit.\r\n\r\nBTW, apparently there is a new `torch.profiler` tool - I haven't tried it yet.\r\n\r\nAlso earlier I wrote:\r\n\r\n> I found that it's enough to cancel autocast just for `self.DenseReluDense` for the simple case to not produce NaN.\r\n\r\nSo you might want to try this slightly tighter version:\r\n\r\n```\r\nclass T5LayerFF(nn.Module):\r\n def __init__(self, config):\r\n super().__init__()\r\n if config.feed_forward_proj == \"relu\":\r\n self.DenseReluDense = T5DenseReluDense(config)\r\n elif config.feed_forward_proj == \"gated-gelu\":\r\n self.DenseReluDense = T5DenseGatedGeluDense(config)\r\n else:\r\n raise ValueError(\r\n f\"{self.config.feed_forward_proj} is not supported. Choose between `relu` and `gated-gelu`\"\r\n )\r\n\r\n self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon)\r\n self.dropout = nn.Dropout(config.dropout_rate)\r\n\r\n def forward(self, hidden_states):\r\n forwarded_states = self.layer_norm(hidden_states)\r\n if torch.is_autocast_enabled():\r\n with torch.cuda.amp.autocast(enabled=False):\r\n forwarded_states = self.DenseReluDense(forwarded_states)\r\n else:\r\n forwarded_states = self.DenseReluDense(forwarded_states)\r\n hidden_states = hidden_states + self.dropout(forwarded_states)\r\n return hidden_states\r\n```\r\n\r\nbut given that the bulk of everything comes from `DenseReluDense` it probably won't make much of a difference speed and memory requirements-wise.\r\n",
"Hi all,\r\n\r\nIf `bf16` is what's native to these models, how about we do `autocast` with `bf16` instead of `fp16` (and then don't scale)? There is a pull request [here](https://github.com/pytorch/pytorch/issues/55374) to add a `bf16` option to autocast.",
"That would be the best solution assuming you have high-end Ampere GPUs which support bf16 natively. (rtx-3090, a100, ...). So once this is finalized in pytorch we will support it in the HF trainer as well. \r\n\r\nIf you have been actively watching that development please kindly ping us when it's completed in pytorch. Thank you.",
"We use rtx a6000s, so I believe we are ok on that front. I'll monitor the aforementioned PR and keep you updated\r\n\r\nUPDATE: the fix has migrated to [this pr](https://github.com/pytorch/pytorch/pull/61002)",
"The [torch pr](https://github.com/pytorch/pytorch/pull/61002) is almost through, so I'm coming back to this. Would the ensuing pr here be as simple as changing autocast in t5 to the bf16 option?",
"Thank you for keeping on top of torch's side, @JamesDeAntonis \r\n\r\nNo, we will have to rework the HF Trainer to support bf16. The model doesn't need to be changed.\r\n\r\nMy recommendation is to wait till that PR lands in pt-nightly so we have something to test with. And then we can work on having bf16 support in the trainer.\r\n\r\nIf you're not using the HF Trainer, then you can do it independently by wrapping the training step in the new autocast.",
"Sounds good, thanks for the quick response! I'll continue to watch the pr.\r\n\r\nWe do indeed use the HF Trainer, so I'll probably be active on the HF pr as well.",
"It looks like the PR was just merged in torch! I think the ball is now in our court once the nightly build hits (so I think starting tomorrow)",
"Awesome! Thank you for keeping us abreast of this development, @JamesDeAntonis.\r\n\r\nThis is a month of August and most team members are on vacation at the moment, so this might take longer than normal.\r\n\r\nMy plate is very full at the moment, so unless someone beats me to it, I probably won't have any time in the next few weeks to work on this.\r\n\r\nBut, first, please create a new Issue and tag me there, so that we have an easy way to track this feature request.\r\n\r\nSecond, if one of you would like to work on the PR to integrate bf16 that would be great. I think the change itself should be relatively simple, add a new CLI arg `--bf16` and set amp to bf16 instead of fp16 in trainer.py. We may have to deprecate `fp16_backend` and rename it to something more generic, but just doing the above is a good start. The devil is in the detail though, so it may take longer to figure out.",
"Hi, after reading all the replies and related issues in torch and transformers, I still don't know how to fix the nan problem. I get `loss=nan` on every simple example using mt5-base.\r\n\r\n```\r\nfrom transformers import AutoConfig, AutoTokenizer, AutoModelForSeq2SeqLM, AutoModel\r\nimport torch\r\ndevice = 'cuda'\r\ntokenizer = AutoTokenizer.from_pretrained('google/mt5-base')\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('google/mt5-base')\r\nmodel.to(device)\r\nscaler = torch.cuda.amp.GradScaler()\r\ntoks = tokenizer(['Je vous invite Γ vous lever pour cette minute de silence.',\"Please rise, then, for this minute' s silence.\"], return_tensors='pt', padding='max_length',max_length=512, truncation=True).to(device)\r\ninputs = {'input_ids': toks['input_ids'][0:1], 'attention_mask': toks['attention_mask'][0:1], 'labels': toks['input_ids'][1:], 'output_hidden_states':True}\r\nwith torch.cuda.amp.autocast():\r\n outputs = model(**inputs)\r\n loss = outputs.loss\r\nscaler.scale(loss).backward()\r\nloss.item()\r\n\r\n> nan\r\n\r\n```\r\ntorch=1.8.1+cu101\r\ntransformers=4.9.2\r\nany help would be much appreciated!",
">Hi, after reading all the replies and related issues in torch and transformers, I still don't know how to fix the nan problem. I get `loss=nan` on every simple example using mt5-base.\r\n\r\nIf you want to use `bf16`, you need to include the `fast_dtype` as seen [here](https://github.com/pytorch/pytorch/blob/master/torch/autocast_mode.py#L128) (I think)",
"> > Hi, after reading all the replies and related issues in torch and transformers, I still don't know how to fix the nan problem. I get `loss=nan` on every simple example using mt5-base.\r\n> \r\n> If you want to use `bf16`, you need to include the `fast_dtype` as seen [here](https://github.com/pytorch/pytorch/blob/master/torch/autocast_mode.py#L128) (I think)\r\n\r\nThank you very much. I switch to pt nightly build 1.10.0+cu111 and test the `fast_dtype =torch.bfloat16` in `amp`. It seems that cuda device does not support bfloat16. \r\n\r\n```\r\nRuntimeError: Current CUDA Device does not support bfloat16. Switching fast_dtype to float16.\r\n```\r\nI tried Tesla M40, GTX TITAN X and Quadro RTX 8000, same error.\r\n",
"> I tried Tesla M40, GTX TITAN X and Quadro RTX 8000, same error.\r\n\r\nFor bf16 you want the high end Ampere cards https://en.wikipedia.org/wiki/Ampere_(microarchitecture)#Products_using_Ampere - so 3090 and below on that list.",
"this PR isn't merged yet? :( is the issue resolved?",
"Well, it introduces a small slowdown as it forces one FF in fp32 under mixed precision, so I wasn't sure whether this solves the problem for everybody. Or whether this should be configurable. Some users reported that it solved their problem, other that it didn't.\r\n\r\nAdditionally I proposed in this PR \r\n- ~to remove the clamping, but got no feedback whether it's safe to do. https://github.com/huggingface/transformers/pull/10956/files/1ddec2c860617230a5171f3a95be74d27f4c8e9d#r603663224~ (Patrick suggested to keep it, so I restored that part)\r\n- to add an additional penalty to lm loss as described in the original codebase (it's not in PR, but the code is in the OP), which would stir the finetuning into the direction of fp16 weights. Perhaps it should be only added for when autocast is detected? But then bf16 is imminent, so probably need to find a way to check the autocast dtype is fp16? (and the dtype was introduced in pt-1.10 only)\r\n\r\nHere is a possible plan of action:\r\n\r\n1. leave everything as is and just have this PR add the FF override in fp32\r\n2. ~discuss clamping and keep or remove it~ (Patrick suggested to keep it, so I restored that part)\r\n3. discuss large weight lm loss penalty factor and add the code in\r\n",
"This PR forces T5 FF Layer in fp32. With this change, there is almost no benefit to training in fp16.\r\nThe memory usage and training speed improvements are very limited. ",
"@Liangtaiwan, also you may to try the loss penalty factor. The patch to apply (instead of this PR) is in the OP.",
"Hi everyone, I am testing a method of adjusting the T5 weights for FP16 training and so far it's promising. However, I would like to see if there is a way to \"validate\" how much of the model performance is still retained for both pre-trained and fine-tuned tasks.\r\n\r\nThe TLDR is: we scale the weights down, for as few parameters and as little as possible, until the model can be trained without NaN. Basically, to perform the minimum amount of \"surgery\" on the weights.\r\n\r\nCurrently, I am reducing about 2-3% of parameters in the model by a factor of 2 only and seeing some good initial results. These parameters are in the feed-forward layers in the encoder only. The resulting model still seems to work on existing tasks and I can fine-tune T5-large just fine in FP16 on my own task, where previously it would NaN. So far, nothing seems to be wrong with the outputs, and I have not encountered NaN. \r\n\r\nFor example of a converted model: https://github.com/tlkh/t5-fp16-surgery/blob/main/t5-large.ipynb\r\n\r\nI have uploaded the converted models for people to play with: \r\n\r\n* [`tlkh/t5_large_fp16_untuned`](https://huggingface.co/tlkh/t5_large_fp16_untuned)\r\n* [`tlkh/t5_3B_fp16_untuned`](https://huggingface.co/tlkh/t5_3B_fp16_untuned)\r\n\r\nNote: for the 3B model, after the conversion, the pre-trained translation task seems to be more unstable, but given it can still generate coherent text, my hunch is that after fine-tuning on another task, it should have negligible difference. However, it would be great to know for sure, so wonder if there is some kind of benchmark suite we can try.\r\n\r\nI do not have the resources to convert the 11B one, but I do not see why that wouldn't work similarly. It is also very quick to convert models. \r\n\r\nGitHub repo to demo/show code for the conversion, and also included inference testing to show the model seems to be working fine: https://github.com/tlkh/t5-fp16-surgery\r\n\r\nMy hopeful outcome from this is that we can fine-tune T5 in FP16 without any real penalty. ",
"I'm trying to use this for T5-3B with A100. Is bf16 available experimentally? \r\n\r\nBTW, I'm heading towards this direction because the `fairscale sharded_ddp` option for some reason hangs when it should run evaluation. Any pointers to solve this issue as well?",
"> I'm trying to use this for T5-3B with A100. Is bf16 available experimentally?\r\n\r\nThere is a WIP PR: https://github.com/huggingface/transformers/pull/13207\r\n\r\n> BTW, I'm heading towards this direction because the `fairscale sharded_ddp` option for some reason hangs when it should run evaluation. Any pointers to solve this issue as well?\r\n\r\nUse deepspeed: https://huggingface.co/transformers/master/main_classes/deepspeed.html#deepspeed-trainer-integration\r\n",
"> @Liangtaiwan, also you may to try the loss penalty factor. The patch to apply (instead of this PR) is in the OP.\r\n\r\n@stas00 Could you point out where is the patch or the PR? \r\n\r\n",
"https://github.com/huggingface/transformers/pull/10956 Scroll down to \"Penalizing large activation\""
] | 1,617 | 1,694 | null | CONTRIBUTOR | null | As reported in multiple issues t5/mt5 models produce loss of `nan` under mixed precision training, starting with t5-large and mt5-small and up. This PR is an attempt to fix this issue. This is crucial for DeepSpeed where it's always mixed precision training.
I spent some time with the debugger and the new `detect_overflow` helper util (added in this PR) and discovered that the best place to fix the whole problem is to not `T5LayerFF` in mixed precision. This slightly slows things down/consumes more gpu memory, but no longer requires clamping and running after ever overflowing `hidden_states`.
This PR:
* turns `autocast` off during `T5LayerFF` if run under amp
* removes the previous attempt to clamp the values as it now works without it
* introduces `debug_utils.py` with a helper function `detect_overflow` which is super-handy for tracking overflows automatically (as it's silent if all goes well). It also has some extra features, such as reporting a number of large elements - disabled by default.
Important:
* The fix is only for pytorch built-in amp. apex still has this problem since I haven't researched whether the same could be done there, but it's probably a waste of time since apex is being phased out. And deepspeed doesn't use amp, so it's still affected.
## Variations
Other possible variations to this solution:
1. to do the `autocast` disabling dynamically. That is, trying with `autocast`, checking if any elements of the output are `inf` (not sure of the overhead), re-running this layer in full fp32 and setting a flag to continue in fp32 from then on. Here the main price will be paid by models that don't need this workaround, but they will gain by not having `autocast` turned off - so it might still be a beneficial solution for all
2. give users a switch to turn this feature on if they discover they need it - or have it on by default and allow users to turn it off if they "know what they are doing".
I am suggesting this since I don't know if all t5/mt5 models are impacted. Definitely t5-small doesn't need this.
## Penalizing large activation
See the details comment: https://github.com/huggingface/transformers/pull/10956#issuecomment-820712267
```
@@ -1578,6 +1618,15 @@ class T5ForConditionalGeneration(T5PreTrainedModel):
loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))
# TODO(thom): Add z_loss https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L666
+ # z_loss
+ log_z = lm_logits.view(-1).logsumexp(-1)
+ z_loss = 7e-5
+ loss_extra = z_loss*log_z.square()
+ #z_loss = 1e-5
+ #loss_extra = z_loss*log_z.pow(3)
+ #print(f"loss={loss}, loss_extra={loss_extra}")
+ loss += loss_extra
```
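For reference, here is a minimal standalone sketch of the same penalty (not part of the patch above). Note that the reference mesh-tensorflow implementation appears to apply the penalty per token, so a per-token variant is shown; the `z_loss` factor is a tuning knob:

```python
# hypothetical per-token variant of the z_loss penalty above
log_z = lm_logits.view(-1, lm_logits.size(-1)).logsumexp(dim=-1)  # one log-partition per token
z_loss = 7e-5  # needs tuning; 1e-4 is the value recommended upstream
loss = loss + z_loss * log_z.square().mean()
```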
## Questions:
* If this solution solves the problem at large and is accepted then we probably should document somewhere in t5/mt5 docs that it won't run AMP 100%?
* Test is needed: any suggestions to how we could write a test that is not too big and still gets nans prior to this PR? `t5-small` and `t5-base` don't have this problem (at least with a small sample), in my experiments the first model that gets `inf/nan` on the first batch is `mt5-small` (1.2GB), so my minimal test is:
```
rm -rf output_dir; CUDA_VISIBLE_DEVICES=0 USE_TF=0 PYTHONPATH=src python examples/seq2seq/run_translation.py \
--model_name_or_path google/mt5-small --do_train --source_lang en --target_lang ro --dataset_name wmt16 \
--dataset_config_name ro-en --output_dir output_dir --per_device_train_batch_size=4 --logging_step 2 --save_steps 0 \
--fp16 --max_train_samples 10 --save_total_limit 0 --save_strategy no
```
We can then run this as a test and check for `nan` in loss reports.
But the 1.2GB download is somewhat big even for `@slow` tests.
**edit**: @LysandreJik says it's not a problem since we are now caching the models on the test machine.
If it is OK I will just stick this with all the extended tests under `examples/tests/trainer/test_trainer_ext.py`, where we have a setup for this type of full application-based test.
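A rough sketch of what such a test could look like (`run_trainer` is a hypothetical helper that wraps the command above and returns the parsed log history; the real extended tests follow a similar pattern):

```python
import math

from transformers.testing_utils import TestCasePlus, slow


class TestTrainerFp16(TestCasePlus):
    @slow
    def test_fp16_no_nan_loss(self):
        # hypothetical: runs run_translation.py with --fp16 --max_train_samples 10
        # on google/mt5-small and returns the logged metrics
        logs = self.run_trainer(model="google/mt5-small", fp16=True, max_train_samples=10)
        losses = [log["loss"] for log in logs if "loss" in log]
        assert losses, "no loss was logged"
        assert all(not math.isnan(x) for x in losses), f"got nan in losses: {losses}"
```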
* I also know some users mentioned that `inf` may happen much later in the game. I haven't run very long tests.
TODO:
* [ ] I left all the debug prints in place so that you could experiment with it easily - will remove when this is approved to be a good change
Related discussions:
- https://discuss.pytorch.org/t/bfloat16-transformers/96260 pegasus is affected too
Fixes: https://github.com/huggingface/transformers/issues/10830
Fixes: https://github.com/huggingface/transformers/issues/10819
@patrickvonplaten, @patil-suraj, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10956/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10956/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10956",
"html_url": "https://github.com/huggingface/transformers/pull/10956",
"diff_url": "https://github.com/huggingface/transformers/pull/10956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10956.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10955 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10955/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10955/comments | https://api.github.com/repos/huggingface/transformers/issues/10955/events | https://github.com/huggingface/transformers/issues/10955 | 843,821,956 | MDU6SXNzdWU4NDM4MjE5NTY= | 10,955 | Input gets lost when converting mBART decoder to onnx | {
"login": "tobigue",
"id": 1560152,
"node_id": "MDQ6VXNlcjE1NjAxNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1560152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tobigue",
"html_url": "https://github.com/tobigue",
"followers_url": "https://api.github.com/users/tobigue/followers",
"following_url": "https://api.github.com/users/tobigue/following{/other_user}",
"gists_url": "https://api.github.com/users/tobigue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tobigue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tobigue/subscriptions",
"organizations_url": "https://api.github.com/users/tobigue/orgs",
"repos_url": "https://api.github.com/users/tobigue/repos",
"events_url": "https://api.github.com/users/tobigue/events{/privacy}",
"received_events_url": "https://api.github.com/users/tobigue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It seems that the `encoder_hidden_states` are passed as `key_value_states` into the `MBartAttention` down the line and are not used in case `past_key_value` is given.\r\nhttps://github.com/huggingface/transformers/blob/90ecc29656ce37fdbe7279cf586511ed678c0cb7/src/transformers/models/mbart/modeling_mbart.py#L183\r\nIn that case I guess it's expected that they are not in the graph of the decoder, so I'll see how I can work around that when converting to ONNX."
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | I'm trying to convert the mBART decoder to onnx and have the problem, that one of the inputs gets lost during the conversion, which leads to errors when trying to use the onnx model. (See code example below.)
I'm trying to understand why this is the case and how to circumvent this.
Thanks alot for any help!
## Environment info
- `transformers` version: 4.4.2
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Who can help
@mfuntowicz @patil-suraj @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): mBART
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
If you run the code below, you should see the following print output:
```
['input_ids', 'encoder_attention_mask', 'encoder_hidden_states']
```
Now, if we uncomment the commented line in `DecoderWithLMhead.forward` to pass the `past_key_values` to the decoder and run the code again, the additional inputs will be added, but `encoder_hidden_states` will no longer be present as an input.
If we run `torch.onnx.export` with `verbose=True`, `encoder_hidden_states` does not seem to be part of the graph. Is there a condition in the mBART decoder implementation that excludes `encoder_hidden_states` from the graph when `past_key_values` is given to the decoder?
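For context, the behaviour seems to come from the cross-attention caching path in `MBartAttention` (a simplified paraphrase, not a verbatim copy of the library code):

```python
# inside MBartAttention.forward (paraphrased)
is_cross_attention = key_value_states is not None  # encoder_hidden_states arrive here
if is_cross_attention and past_key_value is not None:
    # the cached cross-attention keys/values are reused, so key_value_states
    # (i.e. encoder_hidden_states) never enters the computation - and therefore
    # never enters the traced ONNX graph
    key_states = past_key_value[0]
    value_states = past_key_value[1]
```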
Code to reproduce the issue (adapted from [FastT5](https://github.com/Ki6an/fastT5/blob/master/fastT5/onnx_exporter.py)):
```python
import functools
import operator
import os
import tempfile
from transformers import AutoTokenizer, MBartForConditionalGeneration, AutoConfig
from onnxruntime import InferenceSession
import torch
model_or_model_path = 'facebook/mbart-large-cc25'
model = MBartForConditionalGeneration.from_pretrained(model_or_model_path)
model_config = AutoConfig.from_pretrained(model_or_model_path)
class DecoderWithLMhead(torch.nn.Module):
def __init__(self, decoder, lm_head, config):
super().__init__()
self.decoder = decoder
self.lm_head = lm_head
self.config = config
def forward(self, *inputs):
input_ids, attention_mask, encoder_hidden_states = inputs[:3]
list_pkv = inputs[3:]
past_key_values = tuple(list_pkv[i : i + 4] for i in range(0, len(list_pkv), 4))
decoder_output = self.decoder(
input_ids=input_ids,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=attention_mask,
# past_key_values=past_key_values,
)
lm_head_out = self.lm_head(decoder_output[0] * (self.config.d_model ** -0.5))
return lm_head_out, decoder_output[1]
decoder_with_lm_head = DecoderWithLMhead(
decoder=model.get_decoder(),
lm_head=model.get_output_embeddings(),
config=model_config
)
batch_size = 5
sequence_length = 10
input_ids_dec = torch.ones((batch_size, 1), dtype=torch.int64)
attention_mask_dec = torch.ones((batch_size, sequence_length), dtype=torch.int64)
enc_out = torch.ones(
(batch_size, sequence_length, model_config.d_model), dtype=torch.float32
)
head_dim = model_config.d_model // model_config.encoder_attention_heads
a = torch.ones((batch_size, model_config.decoder_attention_heads, sequence_length, head_dim), dtype=torch.float32)
attention_block = (a, a, a, a)
past_key_values = (attention_block,) * model_config.decoder_layers
flat_past_key_values = functools.reduce(operator.iconcat, past_key_values, [])
decoder_all_inputs = tuple(
[input_ids_dec, attention_mask_dec, enc_out] + flat_past_key_values
)
num_of_inputs = 4 * model_config.decoder_layers
with torch.no_grad():
decoder_inputs = [
"input_ids",
"encoder_attention_mask",
"encoder_hidden_states",
]
pkv_input_names = ["input_{}".format(i) for i in range(0, num_of_inputs)]
decoder_input_names = decoder_inputs + pkv_input_names
decoder_output_names = ["logits", "output_past_key_values"]
dyn_axis = {
"input_ids": {0: "batch", 1: "sequence"},
"encoder_attention_mask": {0: "batch", 1: "sequence"},
"encoder_hidden_states": {0: "batch", 1: "sequence"},
"logits": {0: "batch", 1: "sequence"},
"output_past_key_values": {0: "batch", 1: "sequence"},
}
dyn_pkv = {
"input_{}".format(i): {0: "batch", 1: "n_head", 2: "seq_length", 3: "d_kv"}
for i in range(0, num_of_inputs)
}
dyn_axis_params = {**dyn_axis, **dyn_pkv}
temp_dir = tempfile.TemporaryDirectory()
onnx_output_path = os.path.join(temp_dir.name, "decoder.onnx")
torch.onnx.export(
decoder_with_lm_head,
decoder_all_inputs,
onnx_output_path,
export_params=True,
do_constant_folding=True,
opset_version=12,
input_names=decoder_input_names,
output_names=decoder_output_names,
dynamic_axes=dyn_axis_params,
use_external_data_format=True,
)
session = InferenceSession(onnx_output_path)
print(list(map(lambda x: x.name, session.get_inputs()))) # encoder_hidden_states should be in here
temp_dir.cleanup()
```
## Expected behavior
All inputs passed to the onnx export function are present in the created onnx model.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10955/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10954 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10954/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10954/comments | https://api.github.com/repos/huggingface/transformers/issues/10954/events | https://github.com/huggingface/transformers/pull/10954 | 843,812,632 | MDExOlB1bGxSZXF1ZXN0NjAzMTQ1Njcy | 10,954 | [vulnerability] dep fix | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh, is this process somehow automated and I didn't know about it? All I see is the file which it complains about and no suggestion to automate the fix.",
"Yeah, but usually it's not as good as your suggestion as it offers `==` while you offer `>=`.\r\n\r\nI think I mentioned it here: https://github.com/huggingface/transformers/pull/10817\r\n\r\nBut thank you nonetheless, these are helpful!",
"Hmm, I have been copying exactly what the vulnerability bot suggested - which is always `>=` - so it's probably the dependabot that could use a bit of an update to match the vulnerability report.\r\n\r\nBut it's good to know that this is already automated, I will know not to make a PR next time."
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | Fixes https://github.com/huggingface/transformers/security/dependabot/examples/research_projects/lxmert/requirements.txt/Pygments/open
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10954/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10954",
"html_url": "https://github.com/huggingface/transformers/pull/10954",
"diff_url": "https://github.com/huggingface/transformers/pull/10954.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10954.patch",
"merged_at": 1617053147000
} |
https://api.github.com/repos/huggingface/transformers/issues/10953 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10953/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10953/comments | https://api.github.com/repos/huggingface/transformers/issues/10953/events | https://github.com/huggingface/transformers/pull/10953 | 843,713,432 | MDExOlB1bGxSZXF1ZXN0NjAzMDU5MjIy | 10,953 | Use pre-computed lengths, if available, when grouping by length | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | MEMBER | null | # What does this PR do?
A new argument `length_column_name` has been added to
`TrainingArguments`, with default value `"length"`. If this column
exists and `group_by_length` is `True`, the train sampler will use
it for grouping rather than computing all lengths before training starts.
This is an optimization that allows the user to prepare data for fast
processing, preventing sequential access to the dataset as described in
issue #10909.
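A minimal usage sketch (the dataset mapping and column name are illustrative):
```python
from transformers import TrainingArguments

# assume each example already stores its token length, e.g.
# dataset = dataset.map(lambda ex: {"length": len(ex["input_ids"])})
args = TrainingArguments(
    output_dir="out",
    group_by_length=True,          # enable length-grouped batching
    length_column_name="length",   # use the pre-computed column instead of recomputing lengths
)
```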
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [Discussion](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6), related issue #10909.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger, this is what we discussed during the fine-tuning week. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10953/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10953",
"html_url": "https://github.com/huggingface/transformers/pull/10953",
"diff_url": "https://github.com/huggingface/transformers/pull/10953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10953.patch",
"merged_at": 1617047059000
} |
https://api.github.com/repos/huggingface/transformers/issues/10952 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10952/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10952/comments | https://api.github.com/repos/huggingface/transformers/issues/10952/events | https://github.com/huggingface/transformers/issues/10952 | 843,601,571 | MDU6SXNzdWU4NDM2MDE1NzE= | 10,952 | [Trainer] possible DDP memory regression | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I don't have a setup with 8Gb so I have to rely on nvidia-smi numbers. First command is 13.2Gb on GPU0, 6.5Gb on GPU1, second command is 11.2GB on GPU0 and 10.1GB on GPU1.",
"Thank you for the sanity check, @sgugger \r\n\r\nThis is very odd that we get such a discrepancy in memory allocation between the 2 gpus on DP! 2x gpu ram on card0.\r\n\r\nBut this explains why it works for me since I have precisely 24gb + 8gb, so this discrepancy fits just right. So it's unclear if it's a problem in DP or DDP.\r\n\r\nI will investigate.",
"With DP the gradients and optimizer states are only on one GPU, I think that is why we have the big difference. With DDP they are copied over the two.",
"Oh wow, that's a huge difference. Clearly DP wins here for those with lopsided setups like mine! \r\n\r\nOK, then it's by design then. Closing this.",
"This is a bit of a problem with our memory metrics reporting as we only report gpu0, but I guess since most users will have symmetrical setups (cards of the same size) and gpu0 consumes the biggest amount of memory in DP/DDP then it's OK.\r\n\r\nWill have to think how to extend the metrics for setups where it's critical to know each gpu's allocations - e.g. pipeline or model parallel."
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | I think we may have created a memory regression somewhere recently.
I tried with pt-1.7 and pt-1.8 with the same results.
memory limit on this setup is 8gb
on `transformers` master:
This takes about 5.5GB/gpu:
```
PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python examples/seq2seq/run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/test --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --logging_step 10
```
(no need to run more than a few secs, we are just trying to see that the job can start training)
switching to DDP immediately OOMs:
```
PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/run_translation.py --model_name_or_path google/mt5-small --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/test --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --logging_step 10
```
even if I reduce the bs from 4 to 1 it still goes over 8GB.
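For anyone reproducing this, a minimal sketch for comparing per-process peaks (assumes CUDA; `max_memory_allocated` reports the peak for the local device):
```python
import torch
import torch.distributed as dist

# print the peak allocation seen by this process after a few training steps
rank = dist.get_rank() if dist.is_available() and dist.is_initialized() else 0
print(f"rank {rank}: peak {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```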
@sgugger, could you please confirm if you're seeing the same?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10952/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10951 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10951/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10951/comments | https://api.github.com/repos/huggingface/transformers/issues/10951/events | https://github.com/huggingface/transformers/pull/10951 | 843,560,351 | MDExOlB1bGxSZXF1ZXN0NjAyOTI0MjEx | 10,951 | Fixes in the templates | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,617 | 1,617 | 1,617 | COLLABORATOR | null | # What does this PR do?
Fixes a few things I noticed from new models PR in the templates directly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10951/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10951",
"html_url": "https://github.com/huggingface/transformers/pull/10951",
"diff_url": "https://github.com/huggingface/transformers/pull/10951.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10951.patch",
"merged_at": 1617053774000
} |
https://api.github.com/repos/huggingface/transformers/issues/10950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10950/comments | https://api.github.com/repos/huggingface/transformers/issues/10950/events | https://github.com/huggingface/transformers/pull/10950 | 843,333,922 | MDExOlB1bGxSZXF1ZXN0NjAyNzMxNTcy | 10,950 | Add Vision Transformer and ViTFeatureExtractor | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've addressed all comments. The pooler now more closely matches the one of `BertModel`. \r\n\r\nOnly `make fix-copies` is giving an error on CircleCI for now. Other than that the PR is ready.\r\n\r\n",
"Thanks for all your work on this @NielsRogge !"
] | 1,617 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
Opening a new PR based on #10513 that uses @sgugger's new `image_utils.py` instead of `torchvision` for the image transformations and is up-to-date with master.
Things to do:
- [x] fix one integration test (currently `ViTFeatureExtractor` converts the numpy arrays into DoubleTensors, but the model expects FloatTensors)
- [x] fix styling (`make style` is not working as expected on my machine, see remaining comments in previous PR)
- [x] perhaps change pooler logic? Design (and updated conversion script) currently at branch "add_pooler_to_vit"
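For reviewers, a minimal sketch of the intended API once this lands (the checkpoint name is illustrative until the weights are published on the hub):
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "google/vit-base-patch16-224" is a placeholder model identifier.
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(-1).item()
```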
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10950/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10950/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10950",
"html_url": "https://github.com/huggingface/transformers/pull/10950",
"diff_url": "https://github.com/huggingface/transformers/pull/10950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10950.patch",
"merged_at": 1617290165000
} |
https://api.github.com/repos/huggingface/transformers/issues/10949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10949/comments | https://api.github.com/repos/huggingface/transformers/issues/10949/events | https://github.com/huggingface/transformers/issues/10949 | 843,307,960 | MDU6SXNzdWU4NDMzMDc5NjA= | 10,949 | How to freeze Camembert model for Classification tasks? | {
"login": "siwarBM",
"id": 53350981,
"node_id": "MDQ6VXNlcjUzMzUwOTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/53350981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siwarBM",
"html_url": "https://github.com/siwarBM",
"followers_url": "https://api.github.com/users/siwarBM/followers",
"following_url": "https://api.github.com/users/siwarBM/following{/other_user}",
"gists_url": "https://api.github.com/users/siwarBM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siwarBM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siwarBM/subscriptions",
"organizations_url": "https://api.github.com/users/siwarBM/orgs",
"repos_url": "https://api.github.com/users/siwarBM/repos",
"events_url": "https://api.github.com/users/siwarBM/events{/privacy}",
"received_events_url": "https://api.github.com/users/siwarBM/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1897896961,
"node_id": "MDU6TGFiZWwxODk3ODk2OTYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Migration",
"name": "Migration",
"color": "e99695",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"See #400",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | # π Migration
## Information
<!-- Important information -->
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## Details
<!-- A clear and concise description of the migration issue.
If you have code snippets, please provide it here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
-->
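For anyone landing here with the same question: since Camembert reuses the RoBERTa architecture, the encoder lives under the `roberta` attribute, so freezing it is a short loop. A minimal sketch (checkpoint and label count are illustrative):
```python
from transformers import CamembertForSequenceClassification

model = CamembertForSequenceClassification.from_pretrained("camembert-base", num_labels=2)

# Freeze the pretrained encoder; only the classification head keeps training.
for param in model.roberta.parameters():
    param.requires_grad = False

# Sanity check: only classifier parameters should remain trainable.
print([name for name, p in model.named_parameters() if p.requires_grad])
```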
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch):
## Checklist
- [ ] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ ] I checked if a related official extension example runs on my machine.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10949/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10948/comments | https://api.github.com/repos/huggingface/transformers/issues/10948/events | https://github.com/huggingface/transformers/issues/10948 | 843,245,270 | MDU6SXNzdWU4NDMyNDUyNzA= | 10,948 | [MarianMTModel] 'list' object has no attribute 'size' | {
"login": "lacls",
"id": 42736388,
"node_id": "MDQ6VXNlcjQyNzM2Mzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/42736388?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lacls",
"html_url": "https://github.com/lacls",
"followers_url": "https://api.github.com/users/lacls/followers",
"following_url": "https://api.github.com/users/lacls/following{/other_user}",
"gists_url": "https://api.github.com/users/lacls/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lacls/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lacls/subscriptions",
"organizations_url": "https://api.github.com/users/lacls/orgs",
"repos_url": "https://api.github.com/users/lacls/repos",
"events_url": "https://api.github.com/users/lacls/events{/privacy}",
"received_events_url": "https://api.github.com/users/lacls/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I confirm the error:\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nmodel_path=\"Helsinki-NLP/opus-mt-en-de\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_path)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_path)\r\nbatch = tokenizer.prepare_seq2seq_batch(src_texts=[\"Alice has a cat.\"])\r\ngen = model.generate(**batch)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,622 | 1,622 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
```
!pip install transformers==4.1.1 sentencepiece==0.1.94
!pip install mosestokenizer==1.1.0
from transformers import MarianMTModel, MarianTokenizer
target_model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
target_tokenizer = MarianTokenizer.from_pretrained(target_model_name)
target_model = MarianMTModel.from_pretrained(target_model_name)
en_model_name = 'Helsinki-NLP/opus-mt-ROMANCE-en'
en_tokenizer = MarianTokenizer.from_pretrained(en_model_name)
en_model = MarianMTModel.from_pretrained(en_model_name)
def translate(texts, model, tokenizer, language="fr"):
# Prepare the text data into appropriate format for the model
template = lambda text: f"{text}" if language == "en" else f">>{language}<< {text}"
src_texts = [template(text) for text in texts]
# Tokenize the texts
encoded = tokenizer.prepare_seq2seq_batch(src_texts)
# Generate translation using model
translated = model.generate(**encoded)
# Convert the generated tokens indices back into text
translated_texts = tokenizer.batch_decode(translated, skip_special_tokens=True)
return translated_texts
def back_translate(texts, source_lang="en", target_lang="vi"):
# Translate from source to target language
fr_texts = translate(texts, target_model, target_tokenizer,
language=target_lang)
# Translate from target language back to source language
back_translated_texts = translate(fr_texts, en_model, en_tokenizer,
language=source_lang)
return back_translated_texts
en_texts = ['This is so cool', 'I hated the food', 'They were very helpful']
aug_texts = back_translate(en_texts, source_lang="en", target_lang="es")
print(aug_texts)
```
The problem arises when using:
* [x] my own modified scripts:
```
<ipython-input-1-83d3425f13db> in back_translate(texts, source_lang, target_lang)
36 # Translate from source to target language
37 fr_texts = translate(texts, target_model, target_tokenizer,
---> 38 language=target_lang)
39
40 # Translate from target language back to source language
<ipython-input-1-83d3425f13db> in translate(texts, model, tokenizer, language)
26
27 # Generate translation using model
---> 28 translated = model.generate(**encoded)
29
30 # Convert the generated tokens indices back into text
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.__class__():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, **model_kwargs)
914 if self.config.is_encoder_decoder:
915 # add encoder_outputs to model_kwargs
--> 916 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
917
918 # set input_ids as decoder_input_ids
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)
409 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_")
410 }
--> 411 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
412 return model_kwargs
413
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/marian/modeling_marian.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
712 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
713 elif input_ids is not None:
--> 714 input_shape = input_ids.size()
715 input_ids = input_ids.view(-1, input_shape[-1])
716 elif inputs_embeds is not None:
AttributeError: 'list' object has no attribute 'size'
```
Thanks for your support
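For readers hitting the same traceback: in this version `prepare_seq2seq_batch` appears to return plain Python lists by default, while `generate` expects tensors (hence `'list' object has no attribute 'size'`). Requesting PyTorch tensors explicitly should fix it; a sketch of the changed lines in `translate`:
```python
# Tokenize the texts, explicitly requesting PyTorch tensors.
encoded = tokenizer.prepare_seq2seq_batch(src_texts, return_tensors="pt")

# Generate translation using model
translated = model.generate(**encoded)
```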
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10948/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10947/comments | https://api.github.com/repos/huggingface/transformers/issues/10947/events | https://github.com/huggingface/transformers/issues/10947 | 843,182,597 | MDU6SXNzdWU4NDMxODI1OTc= | 10,947 | Save model error: list index out of range after pass input_processing call | {
"login": "roymondliao",
"id": 8049009,
"node_id": "MDQ6VXNlcjgwNDkwMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8049009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roymondliao",
"html_url": "https://github.com/roymondliao",
"followers_url": "https://api.github.com/users/roymondliao/followers",
"following_url": "https://api.github.com/users/roymondliao/following{/other_user}",
"gists_url": "https://api.github.com/users/roymondliao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roymondliao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roymondliao/subscriptions",
"organizations_url": "https://api.github.com/users/roymondliao/orgs",
"repos_url": "https://api.github.com/users/roymondliao/repos",
"events_url": "https://api.github.com/users/roymondliao/events{/privacy}",
"received_events_url": "https://api.github.com/users/roymondliao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"I am having the same problem with BERT. My solution so far was to downgrade to `transformers==4.0.1` which seems to be the last version which does not use `input_processing` in `TFBertMainLayer`.\r\n\r\nIn my case, the values of the relevant variables are\r\n```\r\ninput_ids = [<tf.Tensor 'input_ids:0' shape=(None, 384) dtype=int32>, <tf.Tensor 'input_ids_1:0' shape=(None, 384) dtype=int32>]\r\nparameter_names = ['args']\r\n```\r\nThe error arises because of the second item in `input_ids`. Like in the previous example I am using BERT as a part of a larger Keras model. Both the larger model and BERT have one input layer with name `input_ids`. I suspect that this is the reason why the list `input_ids` contains two elements. If I wrap `output[parameter_names[i]] = input` in a try-catch, it works as intended."
] | 1,617 | 1,623 | 1,620 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0
- Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-10.8
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.0 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
Maybe @LysandreJik or @jplu
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using albert:
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## Code for TensorFlow
* albert_zh: https://github.com/brightmart/albert_zh
```python
from pathlib import Path

import tensorflow as tf
from tensorflow.keras import backend as K
from transformers import AlbertConfig, TFAlbertForSequenceClassification

strategy = tf.distribute.MirroredStrategy()
max_seq_len = 128
cache_folder = '/tmp'
pretrain_model = 'voidful/albert_chinese_tiny'
albert_zh = Path('albert_zh')  # local clone of https://github.com/brightmart/albert_zh (path is illustrative)
albert_config = AlbertConfig.from_json_file(albert_zh / 'albert_config' / 'albert_config_tiny.json')

def sms_classifier_model(pretrain_model, config, max_seq_len, cache_folder):
    input_ids = tf.keras.layers.Input(shape=(max_seq_len,), name='input_ids', dtype=tf.int32)
    input_token_type_ids = tf.keras.layers.Input(shape=(max_seq_len,), name='token_type_ids', dtype=tf.int32)
    input_attention_mask = tf.keras.layers.Input(shape=(max_seq_len,), name='attention_mask', dtype=tf.int32)
    albert_model = TFAlbertForSequenceClassification.from_pretrained(
        pretrain_model,
        config=config,
        from_pt=True,
        cache_dir=cache_folder)
    x = albert_model([input_ids, input_token_type_ids, input_attention_mask])
    output = tf.keras.activations.softmax(x[0])
    model = tf.keras.models.Model(
        inputs=[input_ids, input_token_type_ids, input_attention_mask],
        outputs={'target': output}, name='sms_classifier')
    return model

K.clear_session()
albert_config.hidden_act = 'gelu_new'
albert_config.num_labels = 4
with strategy.scope():
    albert_model = sms_classifier_model(pretrain_model, albert_config, max_seq_len, cache_folder)

with strategy.scope():
    albert_model.compile(optimizer=tf.keras.optimizers.Adam(),
                         loss=tf.keras.losses.CategoricalCrossentropy(),
                         metrics=tf.keras.metrics.CategoricalAccuracy())

# training_dataset / validation_dataset are tf.data.Dataset objects built elsewhere.
albert_model.fit(
    x=training_dataset,
    validation_data=validation_dataset,
    steps_per_epoch=200,
    validation_steps=100,
    epochs=2,
    verbose=1,
    use_multiprocessing=True)

albert_model.save('/tmp/albert_model')  # fails with the IndexError shown below
```
## Error Message
```python
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-33-9fbc26d706ab> in <module>
1 albert_model.save(
----> 2 str(saved_tf_model_folder / f'{run_id}')
3 )
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in save(self, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
2000 # pylint: enable=line-too-long
2001 save.save_model(self, filepath, overwrite, include_optimizer, save_format,
-> 2002 signatures, options, save_traces)
2003
2004 def save_weights(self,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py in save_model(model, filepath, overwrite, include_optimizer, save_format, signatures, options, save_traces)
155 else:
156 saved_model_save.save(model, filepath, overwrite, include_optimizer,
--> 157 signatures, options, save_traces)
158
159
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save.py in save(model, filepath, overwrite, include_optimizer, signatures, options, save_traces)
87 with distribution_strategy_context._get_default_replica_context(): # pylint: disable=protected-access
88 with utils.keras_option_scope(save_traces):
---> 89 save_lib.save(model, filepath, signatures, options)
90
91 if not include_optimizer:
/opt/conda/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
1031
1032 _, exported_graph, object_saver, asset_info = _build_meta_graph(
-> 1033 obj, signatures, options, meta_graph_def)
1034 saved_model.saved_model_schema_version = constants.SAVED_MODEL_SCHEMA_VERSION
1035
/opt/conda/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, signatures, options, meta_graph_def)
1196
1197 with save_context.save_context(options):
-> 1198 return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py in _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
1131 if signatures is None:
1132 signatures = signature_serialization.find_function_to_export(
-> 1133 checkpoint_graph_view)
1134
1135 signatures, wrapped_functions = (
/opt/conda/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_serialization.py in find_function_to_export(saveable_view)
73 # If the user did not specify signatures, check the root object for a function
74 # that can be made into a signature.
---> 75 functions = saveable_view.list_functions(saveable_view.root)
76 signature = functions.get(DEFAULT_SIGNATURE_ATTR, None)
77 if signature is not None:
/opt/conda/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py in list_functions(self, obj, extra_functions)
149 if obj_functions is None:
150 obj_functions = obj._list_functions_for_serialization( # pylint: disable=protected-access
--> 151 self._serialization_cache)
152 self._functions[obj] = obj_functions
153 if extra_functions:
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _list_functions_for_serialization(self, serialization_cache)
2611 self.predict_function = None
2612 functions = super(
-> 2613 Model, self)._list_functions_for_serialization(serialization_cache)
2614 self.train_function = train_function
2615 self.test_function = test_function
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in _list_functions_for_serialization(self, serialization_cache)
3085 def _list_functions_for_serialization(self, serialization_cache):
3086 return (self._trackable_saved_model_saver
-> 3087 .list_functions_for_serialization(serialization_cache))
3088
3089 def __getstate__(self):
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py in list_functions_for_serialization(self, serialization_cache)
92 return {}
93
---> 94 fns = self.functions_to_serialize(serialization_cache)
95
96 # The parent AutoTrackable class saves all user-defined tf.functions, and
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py in functions_to_serialize(self, serialization_cache)
77 def functions_to_serialize(self, serialization_cache):
78 return (self._get_serialized_attributes(
---> 79 serialization_cache).functions_to_serialize)
80
81 def _get_serialized_attributes(self, serialization_cache):
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes(self, serialization_cache)
93
94 object_dict, function_dict = self._get_serialized_attributes_internal(
---> 95 serialization_cache)
96
97 serialized_attr.set_and_validate_objects(object_dict)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
55 objects, functions = (
56 super(ModelSavedModelSaver, self)._get_serialized_attributes_internal(
---> 57 serialization_cache))
58 functions['_default_save_signature'] = default_signature
59 return objects, functions
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py in _get_serialized_attributes_internal(self, serialization_cache)
102 """Returns dictionary of serialized attributes."""
103 objects = save_impl.wrap_layer_objects(self.obj, serialization_cache)
--> 104 functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
105 # Attribute validator requires that the default save signature is added to
106 # function dict, even if the value is None.
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in wrap_layer_functions(layer, serialization_cache)
163 call_fn_with_losses = call_collection.add_function(
164 _wrap_call_and_conditional_losses(layer),
--> 165 '{}_layer_call_and_return_conditional_losses'.format(layer.name))
166 call_fn = call_collection.add_function(
167 _extract_outputs_from_fn(layer, call_fn_with_losses),
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in add_function(self, call_fn, name)
503 # Manually add traces for layers that have keyword arguments and have
504 # a fully defined input signature.
--> 505 self.add_trace(*self._input_signature)
506 return fn
507
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in add_trace(self, *args, **kwargs)
418 fn.get_concrete_function(*args, **kwargs)
419
--> 420 trace_with_training(True)
421 trace_with_training(False)
422 else:
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in trace_with_training(value, fn)
416 utils.set_training_arg(value, self._training_arg_index, args, kwargs)
417 with K.deprecated_internal_learning_phase_scope(value):
--> 418 fn.get_concrete_function(*args, **kwargs)
419
420 trace_with_training(True)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in get_concrete_function(self, *args, **kwargs)
548 if not self.call_collection.tracing:
549 self.call_collection.add_trace(*args, **kwargs)
--> 550 return super(LayerCall, self).get_concrete_function(*args, **kwargs)
551
552
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
1297 ValueError: if this object has not yet been called on concrete values.
1298 """
-> 1299 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1300 concrete._garbage_collector.release() # pylint: disable=protected-access
1301 return concrete
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
1203 if self._stateful_fn is None:
1204 initializers = []
-> 1205 self._initialize(args, kwargs, add_initializers_to=initializers)
1206 self._initialize_uninitialized_variables(initializers)
1207
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
724 self._concrete_stateful_fn = (
725 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 726 *args, **kwds))
727
728 def invalid_creator_scope(*unused_args, **unused_kwds):
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2967 args, kwargs = None, None
2968 with self._lock:
-> 2969 graph_function, _ = self._maybe_define_function(args, kwargs)
2970 return graph_function
2971
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3359
3360 self._function_cache.missed.add(call_context_key)
-> 3361 graph_function = self._create_graph_function(args, kwargs)
3362 self._function_cache.primary[cache_key] = graph_function
3363
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3204 arg_names=arg_names,
3205 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3206 capture_by_value=self._capture_by_value),
3207 self._function_attributes,
3208 function_spec=self.function_spec,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
988 _, original_func = tf_decorator.unwrap(python_func)
989
--> 990 func_outputs = python_func(*func_args, **func_kwargs)
991
992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
632 xla_context.Exit()
633 else:
--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
635 return out
636
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
525 with autocast_variable.enable_auto_cast_variables(
526 layer._compute_dtype_object): # pylint: disable=protected-access
--> 527 ret = method(*args, **kwargs)
528 _restore_layer_losses(original_losses)
529 return ret
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
169 return control_flow_util.smart_cond(
170 training, lambda: replace_training_and_call(True),
--> 171 lambda: replace_training_and_call(False))
172
173 # Create arg spec for decorated function. If 'training' is not defined in the
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
113 pred, true_fn=true_fn, false_fn=false_fn, name=name)
114 return smart_module.smart_cond(
--> 115 pred, true_fn=true_fn, false_fn=false_fn, name=name)
116
117
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
52 if pred_value is not None:
53 if pred_value:
---> 54 return true_fn()
55 else:
56 return false_fn()
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in <lambda>()
168
169 return control_flow_util.smart_cond(
--> 170 training, lambda: replace_training_and_call(True),
171 lambda: replace_training_and_call(False))
172
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in replace_training_and_call(training)
165 def replace_training_and_call(training):
166 set_training_arg(training, training_arg_index, args, kwargs)
--> 167 return wrapped_call(*args, **kwargs)
168
169 return control_flow_util.smart_cond(
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(inputs, *args, **kwargs)
568 def call_and_return_conditional_losses(inputs, *args, **kwargs):
569 """Returns layer (call_output, conditional losses) tuple."""
--> 570 call_output = layer_call(inputs, *args, **kwargs)
571 if version_utils.is_v1_layer_or_model(layer):
572 conditional_losses = layer.get_losses_for(inputs)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in call(self, inputs, training, mask)
423 """
424 return self._run_internal_graph(
--> 425 inputs, training=training, mask=mask)
426
427 def compute_output_shape(self, input_shape):
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _run_internal_graph(self, inputs, training, mask)
558
559 args, kwargs = node.map_arguments(tensor_dict)
--> 560 outputs = node.layer(*args, **kwargs)
561
562 # Update tensor_dict.
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in return_outputs_and_add_losses(*args, **kwargs)
71 inputs = args[inputs_arg_index]
72 args = args[inputs_arg_index + 1:]
---> 73 outputs, losses = fn(inputs, *args, **kwargs)
74 layer.add_loss(losses, inputs=inputs)
75
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
169 return control_flow_util.smart_cond(
170 training, lambda: replace_training_and_call(True),
--> 171 lambda: replace_training_and_call(False))
172
173 # Create arg spec for decorated function. If 'training' is not defined in the
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
113 pred, true_fn=true_fn, false_fn=false_fn, name=name)
114 return smart_module.smart_cond(
--> 115 pred, true_fn=true_fn, false_fn=false_fn, name=name)
116
117
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
52 if pred_value is not None:
53 if pred_value:
---> 54 return true_fn()
55 else:
56 return false_fn()
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in <lambda>()
168
169 return control_flow_util.smart_cond(
--> 170 training, lambda: replace_training_and_call(True),
171 lambda: replace_training_and_call(False))
172
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in replace_training_and_call(training)
165 def replace_training_and_call(training):
166 set_training_arg(training, training_arg_index, args, kwargs)
--> 167 return wrapped_call(*args, **kwargs)
168
169 return control_flow_util.smart_cond(
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in __call__(self, *args, **kwargs)
542 def __call__(self, *args, **kwargs):
543 if not self.call_collection.tracing:
--> 544 self.call_collection.add_trace(*args, **kwargs)
545 return super(LayerCall, self).__call__(*args, **kwargs)
546
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in add_trace(self, *args, **kwargs)
418 fn.get_concrete_function(*args, **kwargs)
419
--> 420 trace_with_training(True)
421 trace_with_training(False)
422 else:
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in trace_with_training(value, fn)
416 utils.set_training_arg(value, self._training_arg_index, args, kwargs)
417 with K.deprecated_internal_learning_phase_scope(value):
--> 418 fn.get_concrete_function(*args, **kwargs)
419
420 trace_with_training(True)
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in get_concrete_function(self, *args, **kwargs)
548 if not self.call_collection.tracing:
549 self.call_collection.add_trace(*args, **kwargs)
--> 550 return super(LayerCall, self).get_concrete_function(*args, **kwargs)
551
552
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in get_concrete_function(self, *args, **kwargs)
1297 ValueError: if this object has not yet been called on concrete values.
1298 """
-> 1299 concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
1300 concrete._garbage_collector.release() # pylint: disable=protected-access
1301 return concrete
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
1215 # run the first trace but we should fail if variables are created.
1216 concrete = self._stateful_fn._get_concrete_function_garbage_collected( # pylint: disable=protected-access
-> 1217 *args, **kwargs)
1218 if self._created_variables:
1219 raise ValueError("Creating variables on a non-first call to a function"
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_garbage_collected(self, *args, **kwargs)
3017 args, kwargs = None, None
3018 with self._lock:
-> 3019 graph_function, _ = self._maybe_define_function(args, kwargs)
3020 seen_names = set()
3021 captured = object_identity.ObjectIdentitySet(
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3359
3360 self._function_cache.missed.add(call_context_key)
-> 3361 graph_function = self._create_graph_function(args, kwargs)
3362 self._function_cache.primary[cache_key] = graph_function
3363
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3204 arg_names=arg_names,
3205 override_flat_arg_shapes=override_flat_arg_shapes,
-> 3206 capture_by_value=self._capture_by_value),
3207 self._function_attributes,
3208 function_spec=self.function_spec,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
988 _, original_func = tf_decorator.unwrap(python_func)
989
--> 990 func_outputs = python_func(*func_args, **func_kwargs)
991
992 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
632 xla_context.Exit()
633 else:
--> 634 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
635 return out
636
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in wrapper(*args, **kwargs)
525 with autocast_variable.enable_auto_cast_variables(
526 layer._compute_dtype_object): # pylint: disable=protected-access
--> 527 ret = method(*args, **kwargs)
528 _restore_layer_losses(original_losses)
529 return ret
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in wrap_with_training_arg(*args, **kwargs)
169 return control_flow_util.smart_cond(
170 training, lambda: replace_training_and_call(True),
--> 171 lambda: replace_training_and_call(False))
172
173 # Create arg spec for decorated function. If 'training' is not defined in the
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/control_flow_util.py in smart_cond(pred, true_fn, false_fn, name)
113 pred, true_fn=true_fn, false_fn=false_fn, name=name)
114 return smart_module.smart_cond(
--> 115 pred, true_fn=true_fn, false_fn=false_fn, name=name)
116
117
/opt/conda/lib/python3.7/site-packages/tensorflow/python/framework/smart_cond.py in smart_cond(pred, true_fn, false_fn, name)
52 if pred_value is not None:
53 if pred_value:
---> 54 return true_fn()
55 else:
56 return false_fn()
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in <lambda>()
168
169 return control_flow_util.smart_cond(
--> 170 training, lambda: replace_training_and_call(True),
171 lambda: replace_training_and_call(False))
172
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/utils.py in replace_training_and_call(training)
165 def replace_training_and_call(training):
166 set_training_arg(training, training_arg_index, args, kwargs)
--> 167 return wrapped_call(*args, **kwargs)
168
169 return control_flow_util.smart_cond(
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py in call_and_return_conditional_losses(inputs, *args, **kwargs)
568 def call_and_return_conditional_losses(inputs, *args, **kwargs):
569 """Returns layer (call_output, conditional losses) tuple."""
--> 570 call_output = layer_call(inputs, *args, **kwargs)
571 if version_utils.is_v1_layer_or_model(layer):
572 conditional_losses = layer.get_losses_for(inputs)
/opt/conda/lib/python3.7/site-packages/transformers/models/albert/modeling_tf_albert.py in call(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, labels, training, **kwargs)
1144 labels=labels,
1145 training=training,
-> 1146 kwargs_call=kwargs,
1147 )
1148 outputs = self.albert(
/opt/conda/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs)
372 output[tensor_name] = input
373 else:
--> 374 output[parameter_names[i]] = input
375 elif isinstance(input, allowed_types) or input is None:
376 output[parameter_names[i]] = input
IndexError: list index out of range
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
In versions <= 4.0.0, calling the `save` method worked without this error; it started failing once the `input_processing` function was introduced.
Does anyone have advice on how to fix this problem?
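Pending a proper fix, one workaround reported for similar `input_processing` failures is to pass the inputs to the nested transformer as a single dict keyed by name instead of a positional list, so they cannot be mis-split; a sketch, not verified against every version:
```python
# Inside sms_classifier_model(), replace the positional-list call:
x = albert_model({
    "input_ids": input_ids,
    "token_type_ids": input_token_type_ids,
    "attention_mask": input_attention_mask,
})
```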
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10947/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10946/comments | https://api.github.com/repos/huggingface/transformers/issues/10946/events | https://github.com/huggingface/transformers/pull/10946 | 843,141,724 | MDExOlB1bGxSZXF1ZXN0NjAyNTY3NTky | 10,946 | [Feature] Add a new tiny feature for self-attention analysis | {
"login": "YNNEKUW",
"id": 23073602,
"node_id": "MDQ6VXNlcjIzMDczNjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/23073602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YNNEKUW",
"html_url": "https://github.com/YNNEKUW",
"followers_url": "https://api.github.com/users/YNNEKUW/followers",
"following_url": "https://api.github.com/users/YNNEKUW/following{/other_user}",
"gists_url": "https://api.github.com/users/YNNEKUW/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YNNEKUW/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YNNEKUW/subscriptions",
"organizations_url": "https://api.github.com/users/YNNEKUW/orgs",
"repos_url": "https://api.github.com/users/YNNEKUW/repos",
"events_url": "https://api.github.com/users/YNNEKUW/events{/privacy}",
"received_events_url": "https://api.github.com/users/YNNEKUW/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | # What does this PR do?
To implement the integral in [this paper](https://arxiv.org/abs/2004.11207), users need access to two items: the output and the attention weights. However, the "output" here is somewhat special: the attention weights of each layer should be multiplied by a scalar "alpha". This PR adds that "alpha" and sets its default to 1.0 (which is identical to the original BERT).
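Conceptually, the proposed change reduces to scaling the post-softmax attention weights by a configurable scalar; a simplified sketch of the idea (not the exact diff):
```python
import torch
import torch.nn.functional as F

def scaled_attention_probs(attention_scores: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # alpha == 1.0 reproduces the original BERT attention exactly.
    return F.softmax(attention_scores, dim=-1) * alpha
```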
@LysandreJik
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10946",
"html_url": "https://github.com/huggingface/transformers/pull/10946",
"diff_url": "https://github.com/huggingface/transformers/pull/10946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10946.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/10945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10945/comments | https://api.github.com/repos/huggingface/transformers/issues/10945/events | https://github.com/huggingface/transformers/issues/10945 | 843,134,577 | MDU6SXNzdWU4NDMxMzQ1Nzc= | 10,945 | Are there memory leaks when using DeepSpeed on training T5? | {
"login": "avionkmh",
"id": 20922702,
"node_id": "MDQ6VXNlcjIwOTIyNzAy",
"avatar_url": "https://avatars.githubusercontent.com/u/20922702?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avionkmh",
"html_url": "https://github.com/avionkmh",
"followers_url": "https://api.github.com/users/avionkmh/followers",
"following_url": "https://api.github.com/users/avionkmh/following{/other_user}",
"gists_url": "https://api.github.com/users/avionkmh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avionkmh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avionkmh/subscriptions",
"organizations_url": "https://api.github.com/users/avionkmh/orgs",
"repos_url": "https://api.github.com/users/avionkmh/repos",
"events_url": "https://api.github.com/users/avionkmh/events{/privacy}",
"received_events_url": "https://api.github.com/users/avionkmh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2659267025,
"node_id": "MDU6TGFiZWwyNjU5MjY3MDI1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed",
"name": "DeepSpeed",
"color": "4D34F7",
"default": false,
"description": ""
}
] | closed | false | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, we'll need a bit more information to understand what's going on here. What command did you use? Did you use one of our scripts? What data are you using? What version of Transformers? The more information, the more we'll be able to understand and help you.\r\n\r\nPinging @stas00 ",
"Thank you for reporting this, @avionkmh \r\n\r\nI've been only doing short functionality tests so far, so can't really tell.\r\n\r\nThe only general RAM leak I found so far is when `deepspeed.initialize` is called more than once and the fix is here:\r\nhttps://github.com/microsoft/DeepSpeed/issues/879\r\nI suppose this is not your case.\r\n\r\nAs @LysandreJik recommended we need a lot more details to reproduce the problem and then we or the Deepspeed team if it's in their land can fix it.\r\n\r\n",
"@LysandreJik @stas00 \r\n\r\nThank you for your interests. \r\n\r\nHere are the more detailed situations.\r\n\r\n- Command\r\n```\r\npython -u -m deepspeed.launcher.launch \\\r\n --world_info=eyJsb2NhbGhvc3QiOiBbMywgNF19 --master_addr=127.0.0.1 --master_port=29503 \\\r\n examples/seq2seq/finetune_trainer.py \\\r\n --overwrite_output_dir \\\r\n --output_dir ./output \\\r\n --data_dir ./input \\\r\n --model_name_or_path ./t5-small-empty \\\r\n --per_device_train_batch_size 16 --gradient_accumulation_steps 2 \\\r\n --logging_steps 500 \\\r\n --save_steps 10000 \\\r\n --warmup_steps 10 \\\r\n --num_train_epochs 8 \\\r\n --deepspeed run_ds_config-cpu_offload=X.json \\\r\n --do_train\r\n\r\n```\r\n\r\n\r\n- the script we used\r\nexamples/seq2seq/finetune_trainer.py \r\n<-- We modified the script(finetune_trainer.py) for T5 pretraining\r\n\r\n\r\n- Version of Transformers\r\nv4.3.2 \r\n\r\n\r\n- DeepSpeed version: v0.3.10\r\n\r\n\r\n- \"run_ds_config-cpu_offload=X.json\" file\r\n```\r\n{\r\n \"fp16\": {\r\n \"enabled\": true,\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 2e8,\r\n \"overlap_comm\": true,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 2e8,\r\n \"contiguous_gradients\": true,\r\n \"cpu_offload\": false\r\n },\r\n\r\n \"zero_allow_untested_optimizer\": true,\r\n\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": 5e-5,\r\n \"betas\": [\r\n 0.9,\r\n 0.999\r\n ],\r\n \"eps\": 1e-8,\r\n \"weight_decay\": 0.0\r\n }\r\n },\r\n\r\n \"scheduler\": {\r\n \"type\": \"WarmupLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": 0,\r\n \"warmup_max_lr\": 5e-5,\r\n \"warmup_num_steps\": 10\r\n }\r\n },\r\n\r\n \"steps_per_print\": 2000,\r\n \"wall_clock_breakdown\": false\r\n}\r\n```\r\n\r\n\r\n- when using \"torch.distributed.launch\"\r\nWe've found there was no memory leak until the end of pretraining.\r\nThe followings are the script for using \"torch.distributed.launch\"\r\n```\r\nexport NODE_RANK=0\r\nexport N_NODES=1\r\nexport N_GPU_NODE=2\r\nexport WORLD_SIZE=2\r\nexport MASTER_ADDR=\"129.254.164.234\"\r\nexport MASTER_PORT=1233\r\n\r\npython -m torch.distributed.launch \\\r\n --nproc_per_node=$N_GPU_NODE \\\r\n --nnodes=$N_NODES \\\r\n --node_rank $NODE_RANK \\\r\n --master_addr $MASTER_ADDR \\\r\n --master_port $MASTER_PORT \\\r\n examples/seq2seq/finetune_trainer.py \\\r\n --model_name_or_path t5-small-empty \\\r\n --output_dir ./output \\\r\n --data_dir ./input \\\r\n --do_train \\\r\n --save_steps 10000 \\\r\n --per_device_train_batch_size 16 \\\r\n --gradient_accumulation_steps 2 \\\r\n --num_train_epochs 8 \\\r\n --overwrite_output_dir\r\n```\r\n\r\n",
"Unfortunately, there is nothing we can do w/o you providing us a way to reproduce the problem in a simple to setup and quick to run script.\r\n\r\n> We modified this scripts for T5 pretraining\r\n\r\nHow could we possibly know what that means?\r\n\r\n> We've found there was no memory leak until the end of pretraining.\r\n\r\nI'd love to help, but I have no idea what to do with this information. \r\n\r\nPlease try to put yourself in the shoes of someone who isn't sitting in front of your computer seeing your software and what you're doing and what are the problems that you're seeing.\r\n\r\nIf we continue please first sync your code base to the latest training scripts and `transformers` since many issues have been fixed in deepspeed integration since the version you're using.\r\n\r\np.s. also when pasting code/config files please use code formatting as what you shared above is very difficult to read. Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,617 | 1,620 | 1,620 | NONE | null | We've been pretraining a T5-small model from scratch using DeepSpeed v0.3.10.
We've found that CPU memory was increasing over time. (We've trained for about 150 hours.)
Are there memory leaks when using DeepSpeed to train T5?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10945/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10944/comments | https://api.github.com/repos/huggingface/transformers/issues/10944/events | https://github.com/huggingface/transformers/issues/10944 | 843,129,702 | MDU6SXNzdWU4NDMxMjk3MDI= | 10,944 | Please implement DUMA: Reading Comprehension with Transposition Thinking | {
"login": "max-yue",
"id": 13486398,
"node_id": "MDQ6VXNlcjEzNDg2Mzk4",
"avatar_url": "https://avatars.githubusercontent.com/u/13486398?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/max-yue",
"html_url": "https://github.com/max-yue",
"followers_url": "https://api.github.com/users/max-yue/followers",
"following_url": "https://api.github.com/users/max-yue/following{/other_user}",
"gists_url": "https://api.github.com/users/max-yue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/max-yue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/max-yue/subscriptions",
"organizations_url": "https://api.github.com/users/max-yue/orgs",
"repos_url": "https://api.github.com/users/max-yue/repos",
"events_url": "https://api.github.com/users/max-yue/events{/privacy}",
"received_events_url": "https://api.github.com/users/max-yue/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Are the source code and the model weights avaiable?\r\n\r\nhttps://arxiv.org/abs/2001.09415",
"> Are the source code and the model weights avaiable?\r\n> \r\n> https://arxiv.org/abs/2001.09415\r\n\r\nI do not have the source code and model weights."
] | 1,617 | 1,617 | null | CONTRIBUTOR | null | # π Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
This one is at the top of the RACE leaderboard; will you guys consider implementing it?
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10944/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10944/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10943/comments | https://api.github.com/repos/huggingface/transformers/issues/10943/events | https://github.com/huggingface/transformers/issues/10943 | 843,119,654 | MDU6SXNzdWU4NDMxMTk2NTQ= | 10,943 | Converting marian tatoeba models | {
"login": "Dmitry-Sn",
"id": 43182156,
"node_id": "MDQ6VXNlcjQzMTgyMTU2",
"avatar_url": "https://avatars.githubusercontent.com/u/43182156?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dmitry-Sn",
"html_url": "https://github.com/Dmitry-Sn",
"followers_url": "https://api.github.com/users/Dmitry-Sn/followers",
"following_url": "https://api.github.com/users/Dmitry-Sn/following{/other_user}",
"gists_url": "https://api.github.com/users/Dmitry-Sn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dmitry-Sn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dmitry-Sn/subscriptions",
"organizations_url": "https://api.github.com/users/Dmitry-Sn/orgs",
"repos_url": "https://api.github.com/users/Dmitry-Sn/repos",
"events_url": "https://api.github.com/users/Dmitry-Sn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dmitry-Sn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"@patil-suraj unstale?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"Unstale",
"@patil-suraj - do you think you find time to take a look here? Otherwise I can probably free some time for it",
"I will take a look at it this week.",
"Gently pinging @patil-suraj here again - I think the conversion works now no? Could you maybe check? :-)",
"The conversion should work now, it has been fixed in #13757"
] | 1,617 | 1,634 | 1,634 | NONE | null | ## Environment info
- `transformers` version: 4.4.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.0+cu101 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
### Who can help
- marian: @patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet...): marian
The problem arises when using:
* [x] the official example scripts: the Tatoeba model conversion script
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [x] an official GLUE/SQUaD task: machine translation
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
All steps are the same as in the [official script](https://github.com/huggingface/transformers/blob/master/scripts/tatoeba/README.md) for converting Marian Tatoeba models to PyTorch.
Error log:
```
Traceback (most recent call last):
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 1267, in <module>
resolver = TatoebaConverter(save_dir=args.save_dir)
File "src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py", line 80, in __init__
released.columns = released_cols
File "/usr/local/lib/python3.7/dist-packages/pandas/core/generic.py", line 5154, in __setattr__
return object.__setattr__(self, name, value)
File "pandas/_libs/properties.pyx", line 66, in pandas._libs.properties.AxisProperty.__set__
File "/usr/local/lib/python3.7/dist-packages/pandas/core/generic.py", line 564, in _set_axis
self._mgr.set_axis(axis, labels)
File "/usr/local/lib/python3.7/dist-packages/pandas/core/internals/managers.py", line 227, in set_axis
f"Length mismatch: Expected axis has {old_len} elements, new "
ValueError: Length mismatch: Expected axis has 7 elements, new values have 9 elements
```
## Expected behavior
IMO, the main problem is the change in the fields of the file [Tatoeba-Challenge/models/released-models.txt](https://github.com/Helsinki-NLP/Tatoeba-Challenge/blob/master/models/released-models.txt).
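A possible defensive fix on the converter side, sketched below; the column names are guesses, since the point is only to stop assigning a fixed-width header to a file whose width drifts upstream:

```python
# Hypothetical guard for convert_marian_tatoeba_to_pytorch.py: only apply the
# expected header when the file width matches, otherwise pad with generic
# names instead of raising the length-mismatch ValueError shown above.
import pandas as pd

expected_cols = ["url", "pre-processing", "date", "src", "tgt", "bleu", "chrf"]

released = pd.read_csv("released-models.txt", sep="\t", header=None)
if released.shape[1] > len(expected_cols):
    extra = [f"extra_{i}" for i in range(released.shape[1] - len(expected_cols))]
    released.columns = expected_cols + extra
else:
    released.columns = expected_cols[: released.shape[1]]
```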
I'm expecting a clean conversion of the model for the chosen language pair. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10943/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10942/comments | https://api.github.com/repos/huggingface/transformers/issues/10942/events | https://github.com/huggingface/transformers/issues/10942 | 843,119,297 | MDU6SXNzdWU4NDMxMTkyOTc= | 10,942 | Wav2Vec2CTCTokenizer does not take the vocabulary into account when identifying tokens in a sentence | {
"login": "guillaume-wisniewski",
"id": 73657961,
"node_id": "MDQ6VXNlcjczNjU3OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/73657961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-wisniewski",
"html_url": "https://github.com/guillaume-wisniewski",
"followers_url": "https://api.github.com/users/guillaume-wisniewski/followers",
"following_url": "https://api.github.com/users/guillaume-wisniewski/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-wisniewski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-wisniewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-wisniewski/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-wisniewski/orgs",
"repos_url": "https://api.github.com/users/guillaume-wisniewski/repos",
"events_url": "https://api.github.com/users/guillaume-wisniewski/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-wisniewski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @guillaume-wisniewski ,\r\n\r\nThanks a lot for the very clear error description.The PR attached should fix the problem :-) Let me know if you still encounter any problems."
] | 1,617 | 1,619 | 1,619 | NONE | null | ## Environment info
- `transformers` version: 4.4.0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @LysandreJik
## Information
Model I am using (Bert, XLNet ...): wav2vec2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
We are trying to train an automatic phonemic transcription system for a low-resource language using the instructions [here](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2)
I created a `Wav2Vec2CTCTokenizer` tokenizer as follows:
```python
from transformers import Wav2Vec2CTCTokenizer
# example of a phonemic transcription
sent = "ΚΚ° Γ¦ Γ¦Μ Λ§ kΚ°"
# phonemes are separated by spaces in the transcription
vocab = {phoneme for phoneme in sent.split()}
vocab_dict = {k: v for v, k in enumerate(vocab)}
# <space> will be our phoneme separator
vocab_dict[" "] = len(vocab_dict)
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
import json
with open('vocab.json', 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
tokenizer = Wav2Vec2CTCTokenizer("./vocab.json",
unk_token="[UNK]",
pad_token="[PAD]",
word_delimiter_token=" ")
```
The result of the sentence tokenization is:
```
>>> tokenizer(sent)
{'input_ids': [6, 6, 5, 2, 5, 2, 6, 5, 4, 5, 6, 6], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
Where 6 is the id of the <unk> token. The vocabulary is:
`{'æ̃': 0, 'kʰ': 1, 'æ': 2, 'ʈʰ': 3, '˧': 4, ' ': 5, '[UNK]': 6, '[PAD]': 7}`
It appears that phonemes made of several characters (e.g. ʈʰ) are not recognized as a whole but rather taken separately (ʈ and then ʰ), each token being mapped to a separate id (here `<unk>`, as the separated characters are not in the vocabulary).
The tokenization output results from the `Wav2Vec2CTCTokenizer._tokenize` function being called before looking in the dictionary representing the vocabulary to map tokens into IDs. This function converts the input string into a list of characters without taking into account the tokens defined in the vocabulary.
I do not know if this is the intended behaviour or if we are not using the tokenizer correctly (in which case the documentation might be improved)
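In the meantime, a possible workaround (a sketch, not an official API: it bypasses `_tokenize` and maps space-separated phonemes to ids directly, reusing the `tokenizer` and `sent` defined above):

```python
# Split on the phoneme delimiter ourselves, so multi-character phonemes such
# as 'kʰ' resolve to their own vocabulary ids instead of per-character [UNK]s.
def encode_phonemes(tokenizer, sent, delimiter=" "):
    ids = []
    for i, phoneme in enumerate(sent.split(delimiter)):
        if i > 0:
            # keep the delimiter id so spacing can be restored after decoding
            ids.append(tokenizer.convert_tokens_to_ids(delimiter))
        ids.append(tokenizer.convert_tokens_to_ids(phoneme))
    return ids

print(encode_phonemes(tokenizer, sent))  # e.g. [3, 5, 2, 5, 0, 5, 4, 5, 1]
```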
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10942/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10941/comments | https://api.github.com/repos/huggingface/transformers/issues/10941/events | https://github.com/huggingface/transformers/pull/10941 | 842,837,261 | MDExOlB1bGxSZXF1ZXN0NjAyMzE0MzA5 | 10,941 | Added documentation for data collator. | {
"login": "fghuman",
"id": 15870351,
"node_id": "MDQ6VXNlcjE1ODcwMzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/15870351?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fghuman",
"html_url": "https://github.com/fghuman",
"followers_url": "https://api.github.com/users/fghuman/followers",
"following_url": "https://api.github.com/users/fghuman/following{/other_user}",
"gists_url": "https://api.github.com/users/fghuman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fghuman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fghuman/subscriptions",
"organizations_url": "https://api.github.com/users/fghuman/orgs",
"repos_url": "https://api.github.com/users/fghuman/repos",
"events_url": "https://api.github.com/users/fghuman/events{/privacy}",
"received_events_url": "https://api.github.com/users/fghuman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks again for your contribution!"
] | 1,616 | 1,618 | 1,618 | CONTRIBUTOR | null | # What does this PR do?
This PR aims to improve coverage of the documentation for the Data Collators.
Fixes #9035
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10941/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10941",
"html_url": "https://github.com/huggingface/transformers/pull/10941",
"diff_url": "https://github.com/huggingface/transformers/pull/10941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10941.patch",
"merged_at": 1618243187000
} |
https://api.github.com/repos/huggingface/transformers/issues/10940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10940/comments | https://api.github.com/repos/huggingface/transformers/issues/10940/events | https://github.com/huggingface/transformers/issues/10940 | 842,826,172 | MDU6SXNzdWU4NDI4MjYxNzI= | 10,940 | Addition of SequenceClassification config specific documentation to XModelForSequenceClassification. | {
"login": "nasheedyasin",
"id": 32324972,
"node_id": "MDQ6VXNlcjMyMzI0OTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/32324972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nasheedyasin",
"html_url": "https://github.com/nasheedyasin",
"followers_url": "https://api.github.com/users/nasheedyasin/followers",
"following_url": "https://api.github.com/users/nasheedyasin/following{/other_user}",
"gists_url": "https://api.github.com/users/nasheedyasin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nasheedyasin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nasheedyasin/subscriptions",
"organizations_url": "https://api.github.com/users/nasheedyasin/orgs",
"repos_url": "https://api.github.com/users/nasheedyasin/repos",
"events_url": "https://api.github.com/users/nasheedyasin/events{/privacy}",
"received_events_url": "https://api.github.com/users/nasheedyasin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Those are generic config parameters and as such, they can't be documented on a model (they are config parameters, not model parameters). The documentation is in the [config page](https://huggingface.co/transformers/main_classes/configuration.html).\r\n\r\nThe documentation of the `from_pretrained` [method](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained) also tells the user any parameter of the config can be passed to this method as a kwarg.",
"Understood, the params in question are mentioned there in detail. Thanks ππ½ @sgugger "
] | 1,616 | 1,617 | 1,617 | NONE | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Windows 10
- Python version: 3.6.12
- Tensorflow version (GPU?): 2.3.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help: @sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (DistilBert, Longformer):
The tasks I am working on is:
* [ ] my own task or dataset:
A document classification task with 2 or more custom classes.
## To reproduce
Steps to reproduce the behavior:
1. Navigate to the documentation of any [transformers.XForSequenceClassification](https://huggingface.co/transformers/model_doc/longformer.html#transformers.LongformerForSequenceClassification)
2. You will notice an absence of documentation for setting of any Sequence Classification related configs.
3. For example: `id2label, label2id, num_labels`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Documentation for SequenceClassification specific settings like: `id2label, label2id, num_labels` in the documentation page for [transformers.XForSequenceClassification](https://huggingface.co/transformers/model_doc/longformer.html#transformers.LongformerForSequenceClassification)
Additionally in the documentation of the `from_pretrained` method, when loading models fine-tuned on non `SequenceClassification` tasks.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10940/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10939/comments | https://api.github.com/repos/huggingface/transformers/issues/10939/events | https://github.com/huggingface/transformers/pull/10939 | 842,739,284 | MDExOlB1bGxSZXF1ZXN0NjAyMjQyMDY2 | 10,939 | [Example] Fixed finename for Saving null_odds in the evaluation stage in QA Examples | {
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,617 | 1,616 | CONTRIBUTOR | null | # What does this PR do?
Earlier, because of a typo in the code, the file was saved as `eval_null_odds_eval.json`; now it will be saved as `eval_null_odds.json` when saving null_odds in the evaluation stage for the SQuAD v2 dataset.
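Illustratively, the shape of the fix (the variable names below are placeholders; the actual names in the QA example scripts may differ):

```python
import os

prefix = "eval"
output_dir = "./output"

# before: the prefix was effectively applied twice, yielding
# "eval_null_odds_eval.json"
# null_odds_file = os.path.join(output_dir, f"{prefix}_null_odds_{prefix}.json")

# after: a single prefix, yielding "eval_null_odds.json"
null_odds_file = os.path.join(output_dir, f"{prefix}_null_odds.json")
```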
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? I mentioned it in #10482
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00 @sgugger | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10939/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10939/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10939",
"html_url": "https://github.com/huggingface/transformers/pull/10939",
"diff_url": "https://github.com/huggingface/transformers/pull/10939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10939.patch",
"merged_at": 1616950092000
} |
https://api.github.com/repos/huggingface/transformers/issues/10938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10938/comments | https://api.github.com/repos/huggingface/transformers/issues/10938/events | https://github.com/huggingface/transformers/issues/10938 | 842,713,901 | MDU6SXNzdWU4NDI3MTM5MDE= | 10,938 | saving pretrained models that were obtained from another model | {
"login": "dar-tau",
"id": 45885627,
"node_id": "MDQ6VXNlcjQ1ODg1NjI3",
"avatar_url": "https://avatars.githubusercontent.com/u/45885627?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dar-tau",
"html_url": "https://github.com/dar-tau",
"followers_url": "https://api.github.com/users/dar-tau/followers",
"following_url": "https://api.github.com/users/dar-tau/following{/other_user}",
"gists_url": "https://api.github.com/users/dar-tau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dar-tau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dar-tau/subscriptions",
"organizations_url": "https://api.github.com/users/dar-tau/orgs",
"repos_url": "https://api.github.com/users/dar-tau/repos",
"events_url": "https://api.github.com/users/dar-tau/events{/privacy}",
"received_events_url": "https://api.github.com/users/dar-tau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@dar-tau even though the file stores only weights, but if you added more layers those weights should be saved right? And when you try to load that model into RobertaForQuestionAnswering without your extra layers it should fail. ",
"Thanks for your reply. \r\nI'm using AutoModelForQuestionAnswering.from_pretrained(...) and not RobertaForQuestionAnswering (and as a matter of fact I'm actually replacing layers rather than just adding some).\r\n\r\nWhat I am aiming for is a way to make it \"forget\" it originated from Roberta, and save the entire model.\r\nMy desire is that it will be loadable with from_pretrained(..) and semantically equivalent to:\r\ntorch.save(model, \"file.pt\") \r\nmodel=torch.load(\"file.pt\")",
"@dar-tau I don't think this is something you can do with transformers, you need to probably do it using torch directly. You might want to check the config.json file which is saved.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored."
] | 1,616 | 1,620 | 1,620 | NONE | null | I am trying to save a pretrained model that I created from RobertaForQuestionAnswering by changing some layers.
However, when I load the model with from_pretrained, my new layers disappear. It makes sense since the binary seems to save only the model weights, but I wonder if there's a way to work around this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/10937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10937/comments | https://api.github.com/repos/huggingface/transformers/issues/10937/events | https://github.com/huggingface/transformers/pull/10937 | 842,669,671 | MDExOlB1bGxSZXF1ZXN0NjAyMTkxMTQz | 10,937 | [trainer metrics] fix cpu mem metrics; reformat runtime metric | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Woops, didn't mean to approve",
"> Thanks for the fix! Like @LysandreJik I would avoid adding `psutils` as a main dependency. We can have it come with a `is_psutils_available` and only compute the mems metrics when it's there.\r\n\r\nOh, I was thinking to assert to say to install it. They can disable the mem metrics flags if they don't want to install it.\r\n\r\nSo we have:\r\n\r\nA. `assert(\"pip install psutil to use memory metrics\")`\r\nB. `return if not is_psutils_available()`\r\n\r\n\r\nEither way works for me.\r\n",
"I would use option B personally. Having an error because something is not installed is not something we like (cf wandb).",
"looks like I did something wrong with the runtime metrics - checking.\r\n``` \r\nTrainer is attempting to log a value of \"0:00:02.18\" of type <class 'str'> for key \"train/train_runtime\" as a scalar. This invocation of Tensorboard's writer.add_scalar() is incorrect so we dropped this attribute.\r\n```\r\n\r\n**edit**: fixed",
"@sgugger, so the only missing part is how can the user reach the extensive docs in `TrainerMemoryTracker` docstring? It's sort of an internal class. \r\n\r\nPerhaps I should move the docs elsewhere so that the user can understand what the memory metrics are? Perhaps I can do the following:\r\n\r\n1. move the bulk of the current `TrainerMemoryTracker` docstring explaning the metrics to `save_metrics` and then it'll automatically be documented in the right place\r\n2. and add a note to `log_metrics` docstring to read the docstring of `save_metrics` for details\r\n3. and add a note to `TrainerMemoryTracker` docstring to read the details in `save_metrics` for details",
"I think your approach for making the doc more visible is a good one, so I'm fine with it. Also add there that `pip install psutil` is necessary to get the memory metrics?",
"> I think your approach for making the doc more visible is a good one, so I'm fine with it. \r\n\r\nGreat!\r\n\r\n> Also add there that `pip install psutil` is necessary to get the memory metrics?\r\n\r\nIt's already there ;)",
"Ah, missed it. Sorry about that!",
"OK, docs moved/reshaped/cross linked from 2 places. Decided to put the main doc in `log_metrics` since that's where they are most \"visual\". \r\n\r\nIf you could get one last look at the final version of the docs, that would be great. I expanded it a little bit more. I checked that they render well and cross-reference is a working link.",
"Hmm, I'm having second thoughts about skipping and not asserting if `psutil` is unavailable. Since there is a function flag to skip memory metrics, if the flag is `False` and we skip the metrics, that's not super intuitive. So if a user doesn't want the memory metrics they don't have to install `pustil` but can simply disable the metrics by setting the skip flag to `True`.\r\n\r\nPerhaps it'd be agreeable with you to change the behavior to option A. in https://github.com/huggingface/transformers/pull/10937#issuecomment-809529412, i.e. to assert.",
"I'd personally really like to avoid the script failing even if you can set an argument to avoid that."
] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | This PR improves and fixes trainer metrics:
* reworks the general RAM tracking, replacing `tracemalloc` with "sampling" via `psutil` - in particular, peak tracking using a thread (see the sketch after this list). `tracemalloc` proved to track nothing but Python memory allocations, so we were missing most of the general RAM in reports. Now we report much more (everything other than swapped-out memory).
* adds important details to memory metrics docs
* moves `psutil` dependency from just-for-tests to the core. I tried to find a built-in python equivalent, but the closest that I found is `resource.getrusage(resource.RUSAGE_SELF).ru_maxrss` which doesn't report what we need and it's not cross-platform.
* reformats secs to be in `hh:mm:ss.msec` format so it's much easier to read the runtime metric
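A simplified sketch of the sampling approach (the real `TrainerMemoryTracker` is more involved; the workload and polling interval below are illustrative):

```python
import os
import threading
import psutil

def track_peak_rss(stop: threading.Event, peak: list, interval: float = 0.001):
    # poll the process RSS from a background thread to catch the peak,
    # which tracemalloc (Python allocations only) cannot see
    proc = psutil.Process(os.getpid())
    while not stop.is_set():
        peak[0] = max(peak[0], proc.memory_info().rss)
        stop.wait(interval)

stop, peak = threading.Event(), [0]
thread = threading.Thread(target=track_peak_rss, args=(stop, peak), daemon=True)
thread.start()
buf = bytearray(200 * 2**20)  # stand-in workload allocating ~200MB
stop.set()
thread.join()
print(f"peak rss: {peak[0] / 2**20:.0f}MB")
```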
Discovered the `tracemalloc` limitation while tracking a huge memory leak in DeepSpeed when re-using deepspeed in the same process. My tests were consuming hundreds of MBs of general RAM and the metrics were reporting nothing.
before:
```
BS=4; PYTHONPATH=src USE_TF=0 python examples/seq2seq/run_translation.py --model_name_or_path \
t5-small --output_dir /tmp/zero3 --overwrite_output_dir --max_train_samples 64 --max_val_samples 64 \
--max_source_length 128 --max_target_length 128 --val_max_target_length 128 --do_train \
--num_train_epochs 1 --per_device_train_batch_size $BS --per_device_eval_batch_size $BS \
--learning_rate 3e-3 --warmup_steps 500 --predict_with_generate --logging_steps 0 --save_steps 0 \
--eval_steps 0 --group_by_length --adafactor --dataset_name wmt16 --dataset_config ro-en \
--source_lang en --target_lang ro --source_prefix "translate English to Romanian: "
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = 3MB
init_mem_cpu_peaked_delta = 0MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 60MB
train_mem_cpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 232MB
train_mem_gpu_peaked_delta = 472MB
train_runtime = 5.5261
train_samples = 64
train_samples_per_second = 1.448
```
after this PR:
```
***** train metrics *****
epoch = 1.0
init_mem_cpu_alloc_delta = 1298MB
init_mem_cpu_peaked_delta = 154MB
init_mem_gpu_alloc_delta = 230MB
init_mem_gpu_peaked_delta = 0MB
train_mem_cpu_alloc_delta = 3446MB
train_mem_cpu_peaked_delta = 0MB
train_mem_gpu_alloc_delta = 232MB
train_mem_gpu_peaked_delta = 472MB
train_runtime = 0:00:05.66
train_samples = 64
train_samples_per_second = 1.412
```
@sgugger, @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10937/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10937",
"html_url": "https://github.com/huggingface/transformers/pull/10937",
"diff_url": "https://github.com/huggingface/transformers/pull/10937.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10937.patch",
"merged_at": 1617050822000
} |
https://api.github.com/repos/huggingface/transformers/issues/10936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/10936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/10936/comments | https://api.github.com/repos/huggingface/transformers/issues/10936/events | https://github.com/huggingface/transformers/pull/10936 | 842,668,919 | MDExOlB1bGxSZXF1ZXN0NjAyMTkwNTcy | 10,936 | Fix initializing BertJapaneseTokenizer with AutoTokenizers | {
"login": "singletongue",
"id": 17107587,
"node_id": "MDQ6VXNlcjE3MTA3NTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/17107587?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/singletongue",
"html_url": "https://github.com/singletongue",
"followers_url": "https://api.github.com/users/singletongue/followers",
"following_url": "https://api.github.com/users/singletongue/following{/other_user}",
"gists_url": "https://api.github.com/users/singletongue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/singletongue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/singletongue/subscriptions",
"organizations_url": "https://api.github.com/users/singletongue/orgs",
"repos_url": "https://api.github.com/users/singletongue/repos",
"events_url": "https://api.github.com/users/singletongue/events{/privacy}",
"received_events_url": "https://api.github.com/users/singletongue/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,616 | 1,617 | 1,617 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a bug in loading some kinds of tokenizers using `AutoTokenizers.from_pretrained()`.
This issue is discussed in https://github.com/cl-tohoku/bert-japanese/issues/25.
When `sentencepiece` is not installed, the initialization of several tokenizers such as `BertJapaneseTokenizer`, `BarthezTokenizer`, and `MBart50Tokenizer` fails.
The exception is raised in `tokenizer_class_from_name()` when iterating over tokenizer classes which are `NoneType` objects.
Such tokenizer classes are set to `None` in `is_sentencepiece_available()` if `sentencepiece` is not available.
This error affects the initialization of `BertJapaneseTokenizer` even though it does not depend on `sentencepiece`.
```sh
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
Downloading: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 479/479 [00:00<00:00, 143kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/m-suzuki/.pyenv/versions/py3.7/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 370, in from_pretrained
tokenizer_class = tokenizer_class_from_name(tokenizer_class_candidate)
File "/Users/m-suzuki/.pyenv/versions/py3.7/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 273, in tokenizer_class_from_name
if c.__name__ == class_name:
AttributeError: 'NoneType' object has no attribute '__name__'
```
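A simplified sketch of the guard this fix needs (the real loop in `tokenization_auto.py` has a different signature; this only illustrates skipping classes that are `None` because their optional dependency is missing):

```python
def tokenizer_class_from_name(class_name, tokenizer_classes):
    for c in tokenizer_classes:
        # classes backed by sentencepiece resolve to None when it is not installed
        if c is not None and c.__name__ == class_name:
            return c
    return None
```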
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/10936/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/10936/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/10936",
"html_url": "https://github.com/huggingface/transformers/pull/10936",
"diff_url": "https://github.com/huggingface/transformers/pull/10936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/10936.patch",
"merged_at": 1617027976000
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.