| Feature | Type | Lengths / Values |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 value |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | string | 4 values |
| active_lock_reason | string | 2 values |
| body | string | lengths 0-234k |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
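The columns above mirror the GitHub REST issues payload. As a usage note, the sketch below shows one way to load and inspect rows with this schema using the `datasets` library; the dataset identifier and split name are placeholders (assumptions for illustration), not the actual ones.

```python
from datasets import load_dataset

# Placeholder identifier and split, not the real dataset path.
ds = load_dataset("user/transformers-github-issues", split="train")

# Each row combines scalar fields (number, title, state, body), nested
# dicts (user, reactions, pull_request), and a sequence of comment bodies.
for row in ds.select(range(3)):
    is_pr = row["pull_request"] is not None
    print(row["number"], "PR" if is_pr else "issue", row["state"], "-", row["title"])
    print("  comments:", len(row["comments"]),
          "| reactions:", row["reactions"]["total_count"])
```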
https://api.github.com/repos/huggingface/transformers/issues/1410
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1410/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1410/comments
https://api.github.com/repos/huggingface/transformers/issues/1410/events
https://github.com/huggingface/transformers/issues/1410
501,916,488
MDU6SXNzdWU1MDE5MTY0ODg=
1,410
migrate BertForQuestionAnswering from pytorch-pretrained-bert not produce the same result
{ "login": "ductm104", "id": 45566602, "node_id": "MDQ6VXNlcjQ1NTY2NjAy", "avatar_url": "https://avatars.githubusercontent.com/u/45566602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ductm104", "html_url": "https://github.com/ductm104", "followers_url": "https://api.github.com/users/ductm104/followers", "following_url": "https://api.github.com/users/ductm104/following{/other_user}", "gists_url": "https://api.github.com/users/ductm104/gists{/gist_id}", "starred_url": "https://api.github.com/users/ductm104/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ductm104/subscriptions", "organizations_url": "https://api.github.com/users/ductm104/orgs", "repos_url": "https://api.github.com/users/ductm104/repos", "events_url": "https://api.github.com/users/ductm104/events{/privacy}", "received_events_url": "https://api.github.com/users/ductm104/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hello! Have you put these models in `eval()` mode so as to deactivate the dropout modules?", "For completeness sake: did you train both models with the same random seed? Or are you just trying to evaluate models that you trained?\r\n\r\nMy go-to method is:\r\n\r\n```python\r\ndef set_seed(seed):\r\n \"\"\" Set all seeds to make results reproducible (deterministic mode).\r\n When seed is a false-y value or not supplied, disables deterministic mode. \"\"\"\r\n\r\n if seed:\r\n logging.info(f\"Running in deterministic mode with seed {seed}\")\r\n torch.manual_seed(seed)\r\n torch.cuda.manual_seed_all(seed)\r\n torch.backends.cudnn.deterministic = True\r\n torch.backends.cudnn.benchmark = False\r\n np.random.seed(seed)\r\n random.seed(seed)\r\n os.environ['PYTHONHASHSEED'] = str(seed)\r\n else:\r\n logging.info(f\"Running in non-deterministic mode\")\r\n```", "It does work when I put generated tensors into 2 models but doesn't when I put tensors I save before. Maybe I will rewrite the inference code with new code base and retrain the model.", "Are you sure your model is in evaluation mode? ", "yes, I already put it in evaluation mode", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,570
1,576
1,576
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): retrained Bert Language I am using the model on (English, Chinese....): multilingual - vietnamese The tasks I am working on is: * an official GLUE/SQUaD task: SQUaD * my own task or dataset: same format as SQUaD ## To Reproduce Steps to reproduce the behavior: I have trained a bert model on older pytorch-pretrained-bert and it works just fine. recently, I switch the code to the latest version of transformer. I use the following config: > bert_model = './my_model' > max_seq_length = 160 > doc_stride = 160 > predict_batch_size = 20 > n_best_size=20 > max_answer_length=30 > verbose_logging = False > no_cuda = True > seed= 42 > do_lower_case= True > version_2_with_negative = True > null_score_diff_threshold=0.0 > max_query_length = 64 > THRESH_HOLD = 0.95 I import 2 class: `from transformers import BertForQuestionAnswering as bqa1` `from pytorch_pretrained_bert.modeling import BertForQuestionAnswering as bqa2` and load 2 model as following : `model1 = bqa1.from_pretrained(args.bert_model)` `model2 = bqa2.from_pretrained(args.bert_model)` and input to models with the same tensors: `input_ids = torch.ones((1,160),dtype = torch.int64)` `segment_ids = torch.ones((1,160),dtype = torch.int64)` `input_mask = torch.ones((1,160),dtype = torch.int64) ` `model(input_ids, segment_ids, input_mask)` I also check if 2 model has same weights or not by using following guide [https://discuss.pytorch.org/t/check-if-models-have-same-weights/4351/3](guide). I seed the randomness of torch before inference 2 model by using: `seed = 0` `torch.manual_seed(seed)` `if torch.cuda.is_available():` ` torch.cuda.manual_seed_all(seed)` but 2 models still produce difference results.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1410/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1409
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1409/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1409/comments
https://api.github.com/repos/huggingface/transformers/issues/1409/events
https://github.com/huggingface/transformers/pull/1409
501,851,798
MDExOlB1bGxSZXF1ZXN0MzI0MDQzOTMx
1,409
Evaluation result.txt path changing #1286
{ "login": "brian41005", "id": 13401708, "node_id": "MDQ6VXNlcjEzNDAxNzA4", "avatar_url": "https://avatars.githubusercontent.com/u/13401708?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brian41005", "html_url": "https://github.com/brian41005", "followers_url": "https://api.github.com/users/brian41005/followers", "following_url": "https://api.github.com/users/brian41005/following{/other_user}", "gists_url": "https://api.github.com/users/brian41005/gists{/gist_id}", "starred_url": "https://api.github.com/users/brian41005/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brian41005/subscriptions", "organizations_url": "https://api.github.com/users/brian41005/orgs", "repos_url": "https://api.github.com/users/brian41005/repos", "events_url": "https://api.github.com/users/brian41005/events{/privacy}", "received_events_url": "https://api.github.com/users/brian41005/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great, that looks good to me!", "Ok, merging, thanks @brian41005 " ]
1,570
1,570
1,570
CONTRIBUTOR
null
Here is the suggestion that I mention in issues #1286
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1409/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1409", "html_url": "https://github.com/huggingface/transformers/pull/1409", "diff_url": "https://github.com/huggingface/transformers/pull/1409.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1409.patch", "merged_at": 1570583674000 }
https://api.github.com/repos/huggingface/transformers/issues/1408
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1408/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1408/comments
https://api.github.com/repos/huggingface/transformers/issues/1408/events
https://github.com/huggingface/transformers/issues/1408
501,784,074
MDU6SXNzdWU1MDE3ODQwNzQ=
1,408
Batched BertForNextSentencePrediction with variable length sentences
{ "login": "murdo25", "id": 16654599, "node_id": "MDQ6VXNlcjE2NjU0NTk5", "avatar_url": "https://avatars.githubusercontent.com/u/16654599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/murdo25", "html_url": "https://github.com/murdo25", "followers_url": "https://api.github.com/users/murdo25/followers", "following_url": "https://api.github.com/users/murdo25/following{/other_user}", "gists_url": "https://api.github.com/users/murdo25/gists{/gist_id}", "starred_url": "https://api.github.com/users/murdo25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/murdo25/subscriptions", "organizations_url": "https://api.github.com/users/murdo25/orgs", "repos_url": "https://api.github.com/users/murdo25/repos", "events_url": "https://api.github.com/users/murdo25/events{/privacy}", "received_events_url": "https://api.github.com/users/murdo25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello! \r\n\r\n1 - Indeed, if you want to have several sequences of variable length in a single batch, you should pad the shorter sequences.\r\n\r\n\r\n2 - In the [`BertForNextSentencePrediction ` documentation](https://huggingface.co/transformers/model_doc/bert.html#bertfornextsentenceprediction) is written the following: \r\n\r\n`attention_mask`: Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: `1` for tokens that are NOT MASKED, `0` for MASKED tokens. \r\n\r\nIf using the `attention_mask`, there should be no difference between a model's predictions of a sequence and its padded counterpart. \r\n\r\n\r\n3 - The segment ids padding indices can change according to the model. I believe it is `0` for most models, but `4` in the case of XLNet. You seem to be padding with `0` in your example, which is the way to go!", "Thank you!!! This is exactly what I was missing. \r\n\r\n```python3\r\nimport torch\r\nfrom pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction\r\n\r\n# Load pre-trained model tokenizer (vocabulary)\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n# Tokenized inputs\r\ntext1 = \"[CLS] Who was Jim ? [SEP] Jim Henson was a puppeteer [SEP]\"\r\ntext2 = \"[CLS] Who was Jim ? [SEP] Jim Henson was a puppeteer [SEP] [PAD] [PAD]\"\r\ntokenized_text1 = tokenizer.tokenize(text1)\r\ntokenized_text2 = tokenizer.tokenize(text2)\r\n\r\n# Convert token to vocabulary indices\r\nindexed_tokens1 = tokenizer.convert_tokens_to_ids(tokenized_text1)\r\nindexed_tokens2 = tokenizer.convert_tokens_to_ids(tokenized_text2)\r\n\r\n# Define sentence A and B indices associated to 1st and 2nd sentences \r\nsegments_ids1 = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]\r\nsegments_ids2 = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]\r\n\r\n#Attention Mask [1] over tokens, [0] over padding\r\nattention_mask = torch.FloatTensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]])\r\n\r\n# Convert inputs to PyTorch tensors\r\ntokens_tensor1 = torch.tensor([indexed_tokens1])\r\ntokens_tensor2 = torch.tensor([indexed_tokens2])\r\nsegments_tensors1 = torch.tensor([segments_ids1])\r\nsegments_tensors2 = torch.tensor([segments_ids2])\r\n\r\n# Load pre-trained model (weights)\r\nmodel = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')\r\nmodel.eval()\r\n\r\n# Predict is Next Sentence ?\r\npredictions1 = model(tokens_tensor1, segments_tensors1 )\r\npredictions2 = model(tokens_tensor2, segments_tensors2, attention_mask=attention_mask )\r\n\r\nprint(predictions1) #(tensor([[ 5.6165, -5.2786]], grad_fn=<AddmmBackward>),)\r\nprint(predictions2) #(tensor([[ 5.6165, -5.2786]], grad_fn=<AddmmBackward>),)\r\n```\r\n\r\nWith the attention mask now over tokens I get the same output without degradation. \r\n:)\r\n" ]
1,570
1,570
1,570
NONE
null
## ❓ Questions & Help What's the proper way to pad a batch of variable length sentences for the BertForNextSentencePrediction model? I want to batch a list of sentences, and each sentence can have any length < max_seq_len. To fit them into a token tensor I assume I will need some form of padding? Here's an example with 2 candidate sentences where the first sentence has no padding, and the second has 2 padded 0s. ```python import torch from pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # Tokenized inputs text1 = "[CLS] Who was Jim ? [SEP] Jim Henson was a puppeteer [SEP]" text2 = "[CLS] Who was Jim ? [SEP] Jim Henson was a puppeteer [SEP] [PAD] [PAD]" tokenized_text1 = tokenizer.tokenize(text1) tokenized_text2 = tokenizer.tokenize(text2) # Convert token to vocabulary indices indexed_tokens1 = tokenizer.convert_tokens_to_ids(tokenized_text1) indexed_tokens2 = tokenizer.convert_tokens_to_ids(tokenized_text2) # Define sentence A and B indices associated to 1st and 2nd sentences segments_ids1 = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1] segments_ids2 = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0] # Convert inputs to PyTorch tensors tokens_tensor1 = torch.tensor([indexed_tokens1]) tokens_tensor2 = torch.tensor([indexed_tokens2]) segments_tensors1 = torch.tensor([segments_ids1]) segments_tensors2 = torch.tensor([segments_ids2]) # Load pre-trained model (weights) model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased') model.eval() # Predict is Next Sentence ? predictions1 = model(tokens_tensor1, segments_tensors1 ) predictions2 = model(tokens_tensor2, segments_tensors2 ) print(predictions1) #(tensor([[ 5.6165, -5.2786]], grad_fn=<AddmmBackward>),) print(predictions2) #(tensor([[ 5.0919, -4.4939]], grad_fn=<AddmmBackward>),) ``` As the number padding 0s increases the the confidence of the model continues to decline. We haven't been able to find any documentation for setting up the padding sequences, especially for the segment ids. Any idea how we can set this up? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1408/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1407
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1407/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1407/comments
https://api.github.com/repos/huggingface/transformers/issues/1407/events
https://github.com/huggingface/transformers/issues/1407
501,776,758
MDU6SXNzdWU1MDE3NzY3NTg=
1,407
GPT-2 Training on non-english text
{ "login": "angelorodem", "id": 11444489, "node_id": "MDQ6VXNlcjExNDQ0NDg5", "avatar_url": "https://avatars.githubusercontent.com/u/11444489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/angelorodem", "html_url": "https://github.com/angelorodem", "followers_url": "https://api.github.com/users/angelorodem/followers", "following_url": "https://api.github.com/users/angelorodem/following{/other_user}", "gists_url": "https://api.github.com/users/angelorodem/gists{/gist_id}", "starred_url": "https://api.github.com/users/angelorodem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/angelorodem/subscriptions", "organizations_url": "https://api.github.com/users/angelorodem/orgs", "repos_url": "https://api.github.com/users/angelorodem/repos", "events_url": "https://api.github.com/users/angelorodem/events{/privacy}", "received_events_url": "https://api.github.com/users/angelorodem/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! By \"GPT-2 training\" two different methods can be understood: training from scratch, and fine-tuning. \r\n\r\nIf you're looking at training GPT-2 on a different language such as Portuguese, then training from scratch seems necessary. You could use the [language modeling finetuning example](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py) as a start, but please be aware that training such a language model from scratch takes a humongous amount of power and data, which would cost a lot. I can point you to [this issue](https://github.com/huggingface/transformers/issues/1356) which discusses training such a model on French.\r\n\r\nIf you're looking at training your model on programming languages that have a lot of overlapping vocabulary with English (say Python with a lot of documentation), maybe you could fine-tune the original GPT-2 to your dataset (still using the [lm finetuning example](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py)), but I'm not sure of the results. ", "I'm training Russian GPT-2 at the moment. [I've tried to make Readme useful.](https://github.com/mgrankin/ru_transformers)", "> If you're looking at training GPT-2 on a different language such as Portuguese, then training from scratch seems necessary. \r\n\r\nIt is definitely not necessary to start from scratch. I'd argue the opposite, it'd be useful to start with pre-trained GPT-2 even if you replacing the whole vocabulary (English -> Portuguese).", "Alright @mgrankin, that's good to know, thanks!", "> > If you're looking at training GPT-2 on a different language such as Portuguese, then training from scratch seems necessary.\r\n> \r\n> It is definitely not necessary to start from scratch. I'd argue the opposite, it'd be useful to start with pre-trained GPT-2 even if you replacing the whole vocabulary (English -> Portuguese).\r\n\r\nBut in $$$ terms it would still be closer to training from scratch than fine-tuning, right?", "> I'm training Russian GPT-2 at the moment. [I've tried to make Readme useful.](https://github.com/mgrankin/ru_transformers)\r\n\r\nThank you @mgrankin for sharing your steps. I plan to do the same for Hindi language.\r\nHow much is it costing you to train?", "> How much is it costing you to train?\r\n\r\nIt’s hard to tell overall cost because the training is in the process. I’ve got a workstation with 4 Titan RTX and I don’t use cloud GPUs at the moment. I use one GPU per model. The training already lasted about two weeks now and gpt2-medium gives me perplexity = 21 on my validation set. \r\n\r\nSince PyTorch 1.3 was released recently with TPU support I’m thinking of trying to use TPU to speed up the training. I will update the repo in the next few days in case of success. \r\n", "> But in $$$ terms it would still be closer to training from scratch than fine-tuning, right?\r\n\r\nActually, in terms of quality it would be great if somebody try to train GPT2 on Portuguese from scratch vs fine-tune from pretrained English model. My guess that fine-tuning is better is based on intuition that non-random weights could be reused. Also, English is probably the most resourceful language and WebText is a great dataset. If you can build dataset with same or better quality you can give it a shot and train GPT-2 from scratch. \r\n\r\nIn terms of money it should be way cheaper to fine-tune. 
But I will say that with confidence then I'll finish the Russian GPT-2.\r\n\r\n", "Thanks for the answer @mgrankin i'm anxious to see your results!", "> I'm training Russian GPT-2 at the moment. [I've tried to make Readme useful.](https://github.com/mgrankin/ru_transformers)\r\n\r\n@mgrankin Could you explain to me how you trained your model from scratch with BERT?\r\n\r\nI would like to train BERT from scratch for a textual base in PT-BR (8GB data). Is it possible to use the run_lm_finetuning.py code to perform this process without using the multi-language bert model?\r\n\r\nI already have a vocab.txt for the PT-BR base and I don't want to load initial weights.\r\n\r\nIs there any script or tutorial to perform this process step by step?", "Hi, I also have a repo which allows to train gpt-2 language model on non-english text with a custom BPE tokenizer. But it uses a different gpt-2 implementation so currently it's unable to use pre-trained GPT-2 (although a conversion script should be possible, because it's a port of original TF implementation). Here is the link https://github.com/lopuhin/transformer-lm", "Hello, this thread is what I'm looking (with the one about GPT-2 and BERT into French) for but I'm not sure I found the answer to my questions:\r\n- how long does it take to go through GPT-2 on non-english text?\r\n- what configuration of GPUs?\r\n- what size of corpus?\r\n\r\nMany thanks in advance for your answers!", "Why don't you use [CamemBERT](https://camembert-model.fr/) model, which is dedicated to French language? **It's available in HuggingFace's Transformers** too (since few days ago, so try out :D)! If you want absolutely to use GPT2 model, I can answer to you too!\r\n\r\n> Hello, this thread is what I'm looking (with the one about GPT-2 and BERT into French) for but I'm not sure I found the answer to my questions:\r\n> \r\n> * how long does it take to go through GPT-2 on non-english text?\r\n> * what configuration of GPUs?\r\n> * what size of corpus?\r\n> \r\n> Many thanks in advance for your answers!", "Hi @piegu, please do not post the same message in two issues (that are linked with one another)", "> Hi @piegu, please do not post the same message in two issues (that are linked with one another)\r\n\r\nHello @julien-c. Ok but then I have to update in this thread my question to French and Portuguese (same 3 questions about fine-tuning GPT-2 and BERT). Thank you.\r\n\r\n", "> Why don't you use [CamemBERT](https://camembert-model.fr/) model, which is dedicated to French language? **It's available in HuggingFace's Transformers** too (since few days ago, so try out :D)! If you want absolutely to use GPT2 model, I can answer to you too!\r\n\r\nThanks @TheEdoardo93. For sure I will test CamemBERT but it does not answer my 3 questions :-) Great if you can answer about GPT-2 at least. Thank you.", "Hi @nikhilno1 , Did you manage to train it on Hindi?", "Hi @GladiatorX, No I didn't. Life got in the way. 
:)\r\nWould you like to work on it together?", "@mgrankin Out of curiosity, how did you collect your 230 GB Russian dataset?\r\n I would love to do something similar for another language, and I'm looking for tips", "@BoxxiDev you can use something like a scraper/crawler like [Scrapy](https://scrapy.org/) (or something like it) on a russian site, and then you can use something like AWS Comprehend to get the language (or make a language detector yourself) and filter only Russian results.\r\n\r\nto get tons of data use some distributed scraper on a cloud service like AWS.", "@BoxxiDev Library projects have been working in Russia for a very long time, and they publish a torrent file with all the contents in fb2. [example](https://booktracker.org/viewtopic.php?t=1198)", "Hi @nikhilno1 , +1, \r\nDid you manage to train it on Hindi?\r\n\r\n", "> > If you're looking at training GPT-2 on a different language such as Portuguese, then training from scratch seems necessary.\r\n> \r\n> It is definitely not necessary to start from scratch. I'd argue the opposite, it'd be useful to start with pre-trained GPT-2 even if you replacing the whole vocabulary (English -> Portuguese).\r\n\r\n@mgrankin you say that it is not necessary to train from scratch, but assumed the vocabulary will not overlap (let's say English and Russian), how you do it?\r\n\r\nAlso someone else is talking about BERT based models (like the French model CamemBERT), but those models are [MASK] token based models, so it would need a different approach for text generation à la GPT-2", "@loretoparisi \r\n\r\nBy using progressive unfreezing. This's a technique from Transfer Learning. First, you freeze all layers and unfreeze only those layers that you expect to change the most - the embeddings and adjacent to the embeddings, you train them, you unfreeze a bit more layers, repeat. I’d advise taking a [course.fast.ai](https://course.fast.ai) course to be very comfortable with the concept.\r\n\r\nYou can look at the code [here](https://github.com/mgrankin/ru_transformers/blob/64d7a68e067737c35c7bf3986cb1845aaf54a163/tpu_lm_finetuning.py#L677).\r\n", "@mgrankin thank you, in the meanwhile I'm following this approach [BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model](https://arxiv.org/abs/1902.04094)", "> \r\n> By using progressive unfreezing. This's a technique from Transfer Learning. First, you freeze all layers and unfreeze only those layers that you expect to change the most - the embeddings and adjacent to the embeddings, you train them, you unfreeze a bit more layers, repeat. I’d advise taking a [course.fast.ai](https://course.fast.ai) course to be very comfortable with the concept.\r\n> \r\n> You can look at the code [here](https://github.com/mgrankin/ru_transformers/blob/64d7a68e067737c35c7bf3986cb1845aaf54a163/tpu_lm_finetuning.py#L677).\r\n\r\nHi Mikhail. In your (great) code, you unfreeze groups of 3 layers (see [code](https://github.com/mgrankin/ru_transformers/blob/64d7a68e067737c35c7bf3986cb1845aaf54a163/tpu_lm_finetuning.py#L684) and below). There is a specific reason or it is the result of your tests? Thanks.\r\n\r\n`need_grads = set(flat[:i_start+args.unfreeze_level*3]) | set(flat[-(i_end+args.unfreeze_level*3):])`", "@piegu that's a heuristic, feel free to experiment with the number.", "> Hi @nikhilno1 , +1,\r\n> Did you manage to train it on Hindi?\r\n\r\nStarting it now. 
Let me know if you want to work together.", "> > Hi @nikhilno1 , +1,\r\n> > Did you manage to train it on Hindi?\r\n> \r\n> Starting it now. Let me know if you want to work together.\r\n\r\n@nikhilno1 Im interested to do this for tamil, were you able to figure our hindi ?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,570
1,615
1,597
NONE
null
## ❓ Questions & Help I wish to train a GPT-2 in different languages, like Portuguese and maybe some programming languages like C++ (and play with token predictions). But I could not find any examples of how to take an X dataset (like c++ source files), create the tokens from it and train a GPT-2 to predict new tokens from the knowledge of this X dataset. Is this even possible? (if yes, how could one do this?) Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1407/reactions", "total_count": 15, "+1": 15, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1407/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1406
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1406/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1406/comments
https://api.github.com/repos/huggingface/transformers/issues/1406/events
https://github.com/huggingface/transformers/pull/1406
501,711,249
MDExOlB1bGxSZXF1ZXN0MzIzOTMyNjIw
1,406
Distil update
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1406?src=pr&el=h1) Report\n> Merging [#1406](https://codecov.io/gh/huggingface/transformers/pull/1406?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/63ed224b7c550ead5f9599187e665ded57ce80d4?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1406/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1406?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1406 +/- ##\n=======================================\n Coverage 84.72% 84.72% \n=======================================\n Files 84 84 \n Lines 12591 12591 \n=======================================\n Hits 10668 10668 \n Misses 1923 1923\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1406?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1406?src=pr&el=footer). Last update [63ed224...193bbda](https://codecov.io/gh/huggingface/transformers/pull/1406?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,570
1,651
1,570
MEMBER
null
Update Distil* - update on distilbert weights - add distilgpt2 weights - link to the paper - big update on code
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1406/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1406", "html_url": "https://github.com/huggingface/transformers/pull/1406", "diff_url": "https://github.com/huggingface/transformers/pull/1406.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1406.patch", "merged_at": 1570112832000 }
https://api.github.com/repos/huggingface/transformers/issues/1405
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1405/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1405/comments
https://api.github.com/repos/huggingface/transformers/issues/1405/events
https://github.com/huggingface/transformers/pull/1405
501,659,443
MDExOlB1bGxSZXF1ZXN0MzIzODg5OTEz
1,405
Re-order XLNet attention head outputs for better perf
{ "login": "slayton58", "id": 4992598, "node_id": "MDQ6VXNlcjQ5OTI1OTg=", "avatar_url": "https://avatars.githubusercontent.com/u/4992598?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slayton58", "html_url": "https://github.com/slayton58", "followers_url": "https://api.github.com/users/slayton58/followers", "following_url": "https://api.github.com/users/slayton58/following{/other_user}", "gists_url": "https://api.github.com/users/slayton58/gists{/gist_id}", "starred_url": "https://api.github.com/users/slayton58/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slayton58/subscriptions", "organizations_url": "https://api.github.com/users/slayton58/orgs", "repos_url": "https://api.github.com/users/slayton58/repos", "events_url": "https://api.github.com/users/slayton58/events{/privacy}", "received_events_url": "https://api.github.com/users/slayton58/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Remaining CI failures are valid, they look to assume a `ijbn` ordering for all attention-based things, which no longer holds.\r\n\r\nI'm happy to add additional functionality to get these tests passing, but I'd like input on how you'd like that done (I'd lean to passing an optional `expected_attention_size` which is `[key_len, batch_size, num_heads]` by default, and checking that instead of assembling the expected sizes on-the-fly in the test(s))", "Please ignore above :) test errors were due to a missed transpose on my part (if attention outputs are returned, they need to be transposed from `bnij` to `ijbn` ordering to keep the interface from attention <-> the rest of the code the same as before.)", "This is a great work, thanks a lot @slayton58.\r\n\r\nCan you confirm this has no noticeable impact on downstream performances (in terms of evaluation metrics), for instance on your SQuAD tests?", "@thomwolf I have been testing against the config from https://github.com/huggingface/transformers/issues/947#issue-476001056 with seq-length=512, and obtained a consistent f1 score of 83 across both `ijbn` and `bnij` attention head orderings (also across fp32/fp16 O1/fp16 O2)\r\n\r\nWhen there's a PR issued for the changes in https://github.com/huggingface/transformers/issues/947#issuecomment-535989890 I'd be happy to go ahead and repro those numbers with this change if you'd like the additional security.", "Awesome, ok let's merge this then." ]
1,570
1,570
1,570
CONTRIBUTOR
null
Significant performance boost over the original orderings. On an already somewhat optimised branch this gave me > 2x end-to-end throughput on a squad xlnet fine-tuning task (batch 8, seq-length 512, fp16, amp opt level = O2) Justifying this is the contraction ``` attn_vec = torch.einsum('bnij,jbnd->ibnd', attn_prob, v_head_h) ``` Given how `torch.einsum` and tensor contractions work, this is a batched gemm with batch dimension `bn` and gemm dimension `(i x j) * (j x d)`. Moving `bn` to be the first dimensions for the first input eliminates a sizable transpose that would otherwise need to be done.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1405/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1405", "html_url": "https://github.com/huggingface/transformers/pull/1405", "diff_url": "https://github.com/huggingface/transformers/pull/1405.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1405.patch", "merged_at": 1570788659000 }
https://api.github.com/repos/huggingface/transformers/issues/1404
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1404/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1404/comments
https://api.github.com/repos/huggingface/transformers/issues/1404/events
https://github.com/huggingface/transformers/issues/1404
501,640,115
MDU6SXNzdWU1MDE2NDAxMTU=
1,404
How to speedup BERT eval
{ "login": "abaheti95", "id": 9119028, "node_id": "MDQ6VXNlcjkxMTkwMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/9119028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abaheti95", "html_url": "https://github.com/abaheti95", "followers_url": "https://api.github.com/users/abaheti95/followers", "following_url": "https://api.github.com/users/abaheti95/following{/other_user}", "gists_url": "https://api.github.com/users/abaheti95/gists{/gist_id}", "starred_url": "https://api.github.com/users/abaheti95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abaheti95/subscriptions", "organizations_url": "https://api.github.com/users/abaheti95/orgs", "repos_url": "https://api.github.com/users/abaheti95/repos", "events_url": "https://api.github.com/users/abaheti95/events{/privacy}", "received_events_url": "https://api.github.com/users/abaheti95/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Did you try using DistilBERT? Inference should be ~ 60% faster", "Turns out I wasn't using the using the gpu correctly. I moved the model and the inputs to gpu by doing `.to(device)` and it became 100x faster. Thanks for the suggestion." ]
1,570
1,570
1,570
NONE
null
## ❓ Questions & Help Is there a simple way to speedup `.eval()` when using the BERT model Specifically I am using `BertForSequenceClassification`. I have finetuned a the model separately on my own data and I am trying to get hidden representations after doing `model.eval()` as follows: `last_hidden_layer, all_hidden_states = model(input_ids)` However for each input it is taking about 2.3 seconds on `cpu` and 2.6 seconds on `gpu`. Is there a way where I can do faster than this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1404/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1404/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1403
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1403/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1403/comments
https://api.github.com/repos/huggingface/transformers/issues/1403/events
https://github.com/huggingface/transformers/issues/1403
501,626,602
MDU6SXNzdWU1MDE2MjY2MDI=
1,403
Is it possible to modify the parameters in GPT-2?
{ "login": "weiguowilliam", "id": 31396452, "node_id": "MDQ6VXNlcjMxMzk2NDUy", "avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weiguowilliam", "html_url": "https://github.com/weiguowilliam", "followers_url": "https://api.github.com/users/weiguowilliam/followers", "following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}", "gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}", "starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions", "organizations_url": "https://api.github.com/users/weiguowilliam/orgs", "repos_url": "https://api.github.com/users/weiguowilliam/repos", "events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}", "received_events_url": "https://api.github.com/users/weiguowilliam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! GPT-2, like all models in this library, directly inherit from pytorch's `nn.Module`, so you're free to finetune them or modify their parameters as you wish.", "> Hi! GPT-2, like all models in this library, directly inherit from pytorch's `nn.Module`, so you're free to finetune them or modify their parameters as you wish.\r\n\r\nThank you for your help!" ]
1,570
1,570
1,570
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I wonder whether it's possible to modify the parameters in GPT-2? Since we can not train GPT-2, modifying the parameters and observing the changes in results will be helpful. Thank you in advance!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1403/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1402
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1402/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1402/comments
https://api.github.com/repos/huggingface/transformers/issues/1402/events
https://github.com/huggingface/transformers/issues/1402
501,618,164
MDU6SXNzdWU1MDE2MTgxNjQ=
1,402
Defining Models in TF 2.0 and Extending Them
{ "login": "vyraun", "id": 17217068, "node_id": "MDQ6VXNlcjE3MjE3MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vyraun", "html_url": "https://github.com/vyraun", "followers_url": "https://api.github.com/users/vyraun/followers", "following_url": "https://api.github.com/users/vyraun/following{/other_user}", "gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}", "starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vyraun/subscriptions", "organizations_url": "https://api.github.com/users/vyraun/orgs", "repos_url": "https://api.github.com/users/vyraun/repos", "events_url": "https://api.github.com/users/vyraun/events{/privacy}", "received_events_url": "https://api.github.com/users/vyraun/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,570
1,575
1,575
NONE
null
## ❓ Questions & Help Hi, Thanks for the awesome library 😊 I saw the examples on fine-tuning the models. My question is, how could we get model definitions i.e the layered architectures (model.summary) in Keras. Any example notebooks demonstrating how we could get the model definitions and extend the architectures (by subclassing or manually tweaking the layers)? Cheers, Vikas
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1402/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1401
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1401/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1401/comments
https://api.github.com/repos/huggingface/transformers/issues/1401/events
https://github.com/huggingface/transformers/issues/1401
501,555,845
MDU6SXNzdWU1MDE1NTU4NDU=
1,401
XLM add new models
{ "login": "vvssttkk", "id": 8581044, "node_id": "MDQ6VXNlcjg1ODEwNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8581044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vvssttkk", "html_url": "https://github.com/vvssttkk", "followers_url": "https://api.github.com/users/vvssttkk/followers", "following_url": "https://api.github.com/users/vvssttkk/following{/other_user}", "gists_url": "https://api.github.com/users/vvssttkk/gists{/gist_id}", "starred_url": "https://api.github.com/users/vvssttkk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvssttkk/subscriptions", "organizations_url": "https://api.github.com/users/vvssttkk/orgs", "repos_url": "https://api.github.com/users/vvssttkk/repos", "events_url": "https://api.github.com/users/vvssttkk/events{/privacy}", "received_events_url": "https://api.github.com/users/vvssttkk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Those models actually are available, we just forgot to add them to the documentation :). Thanks for letting us know!" ]
1,570
1,570
1,570
NONE
null
hi can u add to your libs new pretrained models by XLM, like `mlm_17_1280.pth` & `mlm_100_1280.pth`?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1401/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1400
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1400/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1400/comments
https://api.github.com/repos/huggingface/transformers/issues/1400/events
https://github.com/huggingface/transformers/pull/1400
501,547,727
MDExOlB1bGxSZXF1ZXN0MzIzNzk5MjI4
1,400
Fix typo: initialy -> initially
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great thanks!", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1400?src=pr&el=h1) Report\n> Merging [#1400](https://codecov.io/gh/huggingface/transformers/pull/1400?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/391db836ab7ed2ca61c51a7cf1b135b6ab92be58?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1400/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1400?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1400 +/- ##\n=======================================\n Coverage 84.72% 84.72% \n=======================================\n Files 84 84 \n Lines 12591 12591 \n=======================================\n Hits 10668 10668 \n Misses 1923 1923\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1400?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1400/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `88.23% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1400/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2JlcnQucHk=) | `95.7% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1400/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `88.17% <ø> (ø)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1400/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX2Rpc3RpbGJlcnQucHk=) | `96.61% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1400?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1400?src=pr&el=footer). Last update [391db83...0c39053](https://codecov.io/gh/huggingface/transformers/pull/1400?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,570
1,570
1,570
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1400/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1400", "html_url": "https://github.com/huggingface/transformers/pull/1400", "diff_url": "https://github.com/huggingface/transformers/pull/1400.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1400.patch", "merged_at": 1570028659000 }
https://api.github.com/repos/huggingface/transformers/issues/1399
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1399/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1399/comments
https://api.github.com/repos/huggingface/transformers/issues/1399/events
https://github.com/huggingface/transformers/issues/1399
501,210,201
MDU6SXNzdWU1MDEyMTAyMDE=
1,399
Generate Variable Length Text With GPT2
{ "login": "neild0", "id": 22753813, "node_id": "MDQ6VXNlcjIyNzUzODEz", "avatar_url": "https://avatars.githubusercontent.com/u/22753813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neild0", "html_url": "https://github.com/neild0", "followers_url": "https://api.github.com/users/neild0/followers", "following_url": "https://api.github.com/users/neild0/following{/other_user}", "gists_url": "https://api.github.com/users/neild0/gists{/gist_id}", "starred_url": "https://api.github.com/users/neild0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neild0/subscriptions", "organizations_url": "https://api.github.com/users/neild0/orgs", "repos_url": "https://api.github.com/users/neild0/repos", "events_url": "https://api.github.com/users/neild0/events{/privacy}", "received_events_url": "https://api.github.com/users/neild0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! In Write With Transformer, we use the context to predict the following token. We then add that token to the initial context, to generate the following one. This way we can generate long sequences according to a given token.\r\n\r\nIn that app we stop generating tokens once we have reached a given time, or once we have seen an end of sentence token. We, therefore, don't generate variable length text suggestions, we just adjust the batch according to end tokens identified in our results.", "Thanks for the explanation!" ]
1,569
1,570
1,570
NONE
null
This might be obviously explained in the documentation, but I've been browsing through the code for a while and can't seem to find a resolution, so thank you in advance for your help. As demoed with Write with Transformers, it seems to generate variable length text suggestions. I was wondering how this would be possible with the transformers library given, and how it would be possible to interface the largest version of GPT2 to do so. Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1399/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1398
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1398/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1398/comments
https://api.github.com/repos/huggingface/transformers/issues/1398/events
https://github.com/huggingface/transformers/pull/1398
501,209,385
MDExOlB1bGxSZXF1ZXN0MzIzNTI4NDM4
1,398
Fixed typo in docs README
{ "login": "dveselov", "id": 10365705, "node_id": "MDQ6VXNlcjEwMzY1NzA1", "avatar_url": "https://avatars.githubusercontent.com/u/10365705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dveselov", "html_url": "https://github.com/dveselov", "followers_url": "https://api.github.com/users/dveselov/followers", "following_url": "https://api.github.com/users/dveselov/following{/other_user}", "gists_url": "https://api.github.com/users/dveselov/gists{/gist_id}", "starred_url": "https://api.github.com/users/dveselov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dveselov/subscriptions", "organizations_url": "https://api.github.com/users/dveselov/orgs", "repos_url": "https://api.github.com/users/dveselov/repos", "events_url": "https://api.github.com/users/dveselov/events{/privacy}", "received_events_url": "https://api.github.com/users/dveselov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1398?src=pr&el=h1) Report\n> Merging [#1398](https://codecov.io/gh/huggingface/transformers/pull/1398?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/391db836ab7ed2ca61c51a7cf1b135b6ab92be58?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1398/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1398?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1398 +/- ##\n=======================================\n Coverage 84.72% 84.72% \n=======================================\n Files 84 84 \n Lines 12591 12591 \n=======================================\n Hits 10668 10668 \n Misses 1923 1923\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1398?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1398?src=pr&el=footer). Last update [391db83...cd69bc9](https://codecov.io/gh/huggingface/transformers/pull/1398?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "👍 " ]
1,569
1,570
1,570
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1398/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1398", "html_url": "https://github.com/huggingface/transformers/pull/1398", "diff_url": "https://github.com/huggingface/transformers/pull/1398.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1398.patch", "merged_at": 1570585962000 }
https://api.github.com/repos/huggingface/transformers/issues/1397
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1397/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1397/comments
https://api.github.com/repos/huggingface/transformers/issues/1397/events
https://github.com/huggingface/transformers/pull/1397
501,203,132
MDExOlB1bGxSZXF1ZXN0MzIzNTIzMzcx
1,397
remove token type inputs from roberta - fix #1234
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,569
1,651
1,570
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1397/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1397/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1397", "html_url": "https://github.com/huggingface/transformers/pull/1397", "diff_url": "https://github.com/huggingface/transformers/pull/1397.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1397.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1396
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1396/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1396/comments
https://api.github.com/repos/huggingface/transformers/issues/1396/events
https://github.com/huggingface/transformers/pull/1396
501,084,428
MDExOlB1bGxSZXF1ZXN0MzIzNDI1OTkx
1,396
Fix syntax typo in README.md
{ "login": "dnahurnyi", "id": 27808442, "node_id": "MDQ6VXNlcjI3ODA4NDQy", "avatar_url": "https://avatars.githubusercontent.com/u/27808442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dnahurnyi", "html_url": "https://github.com/dnahurnyi", "followers_url": "https://api.github.com/users/dnahurnyi/followers", "following_url": "https://api.github.com/users/dnahurnyi/following{/other_user}", "gists_url": "https://api.github.com/users/dnahurnyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/dnahurnyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnahurnyi/subscriptions", "organizations_url": "https://api.github.com/users/dnahurnyi/orgs", "repos_url": "https://api.github.com/users/dnahurnyi/repos", "events_url": "https://api.github.com/users/dnahurnyi/events{/privacy}", "received_events_url": "https://api.github.com/users/dnahurnyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks :)", "Your welcome, you are doing a great job!", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1396?src=pr&el=h1) Report\n> Merging [#1396](https://codecov.io/gh/huggingface/transformers/pull/1396?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c3b32d44d0164aaa9b91405f48e53cf53a82b35?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1396/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1396?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1396 +/- ##\n=======================================\n Coverage 84.69% 84.69% \n=======================================\n Files 84 84 \n Lines 12596 12596 \n=======================================\n Hits 10668 10668 \n Misses 1928 1928\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1396?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1396?src=pr&el=footer). Last update [5c3b32d...6b92911](https://codecov.io/gh/huggingface/transformers/pull/1396?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,569
1,569
1,569
NONE
null
![image](https://user-images.githubusercontent.com/27808442/65991735-8343d980-e496-11e9-90b4-bfbd61d02de0.png)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1396/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1396/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1396", "html_url": "https://github.com/huggingface/transformers/pull/1396", "diff_url": "https://github.com/huggingface/transformers/pull/1396.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1396.patch", "merged_at": 1569956372000 }
https://api.github.com/repos/huggingface/transformers/issues/1395
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1395/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1395/comments
https://api.github.com/repos/huggingface/transformers/issues/1395/events
https://github.com/huggingface/transformers/issues/1395
500,978,921
MDU6SXNzdWU1MDA5Nzg5MjE=
1,395
Masking of special tokens in masked LM finetuning.
{ "login": "oadams", "id": 1115622, "node_id": "MDQ6VXNlcjExMTU2MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1115622?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oadams", "html_url": "https://github.com/oadams", "followers_url": "https://api.github.com/users/oadams/followers", "following_url": "https://api.github.com/users/oadams/following{/other_user}", "gists_url": "https://api.github.com/users/oadams/gists{/gist_id}", "starred_url": "https://api.github.com/users/oadams/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oadams/subscriptions", "organizations_url": "https://api.github.com/users/oadams/orgs", "repos_url": "https://api.github.com/users/oadams/repos", "events_url": "https://api.github.com/users/oadams/events{/privacy}", "received_events_url": "https://api.github.com/users/oadams/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The same issue here, using 4 RTX 2080Ti with Ubuntu 18.04.", "This issue exists as the `mask_tokens` function will sometimes replace `<s>` with a random word. Not sure whether `<s>` should be masked. A workaround would be adding a line \r\n```\r\nmasked_indices[:, 0] = 0 # tokenizer.bos_token_id\r\n```\r\nright below \r\n```\r\nmasked_indices = torch.bernoulli(torch.full(labels.shape, args.mlm_probability)).bool()\r\n```\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L111.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "So the [CLS] and [SEP] are being masked during the training or not?" ]
1,569
1,590
1,576
NONE
null
## 🐛 Bug roBERTa throws repeated warnings about the absence of special tokens in masked LM fine-tuning with `run_lm_finetuning.py`: ``` WARNING - transformers.modeling_roberta - A sequence with no special tokens has been passed to the RoBERTa model. This model requires special tokens in order to work. Please specify add_special_tokens=True in your encoding. ``` This made it look like there was a preprocessing problem, but it appears as though the random masking of tokens applies to the special tokens as well, both for BERT and roBERTa training with that script. It's not clear from the original papers if that's meant to happen, but I assumed it's not. The wording in section 3.3.1 of the BERT paper suggests they might not: They "mask 15% of all _wordpiece_ tokens at random". Their implementation would probably shed light but I just wanted to check with you, since masking of special tokens will affect the representation of [CLS]. Is this masking meant to happen? Note that BERT does not throw such a warning, but the masking of special tokens also applies to that model. Model I am using (Bert, XLNet....): BERT and roBERTa Language I am using the model on (English, Chinese....): English WikiText 2 data. The problem arise when using: * [x] the official example scripts: run_lm_finetuning.py * [ ] my own modified scripts: (give details) The tasks I am working on is: * [x] an official GLUE/SQUaD task: language model fine-tuning with run_lm_finetuning.py * [] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: Download the wikitext-2 data, and then run: ``` python run_lm_finetuning.py --output_dir=models/roberta_wikitext --model_type=roberta --model_name_or_path=roberta-base --do_train --train_data_file=/Users/oadams/corpora/wikitext-2/wiki.train.tokens --mlm ``` This is basically what's recommended [in the examples](https://huggingface.co/transformers/examples.html) ## Expected behavior Warning-free training, or a warning that's easy to interpret for the user. ## Environment * OS: MacOS and Ubuntu. * Python version: 3.7.4 * PyTorch version: 1.2.0 * PyTorch Transformers version (or branch): 2.0 * Using GPU ? No. * Distributed of parallel setup ? No * Any other relevant information:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1395/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1395/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1394
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1394/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1394/comments
https://api.github.com/repos/huggingface/transformers/issues/1394/events
https://github.com/huggingface/transformers/issues/1394
500,973,294
MDU6SXNzdWU1MDA5NzMyOTQ=
1,394
Change gpt2 language model loss function
{ "login": "alecalma", "id": 17485593, "node_id": "MDQ6VXNlcjE3NDg1NTkz", "avatar_url": "https://avatars.githubusercontent.com/u/17485593?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alecalma", "html_url": "https://github.com/alecalma", "followers_url": "https://api.github.com/users/alecalma/followers", "following_url": "https://api.github.com/users/alecalma/following{/other_user}", "gists_url": "https://api.github.com/users/alecalma/gists{/gist_id}", "starred_url": "https://api.github.com/users/alecalma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alecalma/subscriptions", "organizations_url": "https://api.github.com/users/alecalma/orgs", "repos_url": "https://api.github.com/users/alecalma/repos", "events_url": "https://api.github.com/users/alecalma/events{/privacy}", "received_events_url": "https://api.github.com/users/alecalma/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Afaik the \"default\" loss function that gets computed if you pass your labels to `GPT2LMHeadModel` is `torch.nn.CrossEntropyLoss`. If you want to use a different loss function, can't you just grab the logits from the model and apply your own?\r\n\r\nSource:\r\nhttps://github.com/huggingface/transformers/blob/391db836ab7ed2ca61c51a7cf1b135b6ab92be58/transformers/modeling_gpt2.py#L539", "Unfortunately if I print a string inside the forward function, and then I run the training script, I don't get anything printed, so it seems like the training script is not using that function at all.", "Hello! The `GPT2LMHeadModel` does have a way to compute its own cross-entropy loss, but only when the `labels` are specified -> you're providing the values like so:\r\n```\r\nmodel(inputs, labels=inputs)\r\n```\r\nand the model takes care of shifting the inputs to calculate a causal language modeling loss on them with cross-entropy.\r\n\r\nIf you wish to use your own loss function, don't specify the labels and the model will return a tuple containing the language modeling logits as the first value.", "Hi,\r\n\r\nthanks for your answer.\r\nCan you tell me why a print statement inside the forward fuction of the GPT2LMHeadModel doesn't print anything when I run the run_lm_finetuning script ?\r\n\r\nWhich is the forward function I need to change?\r\n\r\nThanks.", "Where have you put your print statement? Do you have `transformers` installed in your environment or is it relying on the cloned repository? You could try to add a breakpoint and debug it to see which function calls are made and how the loss is calculated.\r\n\r\nOnce again, if you wish to use your own loss function, don't specify the labels and the model will return a tuple containing the language modeling logits as the first value.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,576
1,576
NONE
null
Hi all, I want to include a new loss term for the gpt2 training loss. I am using the script run_lm_finetuning from the examples. This is my command: python examples/run_lm_finetuning.py --output_dir=output --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=$TRAIN_FILE --eval_data_file=$TEST_FILE --overwrite_output_dir --max_steps 50 but I really can't figure out which loss function is being used. If i print inside the GPT2LMHeadModel forward function, nothing happens. Could you please tell me which loss function should I change? Thank you a lot.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1394/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1394/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1393
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1393/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1393/comments
https://api.github.com/repos/huggingface/transformers/issues/1393/events
https://github.com/huggingface/transformers/issues/1393
500,868,377
MDU6SXNzdWU1MDA4NjgzNzc=
1,393
With GPT-2 is it possible to get previous word prediction?
{ "login": "hypnoai", "id": 52698628, "node_id": "MDQ6VXNlcjUyNjk4NjI4", "avatar_url": "https://avatars.githubusercontent.com/u/52698628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hypnoai", "html_url": "https://github.com/hypnoai", "followers_url": "https://api.github.com/users/hypnoai/followers", "following_url": "https://api.github.com/users/hypnoai/following{/other_user}", "gists_url": "https://api.github.com/users/hypnoai/gists{/gist_id}", "starred_url": "https://api.github.com/users/hypnoai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hypnoai/subscriptions", "organizations_url": "https://api.github.com/users/hypnoai/orgs", "repos_url": "https://api.github.com/users/hypnoai/repos", "events_url": "https://api.github.com/users/hypnoai/events{/privacy}", "received_events_url": "https://api.github.com/users/hypnoai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! There is one big difference between BERT and GPT-2, in that BERT is trained using masked language modeling, whereas GPT-2 is trained using causal language modeling.\r\n\r\nDuring pre-training, BERT learns to predict masked words given a bi-directional context. GPT-2, on the other hand, learns to predict a word given only its left context. This is why GPT-2 is very good at text generation (it only needs the left-hand side context), while BERT isn't.\r\n\r\nGiven this, GPT-2 won't be able to do previous word prediction, as it does not handle the right-hand side context.", "If you want to train your own GPT-2 model to predict previous words, you could feed in your entire training set in reverse word order. Then GPT-2 would learn to predict text backwards, and that model would then be able to tell you what word should come before a piece of text." ]
1,569
1,575
1,570
NONE
null
Feature/Question: With GPT-2 is it possible to get previous word prediction? Hi, I say this after seeing this https://towardsdatascience.com/deconstructing-bert-distilling-6-patterns-from-100-million-parameters-b49113672f77 And wondering how I could maybe write a method that would allow me to predict the previous word? (ideally for GPT2) Many thanks, Vince.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1393/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1393/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1392
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1392/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1392/comments
https://api.github.com/repos/huggingface/transformers/issues/1392/events
https://github.com/huggingface/transformers/issues/1392
500,789,373
MDU6SXNzdWU1MDA3ODkzNzM=
1,392
Bert's keyword argument 'output_all_encoded_layers' does not exist anymore?
{ "login": "fhamborg", "id": 18700166, "node_id": "MDQ6VXNlcjE4NzAwMTY2", "avatar_url": "https://avatars.githubusercontent.com/u/18700166?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fhamborg", "html_url": "https://github.com/fhamborg", "followers_url": "https://api.github.com/users/fhamborg/followers", "following_url": "https://api.github.com/users/fhamborg/following{/other_user}", "gists_url": "https://api.github.com/users/fhamborg/gists{/gist_id}", "starred_url": "https://api.github.com/users/fhamborg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fhamborg/subscriptions", "organizations_url": "https://api.github.com/users/fhamborg/orgs", "repos_url": "https://api.github.com/users/fhamborg/repos", "events_url": "https://api.github.com/users/fhamborg/events{/privacy}", "received_events_url": "https://api.github.com/users/fhamborg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! @thomwolf can correct me if I'm wrong, but I believe this keyword was changed to `output_hidden_states` in version 1.0.0.", "I can confirm what @LysandreJik suggests. The output of the embeddings is now also included as the first element. ", "Alright, thank you very much!" ]
1,569
1,570
1,570
NONE
null
## 📚 Migration Model I am using (Bert, XLNet....): Bert Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [X] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details) Details of the issue: I was using the keyword argument `output_all_encoded_layers` before. Now the code throws an error, since it seems that this argument was removed. How can I still set `output_all_encoded_layers` to either True or False, e.g.: ``` context, _ = self.bert(context, output_all_encoded_layers=False) ``` ## Checklist - [X] I have read the migration guide in the readme. - [X] I checked if a related official extension example runs on my machine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1392/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1392/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1391
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1391/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1391/comments
https://api.github.com/repos/huggingface/transformers/issues/1391/events
https://github.com/huggingface/transformers/issues/1391
500,750,856
MDU6SXNzdWU1MDA3NTA4NTY=
1,391
Built-in pretrained models location
{ "login": "kaunghtetsan275", "id": 21242101, "node_id": "MDQ6VXNlcjIxMjQyMTAx", "avatar_url": "https://avatars.githubusercontent.com/u/21242101?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kaunghtetsan275", "html_url": "https://github.com/kaunghtetsan275", "followers_url": "https://api.github.com/users/kaunghtetsan275/followers", "following_url": "https://api.github.com/users/kaunghtetsan275/following{/other_user}", "gists_url": "https://api.github.com/users/kaunghtetsan275/gists{/gist_id}", "starred_url": "https://api.github.com/users/kaunghtetsan275/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kaunghtetsan275/subscriptions", "organizations_url": "https://api.github.com/users/kaunghtetsan275/orgs", "repos_url": "https://api.github.com/users/kaunghtetsan275/repos", "events_url": "https://api.github.com/users/kaunghtetsan275/events{/privacy}", "received_events_url": "https://api.github.com/users/kaunghtetsan275/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I've found it. It's a binary file. It's in ~/.cache/torch/transformers" ]
1,569
1,569
1,569
NONE
null
My laptop was run out of disk space while loading built-in pre-trained model. Now BertForTokenClassification.from_pretrained("bert-base-cased") gives me RuntimeError: unexpected EOF, expected 5896093 more bytes. The file might be corrupted. Where can I find that incomplete model and delete it so I can download the model from the start again?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1391/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1390
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1390/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1390/comments
https://api.github.com/repos/huggingface/transformers/issues/1390/events
https://github.com/huggingface/transformers/issues/1390
500,733,997
MDU6SXNzdWU1MDA3MzM5OTc=
1,390
❓ How to use cached hidden states in run_generation ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hi! Yes, you understood the gist of it. The self-attention related to already computed tokens is not computed again.\r\n\r\nIn order to use the past, you would get the past from the model pass (I'm using GPT-2 in this example, XLNet would have `mems` instead of `past`):\r\n\r\n```py\r\nlogits, past = model(**inputs)\r\n```\r\n\r\nand you would then use the past on the following pass as follow:\r\n\r\n```py\r\nlogits, past = model(**inputs, past=past)\r\n```", "Thank you for your fast response @LysandreJik !\r\n\r\nNow it's very clear, but I have one more question :\r\n\r\nFor XLNet and TransfoXL, we need to use memory in order to not recompute previously generated token. This is ok when not using the memory for something else.\r\n\r\n**But what if the memory is already used for something else ?**\r\nLike we have a memory of 256 for XLNet, representing previous segments (or whatever), if we update the memory everytime a new token is generated, it means we are loosing part of the memory (and after generating 256 tokens, we will not be able to see anymore the memory of previous segment !).\r\n\r\n**Is there a way around this problem in the current API ?**", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
CONTRIBUTOR
null
## ❓ Questions & Help https://github.com/huggingface/transformers/blob/5c3b32d44d0164aaa9b91405f48e53cf53a82b35/examples/run_generation.py#L124 This line states that we could use `cached hidden states`. Correct me if I'm wrong : * **Without using `cached hidden states`** : every step, the next token is predicted, but also all previous tokens are re-computed (which is useless because we already predicted it !) * **Using `cached hidden states`** : every step, the next token is predicted, but previous tokens are not re-computed, because we are using their cached states. So using cached hidden states would greatly increase the inference speed, specially for long generations. --- My question is : **How to do that ?** From the documentation I understand how to get the `cached hidden states` from the forward pass of the model, but I don't understand how to use it at the following step ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1390/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1389
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1389/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1389/comments
https://api.github.com/repos/huggingface/transformers/issues/1389/events
https://github.com/huggingface/transformers/pull/1389
500,721,745
MDExOlB1bGxSZXF1ZXN0MzIzMTMxNTMx
1,389
Fix compatibility issue with PyTorch 1.2
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "organizations_url": "https://api.github.com/users/astariul/orgs", "repos_url": "https://api.github.com/users/astariul/repos", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "received_events_url": "https://api.github.com/users/astariul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\nWe can accept this since it breaks lower versions of PyTorch.\r\nYou can just feed your mask as a FloatTensor (as indicated in the docstrings I think)." ]
1,569
1,571
1,571
CONTRIBUTOR
null
Using PyTorch 1.2.0 give an error when running XLNet. We should use the new way to reverse mask : instead of using `1 - mask`, we should use `~mask`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1389/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1389", "html_url": "https://github.com/huggingface/transformers/pull/1389", "diff_url": "https://github.com/huggingface/transformers/pull/1389.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1389.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1388
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1388/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1388/comments
https://api.github.com/repos/huggingface/transformers/issues/1388/events
https://github.com/huggingface/transformers/pull/1388
500,659,132
MDExOlB1bGxSZXF1ZXN0MzIzMDgxMTI5
1,388
Add Roberta SQuAD model
{ "login": "vlarine", "id": 10670098, "node_id": "MDQ6VXNlcjEwNjcwMDk4", "avatar_url": "https://avatars.githubusercontent.com/u/10670098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vlarine", "html_url": "https://github.com/vlarine", "followers_url": "https://api.github.com/users/vlarine/followers", "following_url": "https://api.github.com/users/vlarine/following{/other_user}", "gists_url": "https://api.github.com/users/vlarine/gists{/gist_id}", "starred_url": "https://api.github.com/users/vlarine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vlarine/subscriptions", "organizations_url": "https://api.github.com/users/vlarine/orgs", "repos_url": "https://api.github.com/users/vlarine/repos", "events_url": "https://api.github.com/users/vlarine/events{/privacy}", "received_events_url": "https://api.github.com/users/vlarine/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1388?src=pr&el=h1) Report\n> Merging [#1388](https://codecov.io/gh/huggingface/transformers/pull/1388?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c3b32d44d0164aaa9b91405f48e53cf53a82b35?src=pr&el=desc) will **decrease** coverage by `0.16%`.\n> The diff coverage is `23.52%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1388/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1388?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1388 +/- ##\n==========================================\n- Coverage 84.69% 84.52% -0.17% \n==========================================\n Files 84 84 \n Lines 12596 12627 +31 \n==========================================\n+ Hits 10668 10673 +5 \n- Misses 1928 1954 +26\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1388?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1388/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `61.17% <23.52%> (-10.05%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1388?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1388?src=pr&el=footer). Last update [5c3b32d...1ba42ca](https://codecov.io/gh/huggingface/transformers/pull/1388?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I am also working on reproducing the results reported in the Roberta paper and found two issues in this PR. One issue is explained in the comment above. The other issue is that it is required to insert two sep_tokens between question tokens and answer tokens for Roberta as implemented [here](https://github.com/huggingface/transformers/blob/master/transformers/tokenization_roberta.py#L101). Therefore, `max_tokens_for_doc` should be `max_seq_length - len(query_tokens) - 4`.", "In the Fairseq realisation of RoBERTa on Commonsense QA:\r\nhttps://github.com/pytorch/fairseq/tree/master/examples/roberta/commonsense_qa\r\nhttps://github.com/pytorch/fairseq/blob/master/examples/roberta/commonsense_qa/commonsense_qa_task.py\r\n\r\nThere is the only one sep_token between question and answer:\r\n`<s> Q: Where would I not want a fox? </s> A: hen house </s>`", "> In the Fairseq realisation of RoBERTa on Commonsense QA:\r\n> There is the only one sep_token between question and answer:\r\n> `<s> Q: Where would I not want a fox? </s> A: hen house </s>`\r\n\r\nThank you very much for your prompt reply. I did not know this. It seems to be appropriate to use single `sep_token` here because Commonsense QA is somewhat more similar to SQuAD than other tasks (e.g., GLUE).", "Thanks for this @vlarine! (and @ikuyamada)\r\n\r\nWould you agree to share the weights on our S3 as well?\r\n\r\nAlso, did you try with the same separators encoding scheme as the other RoBERTa models? \r\n`<s> Q: Where would I not want a fox? </s> </s> A: hen house </s>` – did the results differ significantly?", "No, I have not tried. But why there are two `</s>` tokens? 
I think more natural way is:\r\n`<s> Q: Where would I not want a fox? </s> <s> A: hen house </s>`", "@vlarine See this docstring in `fairseq`: https://github.com/pytorch/fairseq/pull/969/files\r\n\r\nDo you think you could try with this sep encoding scheme? Otherwise I'll do it in the next couple of days.\r\n\r\nI would like to merge your PR soon. Any way you can give me write access to your fork, cf. https://help.github.com/en/articles/committing-changes-to-a-pull-request-branch-created-from-a-fork – so that i can add commits on top of your PR?\r\n\r\n", "@vlarine @julien-c thanks for the amazing work! I can try it on SQuAD 2.0 and let you know if anything pops up there", "Nice work. I also tried to add roberta into run_squad.py several days ago. Hope that my implementation would be useful. [run_squad.py with roberta](https://github.com/erenup/pytorch-transformers/pull/4) ", "Folding this PR into #1386, which is close to being ready to being merged.\r\n\r\n@vlarine @ikuyamada @pminervini @erenup Can you guys please check it out?", "Closing in favor of #1386." ]
1,569
1,576
1,576
NONE
null
There is the realisation of a RoBERTa SQuAD finetuning. On 2x1080Ti on RoBERTa Base it gives: python3 run_squad.py \ --model_type roberta \ --model_name_or_path roberta-base \ --do_train \ --do_eval \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 8 \ --per_gpu_eval_batch_size 8 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 2000 \ --overwrite_output_dir \ --verbose_logging \ --output_dir /tmp/debug_squad/ Results: {'exact': 85.80889309366131, 'f1': 92.09291402361669, 'total': 10570, 'HasAns_exact': 85.80889309366131, 'HasAns_f1': 92.09291402361669, 'HasAns_total': 10570} On RoBERTa Large: python3 run_squad.py \ --model_type roberta \ --model_name_or_path roberta-large \ --do_train \ --do_eval \ --train_file $SQUAD_DIR/train-v1.1.json \ --predict_file $SQUAD_DIR/dev-v1.1.json \ --per_gpu_train_batch_size 2 \ --per_gpu_eval_batch_size 2 \ --learning_rate 3e-5 \ --num_train_epochs 2.0 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 2000 \ --overwrite_output_dir \ --verbose_logging \ --output_dir /tmp/debug_squad/ Results: {'exact': 87.04824976348155, 'f1': 93.14253401654709, 'total': 10570, 'HasAns_exact': 87.04824976348155, 'HasAns_f1': 93.14253401654709, 'HasAns_total': 10570}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1388/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1388/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1388", "html_url": "https://github.com/huggingface/transformers/pull/1388", "diff_url": "https://github.com/huggingface/transformers/pull/1388.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1388.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1387
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1387/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1387/comments
https://api.github.com/repos/huggingface/transformers/issues/1387/events
https://github.com/huggingface/transformers/issues/1387
500,574,184
MDU6SXNzdWU1MDA1NzQxODQ=
1,387
TFTransfoXLLMHeadModel doesn't accept lm_labels parameter
{ "login": "tomweingarten", "id": 3465707, "node_id": "MDQ6VXNlcjM0NjU3MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3465707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomweingarten", "html_url": "https://github.com/tomweingarten", "followers_url": "https://api.github.com/users/tomweingarten/followers", "following_url": "https://api.github.com/users/tomweingarten/following{/other_user}", "gists_url": "https://api.github.com/users/tomweingarten/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomweingarten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomweingarten/subscriptions", "organizations_url": "https://api.github.com/users/tomweingarten/orgs", "repos_url": "https://api.github.com/users/tomweingarten/repos", "events_url": "https://api.github.com/users/tomweingarten/events{/privacy}", "received_events_url": "https://api.github.com/users/tomweingarten/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I see now that I missed something. The documentation uses the parameter 'lm_labels' but the correct parameter is just 'labels'. The documentation says that when this parameter is present, prediction logits will not be output, but this is incorrect. They are output regardless of the presence of 'labels'.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): TFTransfoXLLMHeadModel Language I am using the model on (English, Chinese....): Other The problem arise when using: * [ ] the official example scripts: (give details) * [ X ] my own modified scripts: I have a script that trains a new TransformerXL The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X ] my own task or dataset: The dataset is a language modeling dataset of novel symbolic data. ## To Reproduce Steps to reproduce the behavior: Call the TFTransfoXLLMHeadModel as such: mems = transformer(data, lm_labels = lm_labels, mems = mems, training=True) File "/home/tom/.local/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__ outputs = self.call(cast_inputs, *args, **kwargs) TypeError: call() got an unexpected keyword argument 'lm_labels' If I instead include lm_labels in a dict, it is simply ignored. ## Expected behavior The model documentation says that including lm_labels is recommended for training because it allows the adaptive softmax to be calculated more efficiently ## Environment * OS: Ubuntu 19 * PyTorch Transformers version (or branch): 2.0.0 * Using GPU Yes
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1387/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1386
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1386/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1386/comments
https://api.github.com/repos/huggingface/transformers/issues/1386/events
https://github.com/huggingface/transformers/pull/1386
500,558,576
MDExOlB1bGxSZXF1ZXN0MzIzMDAwOTcy
1,386
Add RoBERTa question answering & Update SQuAD runner to support RoBERTa
{ "login": "stevezheng23", "id": 7437363, "node_id": "MDQ6VXNlcjc0MzczNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/7437363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevezheng23", "html_url": "https://github.com/stevezheng23", "followers_url": "https://api.github.com/users/stevezheng23/followers", "following_url": "https://api.github.com/users/stevezheng23/following{/other_user}", "gists_url": "https://api.github.com/users/stevezheng23/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevezheng23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevezheng23/subscriptions", "organizations_url": "https://api.github.com/users/stevezheng23/orgs", "repos_url": "https://api.github.com/users/stevezheng23/repos", "events_url": "https://api.github.com/users/stevezheng23/events{/privacy}", "received_events_url": "https://api.github.com/users/stevezheng23/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@thomwolf / @LysandreJik / @VictorSanh / @julien-c Could you help review this PR? Thanks!", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1386?src=pr&el=h1) Report\n> Merging [#1386](https://codecov.io/gh/huggingface/transformers/pull/1386?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/be916cb3fb4579e278ceeaec11a6524662797d7f?src=pr&el=desc) will **decrease** coverage by `0.15%`.\n> The diff coverage is `21.21%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1386/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1386?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1386 +/- ##\n==========================================\n- Coverage 86.16% 86.01% -0.16% \n==========================================\n Files 91 91 \n Lines 13593 13626 +33 \n==========================================\n+ Hits 11713 11720 +7 \n- Misses 1880 1906 +26\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1386?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1386/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `69.18% <21.21%> (-11.39%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1386?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1386?src=pr&el=footer). Last update [be916cb...ee83f98](https://codecov.io/gh/huggingface/transformers/pull/1386?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Hi @thomwolf / @LysandreJik / @VictorSanh / @julien-c \r\n\r\nI have also run experiments using RoBERT large setting in original paper and reproduced their results,\r\n- **SQuAD v1.1**\r\n{\r\n \"exact\": 88.25922421948913,\r\n \"f1\": 94.43790487416292,\r\n \"total\": 10570,\r\n \"HasAns_exact\": 88.25922421948913,\r\n \"HasAns_f1\": 94.43790487416292,\r\n \"HasAns_total\": 10570\r\n}\r\n- **SQuAD v2.0**\r\n{\r\n \"exact\": 86.05238777057188,\r\n \"f1\": 88.99602665148535,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 83.38394062078272,\r\n \"HasAns_f1\": 89.27965999208608,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 88.71320437342304,\r\n \"NoAns_f1\": 88.71320437342304,\r\n \"NoAns_total\": 5945,\r\n \"best_exact\": 86.5914259243662,\r\n \"best_exact_thresh\": -2.146007537841797,\r\n \"best_f1\": 89.43104312625539,\r\n \"best_f1_thresh\": -2.146007537841797\r\n}", "Awesome @stevezheng23. Can I push on top of your PR to change a few things before we merge?\r\n\r\n(We refactored the tokenizer to handle the encoding of sequence pairs, including special tokens. So we don't need to do it inside each example script anymore)", "@julien-c sure, please add changes in this PR if needed 👍 ", "@julien-c I've also upload the roberta large model finetuned on squad v2.0 data together with its prediction & evaluation results to public cloud storage https://storage.googleapis.com/mrc_data/squad/roberta.large.squad.v2.zip", "Can you check my latest commit @stevezheng23? 
Main change is that I removed the `add_prefix_space` for RoBERTa (which the RoBERTa authors don't use, as far as I know) which doesn't seem to make a significant difference.\r\n\r\n@thomwolf @LysandreJik this is ready for review.", "Everything looks good.\r\n\r\nAs for the `add_prefix_space` flag,\r\n- For `add_prefix_space=True`, I have run the experiment, the F1 score is around 89.4\r\n- For `add_prefix_space=False`, I have also run the experiment, the F1 score is around 88.2", "Great! Good job on reimplementing the cross-entropy loss when start/end positions are given.", "Look good to me.\r\nWe'll probably be able to simplify `utils_squad` a lot soon but that will be fine for now.\r\nDo you want to add your experimental results with RoBERTa in `examples/readme`, with a recommendation to use `add_prefix_space=True` (fyi it's the opposite for NER)?", "@julien-c do you want to add the roberta model finetuned on squad by @stevezheng23 in our library?", "Yep @thomwolf ", "@thomwolf I have updated README file as you suggested, you can merge this PR when you think it's good to go. BTW, it seems CI build is broken", "Ok thanks, I'll let @julien-c finish to handle this PR when he's back.", "> @julien-c I've also upload the roberta large model finetuned on squad v2.0 data together with its prediction & evaluation results to public cloud storage https://storage.googleapis.com/mrc_data/squad/roberta.large.squad.v2.zip\r\n\r\nHey @stevezheng23 !\r\n\r\nI just tried to reproduce your model with slightly different hyperparameters (`batch_size=2` and `gradient_accumulation=6` instead of `batch_size=12`), and I am currently getting worse results.\r\n\r\nResults with your model:\r\n\r\n```\r\n{\r\n \"exact\": 86.05238777057188,\r\n \"f1\": 88.99602665148535,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 83.38394062078272,\r\n \"HasAns_f1\": 89.27965999208608,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 88.71320437342304,\r\n \"NoAns_f1\": 88.71320437342304,\r\n \"NoAns_total\": 5945\r\n}\r\n```\r\n\r\nResults with the model I trained, on the best checkpoint I was able to obtain after training for 8 epochs:\r\n\r\n```\r\n{\r\n \"exact\": 82.85184873241809,\r\n \"f1\": 85.85477834702593,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 77.80026990553306,\r\n \"HasAns_f1\": 83.8147407750069,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 87.88898233809924,\r\n \"NoAns_f1\": 87.88898233809924,\r\n \"NoAns_total\": 5945\r\n}\r\n```\r\n\r\nYour hyperparameters:\r\n\r\n```\r\nNamespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cuda', index=0), do_eval=True, do_lower_case=False, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=1.5e-05, local_rank=0, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=512, max_steps=-1, model_name_or_path='roberta-large', model_type='roberta', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='output/squad/v2.0/roberta.large', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=12, per_gpu_train_batch_size=12, predict_file='data/squad/v2.0/dev-v2.0.json', save_steps=500, seed=42, server_ip='', server_port='', tokenizer_name='', train_batch_size=12, train_file='data/squad/v2.0/train-v2.0.json', verbose_logging=False, version_2_with_negative=True, warmup_steps=500, weight_decay=0.01)\r\n```\r\n\r\nMy 
hyperparameters:\r\n\r\n```\r\nNamespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cuda'), do_eval=True, do_lower_case=False, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=6, learning_rate=1.5e-05, local_rank=-1, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=512, max_steps=-1, model_name_or_path='roberta-large', model_type='roberta', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=8.0, output_dir='../roberta.large.squad2.v1p', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=2, per_gpu_train_batch_size=2, predict_file='/home/testing/drive/invariance//workspace/data/squad/dev-v2.0.json', save_steps=500, seed=42, server_ip='', server_port='', tokenizer_name='', train_batch_size=2, train_file='/home/testing/drive/invariance//workspace/data/squad/train-v2.0.json', verbose_logging=False, version_2_with_negative=True, warmup_steps=500, weight_decay=0.01)\r\n```\r\n\r\nDo you have any ideas why this is happening ?\r\n\r\nOne thing that may be happening is that, when using `max_grad_norm` and `gradient_accumulation=n`, the clipping of the gradient norm seems to be done `n` times rather than just 1, but I need to look deeper into this.\r\n\r\nI'd like to see what happens without the need of gradient accumulation - anyone with a spare TPU to share? 😬", "> Ok thanks, I'll let @julien-c finish to handle this PR when he's back.\r\n\r\nthanks, @thomwolf ", "@pminervini I haven't tried out using `max_grad_norm` and `gradient_accumulation=n` combination before. One thing you could pay attention to is that the checkpoint is trained with `add_prefix_space=True` for RoBERTa tokenizer.", "@stevezheng23 if you look at it, the `max_grad_norm` is performed on all the gradients in the accumulation - I think it should be done just before the `optimizer.step()` call.\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_squad.py#L163\r\n\r\n@thomwolf what do you think ? should I go and do a PR ?", "@LysandreJik just significantly rewrote our SQuAD integration in https://github.com/huggingface/transformers/pull/1984 so we were holding out on merging this.\r\n\r\nDoes anyone here want to revisit this PR with the changes from #1984? Otherwise, we'll do it, time permitting.", "cool, I'm willing to revisit it. 
I will take a look at your changes and tansformers' recent updates today (have been away from the Master branch for some time😊).", "> > @julien-c I've also upload the roberta large model finetuned on squad v2.0 data together with its prediction & evaluation results to public cloud storage https://storage.googleapis.com/mrc_data/squad/roberta.large.squad.v2.zip\r\n> \r\n> Hey @stevezheng23 !\r\n> \r\n> I just tried to reproduce your model with slightly different hyperparameters (`batch_size=2` and `gradient_accumulation=6` instead of `batch_size=12`), and I am currently getting worse results.\r\n> Your hyperparameters:\r\n> \r\n> ```\r\n> Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cuda', index=0), do_eval=True, do_lower_case=False, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=1.5e-05, local_rank=0, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=512, max_steps=-1, model_name_or_path='roberta-large', model_type='roberta', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=2.0, output_dir='output/squad/v2.0/roberta.large', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=12, per_gpu_train_batch_size=12, predict_file='data/squad/v2.0/dev-v2.0.json', save_steps=500, seed=42, server_ip='', server_port='', tokenizer_name='', train_batch_size=12, train_file='data/squad/v2.0/train-v2.0.json', verbose_logging=False, version_2_with_negative=True, warmup_steps=500, weight_decay=0.01)\r\n> ```\r\n> \r\n> My hyperparameters:\r\n> \r\n> ```\r\n> Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', device=device(type='cuda'), do_eval=True, do_lower_case=False, do_train=True, doc_stride=128, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=6, learning_rate=1.5e-05, local_rank=-1, logging_steps=50, max_answer_length=30, max_grad_norm=1.0, max_query_length=64, max_seq_length=512, max_steps=-1, model_name_or_path='roberta-large', model_type='roberta', n_best_size=20, n_gpu=1, no_cuda=False, null_score_diff_threshold=0.0, num_train_epochs=8.0, output_dir='../roberta.large.squad2.v1p', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=2, per_gpu_train_batch_size=2, predict_file='/home/testing/drive/invariance//workspace/data/squad/dev-v2.0.json', save_steps=500, seed=42, server_ip='', server_port='', tokenizer_name='', train_batch_size=2, train_file='/home/testing/drive/invariance//workspace/data/squad/train-v2.0.json', verbose_logging=False, version_2_with_negative=True, warmup_steps=500, weight_decay=0.01)\r\n> ```\r\n> \r\n> Do you have any ideas why this is happening ?\r\n\r\n\r\nYou're using num_train_epochs=8 instead of 2, which makes the learning rate decay more slowly. Maybe that is causing the difference?", "Regarding `max_grad_norm` - RoBERTa doesn't use gradient clipping, so the `max_grad_norm` changes aren't strictly necessary here\r\n\r\nRoBERTa also uses `adam_epsilon=1e-06` as I understand, but I'm not sure if it would change the results here", "Hi @stevezheng23 @julien-c @thomwolf @ethanjperez , I updated the run squad with roberta in #2173 \r\n based on #1984 and #1386. Could you please help to review it? Thank you very much.", "Closed in favor of #2173 which should be merged soon. \r\n\r\n" ]
1,569
1,576
1,576
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1386/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1386", "html_url": "https://github.com/huggingface/transformers/pull/1386", "diff_url": "https://github.com/huggingface/transformers/pull/1386.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1386.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1385
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1385/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1385/comments
https://api.github.com/repos/huggingface/transformers/issues/1385/events
https://github.com/huggingface/transformers/pull/1385
500,497,651
MDExOlB1bGxSZXF1ZXN0MzIyOTUzNDYx
1,385
[multiple-choice] Simplify and use tokenizer.encode_plus
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great addition. I feel like using enums would be especially helpful for the truncating strategy, indeed.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1385?src=pr&el=h1) Report\n> Merging [#1385](https://codecov.io/gh/huggingface/transformers/pull/1385?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5c3b32d44d0164aaa9b91405f48e53cf53a82b35?src=pr&el=desc) will **decrease** coverage by `0.07%`.\n> The diff coverage is `41.66%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1385/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1385?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1385 +/- ##\n==========================================\n- Coverage 84.69% 84.61% -0.08% \n==========================================\n Files 84 84 \n Lines 12596 12610 +14 \n==========================================\n+ Hits 10668 10670 +2 \n- Misses 1928 1940 +12\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1385?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1385/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `87.73% <41.66%> (-2.46%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1385?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1385?src=pr&el=footer). Last update [5c3b32d...9e136ff](https://codecov.io/gh/huggingface/transformers/pull/1385?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I have reviewed this PR and It looks good to me. Thank you! @julien-c . I added two lines of comments above. Hope they are useful. Thank you. ", "Merged in, and superseded by, #1384" ]
1,569
1,574
1,570
MEMBER
null
Our base tokenizer `PreTrainedTokenizer` now has the ability to encode a sentence pair up to a `max_length`, adding special tokens for each model and returning a mask of `token_type_ids`. In this PR we upgrade `run_multiple_choice` by adopting this factorized tokenizer API. To ensure the results are strictly the same as before, we implement a new `TruncatingStrategy` (ideally this could be an enum). @erenup as you spent a lot of time on this script, would you be able to review this PR? Result of eval with parameters from [examples/readme](https://github.com/huggingface/transformers/blob/julien_multiple-choice/examples/README.md#multiple-choice): ``` eval_acc = 0.8352494251724483 eval_loss = 0.42866929549320487 ```
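To make the upgrade above more concrete, here is a minimal sketch (not the example script itself) of encoding one (context, ending) candidate as a sentence pair with `encode_plus`; the exact dictionary keys returned can vary slightly between library versions, so treat them as assumptions.

```python
# Sketch: encoding a multiple-choice candidate as a sentence pair.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

context = "The chef prepared the meal"
ending = "and served it to the guests."

encoded = tokenizer.encode_plus(
    context,
    ending,
    add_special_tokens=True,  # adds [CLS]/[SEP] for BERT-like models
    max_length=32,            # pairs longer than this get truncated
)

input_ids = encoded["input_ids"]
token_type_ids = encoded["token_type_ids"]  # 0 for first segment, 1 for second (key name may differ by version)
print(len(input_ids), token_type_ids)
```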
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1385/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1385", "html_url": "https://github.com/huggingface/transformers/pull/1385", "diff_url": "https://github.com/huggingface/transformers/pull/1385.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1385.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1384
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1384/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1384/comments
https://api.github.com/repos/huggingface/transformers/issues/1384/events
https://github.com/huggingface/transformers/pull/1384
500,451,678
MDExOlB1bGxSZXF1ZXN0MzIyOTIwMTE1
1,384
Quality of life enhancements in encoding + patch MLM masking
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think we should drop the `always_truncate` param, and just set it to `True` iff `max_length is not None`", "Other than that I like it.", "As seen with @julien-c , `always_truncate` really should be enabled by default when a `max_length` is specified." ]
1,569
1,578
1,570
MEMBER
null
This PR aims to add quality of life features to the encoding mechanism and patches an issue with the masked language modeling masking function. 1 - ~It introduces an `always_truncate` argument to the `encode` method.~ The `always_truncate` argument is now used as default, with no option to set it to `False` when a `max_length` is specified. Currently, if a `max_length` is specified to the `encode` method with a sequence pair, with both sequences being longer than the max length, then the sequence pair won't be truncated. This may then result in a sequence longer than the specified max length, which may crash the preprocessing mechanism (see current `run_glue.py` with the QNLI task). This argument may be further improved by truncating according to the pair of sequences length ratio. 2 - It adds a new return to the `encode_plus` return dictionary: `sequence_ids`. This is a list of numbers corresponding to the position of special/sequence ids. As an example: ```py sequence = "This is a sequence" input_ids_no_special = tok.encode(sequence) # [1188, 1110, 170, 4954] input_ids = tok.encode(sequence, add_special_tokens=True) # [101, 1188, 1110, 170, 4954, 102] # Special tokens ─────────────────────────────────────────────┴───────────────────────────┘ ``` The new method offers several choices: single sequence (with or without special tokens), sequence pairs, and already existing special tokens: ```py tok.get_sequence_ids(input_ids_no_special) # [0, 1, 1, 1, 1, 0] tok.get_sequence_ids(input_ids, special_tokens_present=True) # [0, 1, 1, 1, 1, 0] ``` This offers several quality of life changes: 1 - The users are now aware of the location of the encoded sequences in their input ids: they can have custom truncating methods while leveraging model agnostic encoding 2 - Being aware of the location of special tokens is essential in the case of masked language modeling: we do not want to mask special tokens. An example of this is shown in the modified `run_lm_finetuning.py` script. Considering sequence ids, the naming may not be optimal, therefore I'm especially open to propositions @thomwolf. Furthermore, I'm not sure it is necessary to consider the cases where no special tokens are currently in the sequence.
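As a rough illustration of point 2, the sketch below uses a sequence/special-token flag list of the kind the proposed `get_sequence_ids` would return (1 for sequence tokens, 0 for special tokens — names taken from the PR description above, so treat them as assumptions) to keep masked-LM masking away from special tokens; the `-100` ignore index is also an assumption.

```python
import torch

def mask_tokens(input_ids, sequence_ids, mask_token_id, mlm_probability=0.15):
    """input_ids: 1-D LongTensor; sequence_ids: list of 0/1 flags (0 = special token)."""
    input_ids = input_ids.clone()
    labels = input_ids.clone()

    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens = ~torch.tensor(sequence_ids, dtype=torch.bool)
    probability_matrix.masked_fill_(special_tokens, value=0.0)  # never mask special tokens

    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # assumption: cross-entropy ignore_index of -100
    input_ids[masked_indices] = mask_token_id
    return input_ids, labels

ids = torch.tensor([101, 7592, 2088, 102])  # hypothetical [CLS] hello world [SEP]
masked_ids, labels = mask_tokens(ids, [0, 1, 1, 0], mask_token_id=103)
print(masked_ids, labels)
```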
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1384/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1384", "html_url": "https://github.com/huggingface/transformers/pull/1384", "diff_url": "https://github.com/huggingface/transformers/pull/1384.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1384.patch", "merged_at": 1570634305000 }
https://api.github.com/repos/huggingface/transformers/issues/1383
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1383/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1383/comments
https://api.github.com/repos/huggingface/transformers/issues/1383/events
https://github.com/huggingface/transformers/pull/1383
500,442,605
MDExOlB1bGxSZXF1ZXN0MzIyOTEyOTk5
1,383
Adding CTRL
{ "login": "keskarnitish", "id": 5945552, "node_id": "MDQ6VXNlcjU5NDU1NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5945552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keskarnitish", "html_url": "https://github.com/keskarnitish", "followers_url": "https://api.github.com/users/keskarnitish/followers", "following_url": "https://api.github.com/users/keskarnitish/following{/other_user}", "gists_url": "https://api.github.com/users/keskarnitish/gists{/gist_id}", "starred_url": "https://api.github.com/users/keskarnitish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keskarnitish/subscriptions", "organizations_url": "https://api.github.com/users/keskarnitish/orgs", "repos_url": "https://api.github.com/users/keskarnitish/repos", "events_url": "https://api.github.com/users/keskarnitish/events{/privacy}", "received_events_url": "https://api.github.com/users/keskarnitish/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1383?src=pr&el=h1) Report\n> Merging [#1383](https://codecov.io/gh/huggingface/transformers/pull/1383?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c5079952f5f10eeac4cb6801b4fd1f36b0eff73?src=pr&el=desc) will **increase** coverage by `1.63%`.\n> The diff coverage is `92.38%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1383/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1383?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1383 +/- ##\n==========================================\n+ Coverage 83.79% 85.42% +1.63% \n==========================================\n Files 84 91 +7 \n Lines 12587 13464 +877 \n==========================================\n+ Hits 10547 11502 +955 \n+ Misses 2040 1962 -78\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1383?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `80.57% <ø> (+15.1%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2dwdDJfdGVzdC5weQ==) | `94.73% <0%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95.38% <100%> (+7.88%)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.75% <100%> (+1.35%)` | :arrow_up: |\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `74.17% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2F1dG8ucHk=) | `51.85% <20%> (-2.1%)` | :arrow_down: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `58.82% <33.33%> (-2.47%)` | :arrow_down: |\n| [transformers/tokenization\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9hdXRvLnB5) | `67.64% <33.33%> (-3.33%)` | :arrow_down: |\n| [transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9jdHJsLnB5) | `83.6% <83.6%> (ø)` | |\n| [transformers/configuration\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fY3RybC5weQ==) | `88.88% <88.88%> (ø)` | |\n| ... 
and [18 more](https://codecov.io/gh/huggingface/transformers/pull/1383/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1383?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1383?src=pr&el=footer). Last update [1c50799...d9e60f4](https://codecov.io/gh/huggingface/transformers/pull/1383?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Ok for merge", "Thanks for adding this! I'm currently doing some experiments with the CTRL model, and I've a question about the tokenization:\r\n\r\n```bash\r\ntokenizer.tokenize(\"Munich and Berlin are nice cities.\") \r\nOut[6]: ['m@@', 'unic@@', 'h', 'and', 'ber@@', 'lin', 'are', 'nice', 'cities', '.']\r\n```\r\n\r\nDo you have any idea, why the output returns lowercased tokens only - `Berlin` and `Munich` do both appear in the vocab file (cased, and the splitting of `Munich` looks really weird 😅).", "Yes, we are aware of the issue.\r\n\r\nWe are fixing this problem in #1480." ]
1,569
1,570
1,570
CONTRIBUTOR
null
EDIT 10/04 Almost complete (tests pass / generation makes sense). Please comment with issues if you find them. **Incomplete - Adding to facilitate collaboration** This PR would add functionality to perform inference on CTRL (https://github.com/salesforce/ctrl) in the `🤗/transformers` repo. Commits will be squashed later before merging.
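For readers who want to try the model once this lands, a rough greedy-decoding sketch is below; the class names match what was eventually merged, but treat the exact call signatures as assumptions rather than the PR's own code.

```python
import torch
from transformers import CTRLTokenizer, CTRLLMHeadModel

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")
model.eval()

# CTRL expects a control code (e.g. "Books", "Links") at the start of the prompt.
input_ids = torch.tensor([tokenizer.encode("Books In a quiet village")])

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids)[0]                     # (batch, seq_len, vocab)
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0].tolist()))
```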
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1383/reactions", "total_count": 13, "+1": 0, "-1": 0, "laugh": 0, "hooray": 6, "confused": 0, "heart": 7, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1383/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1383", "html_url": "https://github.com/huggingface/transformers/pull/1383", "diff_url": "https://github.com/huggingface/transformers/pull/1383.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1383.patch", "merged_at": 1570635066000 }
https://api.github.com/repos/huggingface/transformers/issues/1382
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1382/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1382/comments
https://api.github.com/repos/huggingface/transformers/issues/1382/events
https://github.com/huggingface/transformers/issues/1382
500,308,414
MDU6SXNzdWU1MDAzMDg0MTQ=
1,382
Issue with `decode` in the presence of special tokens
{ "login": "harkous", "id": 5602332, "node_id": "MDQ6VXNlcjU2MDIzMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5602332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harkous", "html_url": "https://github.com/harkous", "followers_url": "https://api.github.com/users/harkous/followers", "following_url": "https://api.github.com/users/harkous/following{/other_user}", "gists_url": "https://api.github.com/users/harkous/gists{/gist_id}", "starred_url": "https://api.github.com/users/harkous/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harkous/subscriptions", "organizations_url": "https://api.github.com/users/harkous/orgs", "repos_url": "https://api.github.com/users/harkous/repos", "events_url": "https://api.github.com/users/harkous/events{/privacy}", "received_events_url": "https://api.github.com/users/harkous/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can't reproduce this on master now. Seems to be fixed.", "Thanks a lot. It seems to be fixed. Now I get `'[SEP]'` and `' [SEP]'` consecutively with the first and the second command above. So we can close this issue." ]
1,569
1,570
1,570
CONTRIBUTOR
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): GPT-2 Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: Run the following: ```bash from transformers.tokenization_gpt2 import GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained('gpt2') tokenizer.add_special_tokens({"sep_token": "[SEP]"}) # this works, outputting "[SEP]" tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(tokenizer.encode("[SEP]"))) # this fails tokenizer.decode(tokenizer.encode("[SEP]")) ``` The last command gives this error: ``` miniconda3/envs/deepnlg/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 937, in decode text = text.replace(self._cls_token, self._sep_token) TypeError: replace() argument 1 must be str, not None ``` <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior The expectation is that [SEP] is output from the `decode` function. ## Environment * OS: OSX * Python version: 3.7 * PyTorch version: 1.1.0 * PyTorch Transformers version (or branch): Master (2dc8cb87341223e86220516951bb4ad84f880b4a) * Using GPU ? No * Distributed of parallel setup ? No * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1382/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1381
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1381/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1381/comments
https://api.github.com/repos/huggingface/transformers/issues/1381/events
https://github.com/huggingface/transformers/issues/1381
500,302,590
MDU6SXNzdWU1MDAzMDI1OTA=
1,381
how to train RoBERTa from scratch
{ "login": "008karan", "id": 18630864, "node_id": "MDQ6VXNlcjE4NjMwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/008karan", "html_url": "https://github.com/008karan", "followers_url": "https://api.github.com/users/008karan/followers", "following_url": "https://api.github.com/users/008karan/following{/other_user}", "gists_url": "https://api.github.com/users/008karan/gists{/gist_id}", "starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/008karan/subscriptions", "organizations_url": "https://api.github.com/users/008karan/orgs", "repos_url": "https://api.github.com/users/008karan/repos", "events_url": "https://api.github.com/users/008karan/events{/privacy}", "received_events_url": "https://api.github.com/users/008karan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.pretraining.md", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "You can now leave `--model_name_or_path` to None in `run_language_modeling.py` to train a model from scratch.\r\n\r\nSee also https://huggingface.co/blog/how-to-train", "When I put new --config_name and --tokenizer_name. It shows me that \r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\nAnyone can help me?" ]
1,569
1,582
1,576
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I want to train RoBERTa model from scratch on different language. Is there any implementation available here to do this?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1381/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1380
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1380/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1380/comments
https://api.github.com/repos/huggingface/transformers/issues/1380/events
https://github.com/huggingface/transformers/issues/1380
500,045,764
MDU6SXNzdWU1MDAwNDU3NjQ=
1,380
Confusing tokenizer result on single word
{ "login": "malmaud", "id": 987837, "node_id": "MDQ6VXNlcjk4NzgzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/987837?v=4", "gravatar_id": "", "url": "https://api.github.com/users/malmaud", "html_url": "https://github.com/malmaud", "followers_url": "https://api.github.com/users/malmaud/followers", "following_url": "https://api.github.com/users/malmaud/following{/other_user}", "gists_url": "https://api.github.com/users/malmaud/gists{/gist_id}", "starred_url": "https://api.github.com/users/malmaud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/malmaud/subscriptions", "organizations_url": "https://api.github.com/users/malmaud/orgs", "repos_url": "https://api.github.com/users/malmaud/repos", "events_url": "https://api.github.com/users/malmaud/events{/privacy}", "received_events_url": "https://api.github.com/users/malmaud/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @malmaud I think this #1196 can help you. The Roberta/GPT2 tokenizer expect a space to start. Without that, it sounds like you'll get strange behaviors.\r\n\r\nTo get the same output, in your first example, change it to \r\n```\r\nt.tokenize(\"mystery\", add_prefix_space=True)\r\n['Ġmystery']\r\n```", "That does work, thanks. I'm still confused why this doesn't work, though:\r\n\r\n```\r\nt.tokenize(\"<s> mystery </s>\")\r\n```\r\n\r\ngives `['<s>', 'my', 'stery', '</s>']`", "Hey @malmaud, spent some time going through the source code. So like above this gives the correct result:\r\n```\r\nt.tokenize(\"mystery\", add_prefix_space=True)\r\n['Ġmystery']\r\n```\r\nHowever\r\n```\r\nt.tokenizer(\" mystery\")\r\n['my', 'stery']\r\n```\r\nI thought these should be doing the same thing. In the tokenization_gpt2.py file, it says:\r\n```\r\nif add_prefix_space:\r\n text = ' ' + text\r\n```\r\nThis should give the same results in both files then however when I add a print(text) statement before and after that I noticed I got these results. (using your example now)\r\n```\r\nt.tokenize(\"<s> mystery\")\r\nmystery\r\nmystery\r\n['<s>', 'my', 'stery']\r\n\r\nt.tokenize(\"<s> mystery\", add_prefix_space=True)\r\nmystery\r\n mystery\r\n['<s>', 'Ġmystery']\r\n```\r\n\r\nThis means that even though we are putting a single word in with a leading space, something in the preprocessing is getting rid of the initial space(s). So we need to use the add_prefix_space=True in order to get the space back or else the function won't be using the string we are expecting it will be using.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
Not sure if this is expected, but it seems confusing to me: ```python import transformers t=transformers.AutoTokenizer.from_pretrained('roberta-base') t.tokenize("mystery") ``` yields two tokens, `['my', 'stery']`. Yet ``` t.tokenize("a mystery") ``` *also* yields two tokens, `['a', 'Ġmystery']`. I would have thought this should yield one more token than tokenizing "mystery" alone.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1380/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1380/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1379
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1379/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1379/comments
https://api.github.com/repos/huggingface/transformers/issues/1379/events
https://github.com/huggingface/transformers/issues/1379
499,994,566
MDU6SXNzdWU0OTk5OTQ1NjY=
1,379
TransfoXLCorpus requires pytorch to tokenize files
{ "login": "tomweingarten", "id": 3465707, "node_id": "MDQ6VXNlcjM0NjU3MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3465707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomweingarten", "html_url": "https://github.com/tomweingarten", "followers_url": "https://api.github.com/users/tomweingarten/followers", "following_url": "https://api.github.com/users/tomweingarten/following{/other_user}", "gists_url": "https://api.github.com/users/tomweingarten/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomweingarten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomweingarten/subscriptions", "organizations_url": "https://api.github.com/users/tomweingarten/orgs", "repos_url": "https://api.github.com/users/tomweingarten/repos", "events_url": "https://api.github.com/users/tomweingarten/events{/privacy}", "received_events_url": "https://api.github.com/users/tomweingarten/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## 🐛 Bug The current TransfoXLCorpus code requires pytorch, and fails if it is not installed. Model I am using (Bert, XLNet....): Transformer-XL Language I am using the model on (English, Chinese....): Other The problem arise when using: * [ X ] my own modified scripts: I'm using a very simple script to read in text files, see code below The tasks I am working on is: * [ X ] my own task or dataset: I am attempting to build a corpus from my own dataset of long text sentences. ## To Reproduce Steps to reproduce the behavior: corpus = TransfoXLCorpus(lower_case=True, delimiter=" ") corpus.build_corpus(EXAMPLE_DIR, "text8") Traceback (most recent call last): File "build_xl_corpus.py", line 26, in <module> corpus.build_corpus(EXAMPLE_DIR, "text8") File "/home/tom/.local/lib/python3.7/site-packages/transformers/tokenization_transfo_xl.py", line 521, in build_corpus os.path.join(path, 'train.txt'), ordered=True, add_eos=False) File "/home/tom/.local/lib/python3.7/site-packages/transformers/tokenization_transfo_xl.py", line 187, in encode_file encoded.append(self.convert_to_tensor(symbols)) File "/home/tom/.local/lib/python3.7/site-packages/transformers/tokenization_transfo_xl.py", line 246, in convert_to_tensor return torch.LongTensor(self.convert_tokens_to_ids(symbols)) NameError: name 'torch' is not defined ## Expected behavior I did not expect this behavior to require pytorch ## Environment * OS: Ubuntu * Python version: * PyTorch version: None * PyTorch Transformers version (or branch): 2.0.0 * Using GPU ? Yes * Distributed of parallel setup ? No * Any other relevant information:
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1379/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1378
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1378/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1378/comments
https://api.github.com/repos/huggingface/transformers/issues/1378/events
https://github.com/huggingface/transformers/issues/1378
499,954,198
MDU6SXNzdWU0OTk5NTQxOTg=
1,378
TFDistilBertForSequenceClassification - TypeError: len is not well defined for symbolic Tensors during model.fit()
{ "login": "rickysaurav", "id": 13986039, "node_id": "MDQ6VXNlcjEzOTg2MDM5", "avatar_url": "https://avatars.githubusercontent.com/u/13986039?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rickysaurav", "html_url": "https://github.com/rickysaurav", "followers_url": "https://api.github.com/users/rickysaurav/followers", "following_url": "https://api.github.com/users/rickysaurav/following{/other_user}", "gists_url": "https://api.github.com/users/rickysaurav/gists{/gist_id}", "starred_url": "https://api.github.com/users/rickysaurav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rickysaurav/subscriptions", "organizations_url": "https://api.github.com/users/rickysaurav/orgs", "repos_url": "https://api.github.com/users/rickysaurav/repos", "events_url": "https://api.github.com/users/rickysaurav/events{/privacy}", "received_events_url": "https://api.github.com/users/rickysaurav/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "so, how to solve this problem?", "Should be solved on master and the latest release." ]
1,569
1,571
1,570
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (TFDistilBertForSequenceClassification): Language I am using the model on (English): The problem arise when using: model.fit() * [ ] the official example scripts: * [x] my own modified scripts: The tasks I am working on is: * [ ] an official GLUE/SQUaD task: * [x] my own task or dataset: ## To Reproduce Steps to reproduce the behavior: 1. create a random classification train,test set 2. get the pretrained TFDistilBertForSequenceClassification model 3. call fit() on the model for finetuning ```python x_train = np.random.randint(2000, size=(100, 12)) x_train[:,0]=101 x_train[:,11]=102 y_train = np.random.randint(2, size=100) model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased',num_labels = 2) model.compile() model.fit(x_train,y_train,epochs = 1,batch_size = 32,verbose=1) ``` ``` TypeError: in converted code: relative to /usr/local/lib/python3.6/dist-packages: transformers/modeling_tf_distilbert.py:680 call * distilbert_output = self.distilbert(inputs, **kwargs) tensorflow_core/python/keras/engine/base_layer.py:842 __call__ outputs = call_fn(cast_inputs, *args, **kwargs) transformers/modeling_tf_distilbert.py:447 call * tfmr_output = self.transformer([embedding_output, attention_mask, head_mask], training=training) tensorflow_core/python/keras/engine/base_layer.py:891 __call__ outputs = self.call(cast_inputs, *args, **kwargs) transformers/modeling_tf_distilbert.py:382 call layer_outputs = layer_module([hidden_state, attn_mask, head_mask[i]], training=training) tensorflow_core/python/keras/engine/base_layer.py:891 __call__ outputs = self.call(cast_inputs, *args, **kwargs) transformers/modeling_tf_distilbert.py:324 call sa_output = self.attention([x, x, x, attn_mask, head_mask], training=training) tensorflow_core/python/keras/engine/base_layer.py:891 __call__ outputs = self.call(cast_inputs, *args, **kwargs) transformers/modeling_tf_distilbert.py:229 call assert 2 <= len(tf.shape(mask)) <= 3 tensorflow_core/python/framework/ops.py:741 __len__ "shape information.".format(self.name)) TypeError: len is not well defined for symbolic Tensors. (tf_distil_bert_for_sequence_classification/distilbert/transformer/layer_._0/attention/Shape_2:0) Please call `x.shape` rather than `len(x)` for shape information. ``` ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Colab Notebook * Python version:3.6.8 * PyTorch version:N/A * Tensorflow version:tf-nightly-gpu-2.0-preview * PyTorch Transformers version (or branch): 2.0/0 * Using GPU ? yes * Distributed of parallel setup ? No ## Additional context Calling the model directly with the input as mentioned in the example model doc works fine
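Once the upstream fix is in, the usual TF2/Keras fine-tuning pattern looks roughly like the sketch below (compile with an explicit loss on the logits, then call `fit`); the toy random data mirrors the report and the hyperparameters are placeholders.

```python
import numpy as np
import tensorflow as tf
from transformers import TFDistilBertForSequenceClassification

model = TFDistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

x_train = np.random.randint(2000, size=(100, 12))
y_train = np.random.randint(2, size=100)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, batch_size=32)
```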
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1378/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1378/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1377
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1377/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1377/comments
https://api.github.com/repos/huggingface/transformers/issues/1377/events
https://github.com/huggingface/transformers/issues/1377
499,932,246
MDU6SXNzdWU0OTk5MzIyNDY=
1,377
Error when calculating token_ids and Mask LM
{ "login": "monanahe", "id": 29702203, "node_id": "MDQ6VXNlcjI5NzAyMjAz", "avatar_url": "https://avatars.githubusercontent.com/u/29702203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/monanahe", "html_url": "https://github.com/monanahe", "followers_url": "https://api.github.com/users/monanahe/followers", "following_url": "https://api.github.com/users/monanahe/following{/other_user}", "gists_url": "https://api.github.com/users/monanahe/gists{/gist_id}", "starred_url": "https://api.github.com/users/monanahe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/monanahe/subscriptions", "organizations_url": "https://api.github.com/users/monanahe/orgs", "repos_url": "https://api.github.com/users/monanahe/repos", "events_url": "https://api.github.com/users/monanahe/events{/privacy}", "received_events_url": "https://api.github.com/users/monanahe/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (DistilBert): Language I am using the model on (English): The problem arise when using: Distiller.prepare_batch( ) Error when token_ids is masked by mask LM matrix * the official example scripts: _token_ids_real = token_ids[pred_mask] * my own modified scripts: _token_ids_real=torch.mul(token_ids, pred_mask) The tasks I am working on is: * [GLUE ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. pred_mask is matrix with 0,1. Operation token_ids[pred_mask] seems to make some same matrix, instead of masking token_ids <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Win10 * Python version: 3.6 * PyTorch version: 1.1 * PyTorch Transformers version (or branch): 2.0/0 * Using GPU ? Yes * Distributed of parallel setup ? No * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. -->
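A minimal illustration of the difference the report is about: boolean indexing with `token_ids[pred_mask]` selects the masked positions (and needs a bool/uint8 mask), while `torch.mul(token_ids, pred_mask)` only zeroes out the unmasked ones and keeps the original shape — the two are not interchangeable.

```python
import torch

token_ids = torch.tensor([101, 7592, 2088, 102])
pred_mask = torch.tensor([0, 1, 1, 0], dtype=torch.bool)

selected = token_ids[pred_mask]         # tensor([7592, 2088]) — picks masked positions
zeroed = token_ids * pred_mask.long()   # tensor([0, 7592, 2088, 0]) — element-wise product
print(selected, zeroed)
```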
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1377/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1376
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1376/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1376/comments
https://api.github.com/repos/huggingface/transformers/issues/1376/events
https://github.com/huggingface/transformers/issues/1376
499,916,787
MDU6SXNzdWU0OTk5MTY3ODc=
1,376
Does it save the best model when using examples like run_glue?
{ "login": "Reveyer", "id": 39128351, "node_id": "MDQ6VXNlcjM5MTI4MzUx", "avatar_url": "https://avatars.githubusercontent.com/u/39128351?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Reveyer", "html_url": "https://github.com/Reveyer", "followers_url": "https://api.github.com/users/Reveyer/followers", "following_url": "https://api.github.com/users/Reveyer/following{/other_user}", "gists_url": "https://api.github.com/users/Reveyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/Reveyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Reveyer/subscriptions", "organizations_url": "https://api.github.com/users/Reveyer/orgs", "repos_url": "https://api.github.com/users/Reveyer/repos", "events_url": "https://api.github.com/users/Reveyer/events{/privacy}", "received_events_url": "https://api.github.com/users/Reveyer/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,569
1,569
1,569
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I read the code of `run_glue.py`, I think it just save model checkpoint and the last step. Is it wrong for me, or do I have to do some other operations?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1376/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1375
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1375/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1375/comments
https://api.github.com/repos/huggingface/transformers/issues/1375/events
https://github.com/huggingface/transformers/issues/1375
499,912,208
MDU6SXNzdWU0OTk5MTIyMDg=
1,375
cannot import name 'TFBertForSequenceClassification'
{ "login": "samarthsarin", "id": 40137295, "node_id": "MDQ6VXNlcjQwMTM3Mjk1", "avatar_url": "https://avatars.githubusercontent.com/u/40137295?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samarthsarin", "html_url": "https://github.com/samarthsarin", "followers_url": "https://api.github.com/users/samarthsarin/followers", "following_url": "https://api.github.com/users/samarthsarin/following{/other_user}", "gists_url": "https://api.github.com/users/samarthsarin/gists{/gist_id}", "starred_url": "https://api.github.com/users/samarthsarin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samarthsarin/subscriptions", "organizations_url": "https://api.github.com/users/samarthsarin/orgs", "repos_url": "https://api.github.com/users/samarthsarin/repos", "events_url": "https://api.github.com/users/samarthsarin/events{/privacy}", "received_events_url": "https://api.github.com/users/samarthsarin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! The TensorFlow components are only available when you have TF2 installed on your system. Could you please check that you have it in the environment in which you're running your code?", "It worked. Thanks" ]
1,569
1,569
1,569
NONE
null
I am unable to import TFBertForSequenceClassification. `from transformers import TFBertForSequenceClassification` fails with the error `cannot import name 'TFBertForSequenceClassification'`.
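A quick way to check the root cause suggested in the reply (the TF classes are only exported when a TensorFlow 2.x install is visible to the library) — a small sketch:

```python
import transformers

print(transformers.is_tf_available())  # must be True for the TF* classes to be importable
if transformers.is_tf_available():
    from transformers import TFBertForSequenceClassification
    model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")
```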
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1375/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1375/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1374
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1374/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1374/comments
https://api.github.com/repos/huggingface/transformers/issues/1374/events
https://github.com/huggingface/transformers/pull/1374
499,911,762
MDExOlB1bGxSZXF1ZXN0MzIyNTAxMzA3
1,374
Fix run_glue.py on the QNLI task
{ "login": "adamluo1995", "id": 18718520, "node_id": "MDQ6VXNlcjE4NzE4NTIw", "avatar_url": "https://avatars.githubusercontent.com/u/18718520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamluo1995", "html_url": "https://github.com/adamluo1995", "followers_url": "https://api.github.com/users/adamluo1995/followers", "following_url": "https://api.github.com/users/adamluo1995/following{/other_user}", "gists_url": "https://api.github.com/users/adamluo1995/gists{/gist_id}", "starred_url": "https://api.github.com/users/adamluo1995/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamluo1995/subscriptions", "organizations_url": "https://api.github.com/users/adamluo1995/orgs", "repos_url": "https://api.github.com/users/adamluo1995/repos", "events_url": "https://api.github.com/users/adamluo1995/events{/privacy}", "received_events_url": "https://api.github.com/users/adamluo1995/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
In the QNLI task, the ids that should be truncated belong to the pair (second) sequence, because that is the long one. Otherwise we can't load the QNLI dataset successfully.
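For reference, the classic pair-truncation helper from the original BERT code sketches the behaviour this PR is after for QNLI (always trim the longer of the two sequences until the pair fits); this is illustrative, not the PR's actual diff.

```python
def truncate_seq_pair(tokens_a, tokens_b, max_length):
    """Trim the longer sequence one token at a time until the pair fits."""
    while len(tokens_a) + len(tokens_b) > max_length:
        if len(tokens_a) > len(tokens_b):
            tokens_a.pop()
        else:
            tokens_b.pop()

a = list(range(10))
b = list(range(30))
truncate_seq_pair(a, b, 20)
print(len(a), len(b))  # 10 10
```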
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1374/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1374", "html_url": "https://github.com/huggingface/transformers/pull/1374", "diff_url": "https://github.com/huggingface/transformers/pull/1374.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1374.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1373
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1373/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1373/comments
https://api.github.com/repos/huggingface/transformers/issues/1373/events
https://github.com/huggingface/transformers/pull/1373
499,906,047
MDExOlB1bGxSZXF1ZXN0MzIyNDk4MTU0
1,373
Fixed critical CSS font-family issues
{ "login": "TimYagan", "id": 30977192, "node_id": "MDQ6VXNlcjMwOTc3MTky", "avatar_url": "https://avatars.githubusercontent.com/u/30977192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TimYagan", "html_url": "https://github.com/TimYagan", "followers_url": "https://api.github.com/users/TimYagan/followers", "following_url": "https://api.github.com/users/TimYagan/following{/other_user}", "gists_url": "https://api.github.com/users/TimYagan/gists{/gist_id}", "starred_url": "https://api.github.com/users/TimYagan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TimYagan/subscriptions", "organizations_url": "https://api.github.com/users/TimYagan/orgs", "repos_url": "https://api.github.com/users/TimYagan/repos", "events_url": "https://api.github.com/users/TimYagan/events{/privacy}", "received_events_url": "https://api.github.com/users/TimYagan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Amazing!" ]
1,569
1,570
1,570
CONTRIBUTOR
null
Fixed critical CSS font-family issues to ensure compatibility with multiple web browsers
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1373/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1373", "html_url": "https://github.com/huggingface/transformers/pull/1373", "diff_url": "https://github.com/huggingface/transformers/pull/1373.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1373.patch", "merged_at": 1570143843000 }
https://api.github.com/repos/huggingface/transformers/issues/1372
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1372/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1372/comments
https://api.github.com/repos/huggingface/transformers/issues/1372/events
https://github.com/huggingface/transformers/pull/1372
499,880,746
MDExOlB1bGxSZXF1ZXN0MzIyNDg0OTQ3
1,372
Simplify code by using six.string_types
{ "login": "cclauss", "id": 3709715, "node_id": "MDQ6VXNlcjM3MDk3MTU=", "avatar_url": "https://avatars.githubusercontent.com/u/3709715?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cclauss", "html_url": "https://github.com/cclauss", "followers_url": "https://api.github.com/users/cclauss/followers", "following_url": "https://api.github.com/users/cclauss/following{/other_user}", "gists_url": "https://api.github.com/users/cclauss/gists{/gist_id}", "starred_url": "https://api.github.com/users/cclauss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cclauss/subscriptions", "organizations_url": "https://api.github.com/users/cclauss/orgs", "repos_url": "https://api.github.com/users/cclauss/repos", "events_url": "https://api.github.com/users/cclauss/events{/privacy}", "received_events_url": "https://api.github.com/users/cclauss/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1372?src=pr&el=h1) Report\n> Merging [#1372](https://codecov.io/gh/huggingface/transformers/pull/1372?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fd97761c5a977fd22df789d2851cf57c7c9c0930?src=pr&el=desc) will **increase** coverage by `1.42%`.\n> The diff coverage is `83.33%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1372/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1372?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1372 +/- ##\n==========================================\n+ Coverage 84.74% 86.16% +1.42% \n==========================================\n Files 91 91 \n Lines 13593 13593 \n==========================================\n+ Hits 11519 11713 +194 \n+ Misses 2074 1880 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1372?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1372/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.43% <83.33%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1372/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.75% <0%> (+1.35%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1372/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `95.45% <0%> (+2.27%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1372/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (+2.46%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1372/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `80.57% <0%> (+15.1%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1372/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+17.02%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1372/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1372?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1372?src=pr&el=footer). Last update [fd97761...ba6f2d6](https://codecov.io/gh/huggingface/transformers/pull/1372?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "We'll handle this by dropping python2 support in the next release (and using flake8) cc @aaugustin " ]
1,569
1,576
1,576
NONE
null
https://six.readthedocs.io/#six.string_types
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1372/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1372", "html_url": "https://github.com/huggingface/transformers/pull/1372", "diff_url": "https://github.com/huggingface/transformers/pull/1372.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1372.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1371
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1371/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1371/comments
https://api.github.com/repos/huggingface/transformers/issues/1371/events
https://github.com/huggingface/transformers/pull/1371
499,873,785
MDExOlB1bGxSZXF1ZXN0MzIyNDgwOTEw
1,371
Make activation functions available from modeling_utils (PyTorch)
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1371?src=pr&el=h1) Report\n> Merging [#1371](https://codecov.io/gh/huggingface/transformers/pull/1371?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae50ad91ea2fedb64ecd2e7c8e2d0d4778dc03aa?src=pr&el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `85.71%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1371/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1371?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1371 +/- ##\n==========================================\n+ Coverage 83.76% 84.74% +0.97% \n==========================================\n Files 84 84 \n Lines 12596 12559 -37 \n==========================================\n+ Hits 10551 10643 +92 \n+ Misses 2045 1916 -129\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1371?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.41% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `72.02% <100%> (+0.77%)` | :arrow_up: |\n| [transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `83.88% <100%> (-0.11%)` | :arrow_down: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.01% <100%> (+5.54%)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.8% <100%> (-0.03%)` | :arrow_down: |\n| [transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.16% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `88.41% <100%> (+0.23%)` | :arrow_up: |\n| [transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `88.36% <100%> (-0.07%)` | :arrow_down: |\n| [transformers/modeling\\_transfo\\_xl\\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGxfdXRpbGl0aWVzLnB5) | `54.16% <37.5%> (+0.27%)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `92.57% <90%> (-0.12%)` | :arrow_down: |\n| ... 
and [7 more](https://codecov.io/gh/huggingface/transformers/pull/1371/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1371?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1371?src=pr&el=footer). Last update [ae50ad9...716d783](https://codecov.io/gh/huggingface/transformers/pull/1371?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Unstale. ", "Would feel syntactically cleaner if we could do `ACT2FN.gelu()` instead of a dict (also gives some IDE goodness like autocomplete) (I guess through a class or namespace or something), what do you guys think?", "> Would feel syntactically cleaner if we could do `ACT2FN.gelu()` instead of a dict (also gives some IDE goodness like autocomplete) (I guess through a class or namespace or something), what do you guys think?\r\n\r\nSounds good but note that this is not something I introduced. The ACT2FN dict already existed, but wasn't used consistently it seemed.", "Ah yeah, I see. Would you want to do this change, if you have the time/bandwidth? (+ rebasing on current master so we can merge easily?)", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "AFAICT, this has been done by @sshleifer on master. Re-open if necessary!" ]
1,569
1,583
1,583
COLLABORATOR
null
* This commit replaces references to PyTorch activation functions/modules by a dict of functions that lives in `modeling_utils`. This ensures that all activation functions are available to all modules, praticularly custom functions such as swish and new_gelu. * In addition, when available (PT1.2) the native PyTorch gelu function will be used - it supports a CPP/CUDA implementation. **NOTE** that this replaces all `nn.Module`'s by bare functions except for one which was required for testing to be of the type `nn.Module`. If requested, this can be reverted so that only function calls are replaced by ACT2FN functions, and that existing `nn.Module`s are untouched. **NOTE** that one would thus also expect that _all_ usages of activation functions are taken from `ACT2FN` for consistency's sake. **NOTE** since the Module counter-part of PyTorch's GeLU [isn't available (yet)](https://github.com/pytorch/pytorch/pull/20665#issuecomment-536359684), it might be worth waiting to implement this pull, and then use Modules and functions in the right places where one would expect, i.e. `Module` when part of architecture, function when processing other kinds of data.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1371/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1371", "html_url": "https://github.com/huggingface/transformers/pull/1371", "diff_url": "https://github.com/huggingface/transformers/pull/1371.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1371.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1370
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1370/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1370/comments
https://api.github.com/repos/huggingface/transformers/issues/1370/events
https://github.com/huggingface/transformers/issues/1370
499,844,775
MDU6SXNzdWU0OTk4NDQ3NzU=
1,370
considerd to add albert?
{ "login": "fengzuo97", "id": 48614846, "node_id": "MDQ6VXNlcjQ4NjE0ODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/48614846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fengzuo97", "html_url": "https://github.com/fengzuo97", "followers_url": "https://api.github.com/users/fengzuo97/followers", "following_url": "https://api.github.com/users/fengzuo97/following{/other_user}", "gists_url": "https://api.github.com/users/fengzuo97/gists{/gist_id}", "starred_url": "https://api.github.com/users/fengzuo97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fengzuo97/subscriptions", "organizations_url": "https://api.github.com/users/fengzuo97/orgs", "repos_url": "https://api.github.com/users/fengzuo97/repos", "events_url": "https://api.github.com/users/fengzuo97/events{/privacy}", "received_events_url": "https://api.github.com/users/fengzuo97/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Would definitely love to see an implementation of ALBERT added to this repository. Just for completeness:\r\n\r\n* paper: https://arxiv.org/abs/1909.11942\r\n* reddit: https://www.reddit.com/r/MachineLearning/comments/d9tdfo/albert_a_lite_bert_for_selfsupervised_learning_of/\r\n* medium: https://medium.com/syncedreview/googles-albert-is-a-leaner-bert-achieves-sota-on-3-nlp-benchmarks-f64466dd583\r\n\r\nThat said, it could be even more interesting to implement the core improvements (factorized embedding parameterization, cross-layer parameter sharing) from ALBERT in (some?/all?) other transformers as optional features?\r\n", "Knowing how fast the team works, I would expect ALBERT to be implemented quite soon. That being said, I haven't had time to read the ALBERT paper yet, so it might be more difficult than previous BERT iterations such as distilbert and RoBERTa.", "I think ALBERT is very cool! Expect...", "And in pytorch (using code from this repo and weights from brightmart) https://github.com/lonePatient/albert_pytorch", "Any Update on the progress?", "The ALBERT paper will be presented at ICLR in April 2020. From what I last heard, the huggingface team has been talking with the people over at Google AI to share the details of the model, but I can imagine that the researchers rather wait until the paper has been presented. One of those reasons being that they want to get citations from their ICLR talk rather than an arXiv citation which, in the field, is \"worth less\" than a big conference proceeding. \r\n\r\nFor now, just be patient. I am sure that the huggingface team will have a big announcement (follow their Twitter/LinkedIn channels) with a new version bump. No need to keep bumping this topic.", "https://github.com/interviewBubble/Google-ALBERT", "The official code and models got released :slightly_smiling_face: \r\nhttps://github.com/google-research/google-research/tree/master/albert ", "[WIP]\r\nALBERT in tensorflow 2.0\r\nhttps://github.com/kamalkraj/ALBERT-TF2.0\r\n", "https://github.com/lonePatient/albert_pytorch\r\n\r\nDataset: MNLI\r\nModel: ALBERT_BASE_V2\r\nDev accuracy : 0.8418\r\n\r\nDataset: SST-2\r\nModel: ALBERT_BASE_V2\r\nDev accuracy :0.926", "PR was created, see here:\r\n\r\nhttps://github.com/huggingface/transformers/pull/1683", "> [WIP]\r\n> ALBERT in tensorflow 2.0\r\n> https://github.com/kamalkraj/ALBERT-TF2.0\r\n\r\nVerison 2 weights added.\r\nSupport for SQuAD 1.1 and 2.0 added. \r\nReproduces the same results from paper. From my experiments, ALBERT model is very sensitive to hyperparameter like Batch Size. FineTuning using AdamW as Default in Original Repo. AdamW performs better than LAMB on Model finetuning. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,578
1,578
NONE
null
## 🚀 Feature <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Additional context <!-- Add any other context or screenshots about the feature request here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1370/reactions", "total_count": 39, "+1": 39, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1370/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1369
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1369/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1369/comments
https://api.github.com/repos/huggingface/transformers/issues/1369/events
https://github.com/huggingface/transformers/pull/1369
499,832,540
MDExOlB1bGxSZXF1ZXN0MzIyNDU0MTgw
1,369
Update README.md
{ "login": "Santosh-Gupta", "id": 5524261, "node_id": "MDQ6VXNlcjU1MjQyNjE=", "avatar_url": "https://avatars.githubusercontent.com/u/5524261?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Santosh-Gupta", "html_url": "https://github.com/Santosh-Gupta", "followers_url": "https://api.github.com/users/Santosh-Gupta/followers", "following_url": "https://api.github.com/users/Santosh-Gupta/following{/other_user}", "gists_url": "https://api.github.com/users/Santosh-Gupta/gists{/gist_id}", "starred_url": "https://api.github.com/users/Santosh-Gupta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Santosh-Gupta/subscriptions", "organizations_url": "https://api.github.com/users/Santosh-Gupta/orgs", "repos_url": "https://api.github.com/users/Santosh-Gupta/repos", "events_url": "https://api.github.com/users/Santosh-Gupta/events{/privacy}", "received_events_url": "https://api.github.com/users/Santosh-Gupta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1369?src=pr&el=h1) Report\n> Merging [#1369](https://codecov.io/gh/huggingface/transformers/pull/1369?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae50ad91ea2fedb64ecd2e7c8e2d0d4778dc03aa?src=pr&el=desc) will **increase** coverage by `0.92%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1369/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1369?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1369 +/- ##\n==========================================\n+ Coverage 83.76% 84.69% +0.92% \n==========================================\n Files 84 84 \n Lines 12596 12596 \n==========================================\n+ Hits 10551 10668 +117 \n+ Misses 2045 1928 -117\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1369?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1369/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `72.14% <0%> (+0.89%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1369/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.22% <0%> (+5.75%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1369/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `95% <0%> (+7.5%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1369/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `76.92% <0%> (+66.43%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1369?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1369?src=pr&el=footer). Last update [ae50ad9...d1176d5](https://codecov.io/gh/huggingface/transformers/pull/1369?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great, thanks for updating the README!" ]
1,569
1,569
1,569
CONTRIBUTOR
null
Lines 183 - 200, fixed indentation. Line 198, replaced `tokenizer_class` with `BertTokenizer`, since `tokenizer_class` is not defined in the loop it belongs to.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1369/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1369", "html_url": "https://github.com/huggingface/transformers/pull/1369", "diff_url": "https://github.com/huggingface/transformers/pull/1369.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1369.patch", "merged_at": 1569869282000 }
https://api.github.com/repos/huggingface/transformers/issues/1368
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1368/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1368/comments
https://api.github.com/repos/huggingface/transformers/issues/1368/events
https://github.com/huggingface/transformers/issues/1368
499,832,359
MDU6SXNzdWU0OTk4MzIzNTk=
1,368
Tried to import TFBertForPreTraining in google colab
{ "login": "mandavachetana", "id": 55931846, "node_id": "MDQ6VXNlcjU1OTMxODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/55931846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mandavachetana", "html_url": "https://github.com/mandavachetana", "followers_url": "https://api.github.com/users/mandavachetana/followers", "following_url": "https://api.github.com/users/mandavachetana/following{/other_user}", "gists_url": "https://api.github.com/users/mandavachetana/gists{/gist_id}", "starred_url": "https://api.github.com/users/mandavachetana/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mandavachetana/subscriptions", "organizations_url": "https://api.github.com/users/mandavachetana/orgs", "repos_url": "https://api.github.com/users/mandavachetana/repos", "events_url": "https://api.github.com/users/mandavachetana/events{/privacy}", "received_events_url": "https://api.github.com/users/mandavachetana/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Hey @mandavachetana its not just a google colab thing. Take a look here #1375 You need to make sure you are using tensorflow 2.0 and it should work.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
Tried to import TFBertForPreTraining and received an error from transformers import BertTokenizer, TFBertForPreTraining --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-24-91f8709e090f> in <module>() ----> 1 from transformers import BertTokenizer, TFBertForPreTraining ImportError: cannot import name 'TFBertForPreTraining' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. ---------------------------------------------------------------------------
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1368/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1367
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1367/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1367/comments
https://api.github.com/repos/huggingface/transformers/issues/1367/events
https://github.com/huggingface/transformers/issues/1367
499,815,565
MDU6SXNzdWU0OTk4MTU1NjU=
1,367
Model does not train when using new BertModel, but does with old BertModel
{ "login": "Peter-Devine", "id": 49399312, "node_id": "MDQ6VXNlcjQ5Mzk5MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/49399312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Peter-Devine", "html_url": "https://github.com/Peter-Devine", "followers_url": "https://api.github.com/users/Peter-Devine/followers", "following_url": "https://api.github.com/users/Peter-Devine/following{/other_user}", "gists_url": "https://api.github.com/users/Peter-Devine/gists{/gist_id}", "starred_url": "https://api.github.com/users/Peter-Devine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Peter-Devine/subscriptions", "organizations_url": "https://api.github.com/users/Peter-Devine/orgs", "repos_url": "https://api.github.com/users/Peter-Devine/repos", "events_url": "https://api.github.com/users/Peter-Devine/events{/privacy}", "received_events_url": "https://api.github.com/users/Peter-Devine/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "You can check the two migration guides, they explain all the differences:\r\n- https://github.com/huggingface/transformers#Migrating-from-pytorch-transformers-to-transformers\r\n- https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,576
1,576
NONE
null
## 📚 Migration I am currently working on using Transformers with Snorkel's classification library [https://github.com/snorkel-team/snorkel](https://github.com/snorkel-team/snorkel) (for MTL learning in the future). I currently am trying to troubleshoot why the model is not learning, and so have my experiment set up such that the Snorkel library learns one task, essentially training a BERT model and linear layer. The code for this experiment can be found at [https://github.com/Peter-Devine/test_cls_snorkel_mtl]( https://github.com/Peter-Devine/test_cls_snorkel_mtl ). To run it, you will need torch, snorkel, numpy, pytorch_pretrained_bert and transformers. My problem is as follows. When I run the code in `test_cls_snorkel_mtl/tutorials/ISEAR_pretrain_tutorial.py`, my code runs fine and the model's validation accuracy scores are good. This is because I am using the old pytorch_pretrained_bert BertModel in `test_cls_snorkel_mtl/modules/bert_module.py`. If you uncomment line 6 of `test_cls_snorkel_mtl/modules/bert_module.py` and use the new transformers BertModel, then running `test_cls_snorkel_mtl/tutorials/ISEAR_pretrain_tutorial.py` will result in a model that never converges and bad validation accuracy. From reading the code on Snorkel, I cannot seem to find the reason as to why this would be. What are the major changes in training a model between versions of pytorch_pretrained_bert and transformers. Do back-passes etc. work the same way in both models? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1367/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1366
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1366/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1366/comments
https://api.github.com/repos/huggingface/transformers/issues/1366/events
https://github.com/huggingface/transformers/pull/1366
499,792,138
MDExOlB1bGxSZXF1ZXN0MzIyNDI2NTU4
1,366
fix redundant initializations of Embeddings in RobertaEmbeddings
{ "login": "ikuyamada", "id": 426342, "node_id": "MDQ6VXNlcjQyNjM0Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/426342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ikuyamada", "html_url": "https://github.com/ikuyamada", "followers_url": "https://api.github.com/users/ikuyamada/followers", "following_url": "https://api.github.com/users/ikuyamada/following{/other_user}", "gists_url": "https://api.github.com/users/ikuyamada/gists{/gist_id}", "starred_url": "https://api.github.com/users/ikuyamada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ikuyamada/subscriptions", "organizations_url": "https://api.github.com/users/ikuyamada/orgs", "repos_url": "https://api.github.com/users/ikuyamada/repos", "events_url": "https://api.github.com/users/ikuyamada/events{/privacy}", "received_events_url": "https://api.github.com/users/ikuyamada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry, I will fix this " ]
1,569
1,569
1,569
CONTRIBUTOR
null
Based on the discussion with @julien-c in #1258, this PR fixes the issue of redundant multiple initializations of the embeddings in the constructor of `RobertaEmbeddings` by removing the constructor call of its parent class (i.e., `BertEmbeddings`) and creating `token_type_embeddings`, `LayerNorm`, and `dropout` in the constructor.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1366/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1366", "html_url": "https://github.com/huggingface/transformers/pull/1366", "diff_url": "https://github.com/huggingface/transformers/pull/1366.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1366.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1365
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1365/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1365/comments
https://api.github.com/repos/huggingface/transformers/issues/1365/events
https://github.com/huggingface/transformers/issues/1365
499,774,508
MDU6SXNzdWU0OTk3NzQ1MDg=
1,365
Why add the arguments 'head_mask' and when to use this arguments
{ "login": "AMANKB", "id": 21056295, "node_id": "MDQ6VXNlcjIxMDU2Mjk1", "avatar_url": "https://avatars.githubusercontent.com/u/21056295?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AMANKB", "html_url": "https://github.com/AMANKB", "followers_url": "https://api.github.com/users/AMANKB/followers", "following_url": "https://api.github.com/users/AMANKB/following{/other_user}", "gists_url": "https://api.github.com/users/AMANKB/gists{/gist_id}", "starred_url": "https://api.github.com/users/AMANKB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AMANKB/subscriptions", "organizations_url": "https://api.github.com/users/AMANKB/orgs", "repos_url": "https://api.github.com/users/AMANKB/repos", "events_url": "https://api.github.com/users/AMANKB/events{/privacy}", "received_events_url": "https://api.github.com/users/AMANKB/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> **head_mask**: (`optional`) ``torch.FloatTensor`` of shape ``(num_heads,)`` or ``(num_layers, num_heads)``: Mask to nullify selected heads of the self-attention modules. Mask values selected in ``[0, 1]``: ``1`` indicates the head is **not masked**, ``0`` indicates the head is **masked**.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1365/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1364
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1364/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1364/comments
https://api.github.com/repos/huggingface/transformers/issues/1364/events
https://github.com/huggingface/transformers/issues/1364
499,769,288
MDU6SXNzdWU0OTk3NjkyODg=
1,364
Is there any plan for Roberta in SQuAD?
{ "login": "kugwzk", "id": 15382517, "node_id": "MDQ6VXNlcjE1MzgyNTE3", "avatar_url": "https://avatars.githubusercontent.com/u/15382517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kugwzk", "html_url": "https://github.com/kugwzk", "followers_url": "https://api.github.com/users/kugwzk/followers", "following_url": "https://api.github.com/users/kugwzk/following{/other_user}", "gists_url": "https://api.github.com/users/kugwzk/gists{/gist_id}", "starred_url": "https://api.github.com/users/kugwzk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kugwzk/subscriptions", "organizations_url": "https://api.github.com/users/kugwzk/orgs", "repos_url": "https://api.github.com/users/kugwzk/repos", "events_url": "https://api.github.com/users/kugwzk/events{/privacy}", "received_events_url": "https://api.github.com/users/kugwzk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,569
1,569
1,569
NONE
null
## 🚀 Feature <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Hello, thx for the RoBERTa implementation. But I want to know is there any plan for the RoBERTa in SQuAD, because it is complex. And I simple changed the run_squad code as the run_gule code, I got some bugs. And the fairseq doesn't give a official code, too. I really want to know how to use the RoBERTa in SQuAD use the transformers. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Additional context <!-- Add any other context or screenshots about the feature request here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1364/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1363
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1363/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1363/comments
https://api.github.com/repos/huggingface/transformers/issues/1363/events
https://github.com/huggingface/transformers/issues/1363
499,761,986
MDU6SXNzdWU0OTk3NjE5ODY=
1,363
Why the RoBERTa's max_position_embeddings size is 512+2=514?
{ "login": "kugwzk", "id": 15382517, "node_id": "MDQ6VXNlcjE1MzgyNTE3", "avatar_url": "https://avatars.githubusercontent.com/u/15382517?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kugwzk", "html_url": "https://github.com/kugwzk", "followers_url": "https://api.github.com/users/kugwzk/followers", "following_url": "https://api.github.com/users/kugwzk/following{/other_user}", "gists_url": "https://api.github.com/users/kugwzk/gists{/gist_id}", "starred_url": "https://api.github.com/users/kugwzk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kugwzk/subscriptions", "organizations_url": "https://api.github.com/users/kugwzk/orgs", "repos_url": "https://api.github.com/users/kugwzk/repos", "events_url": "https://api.github.com/users/kugwzk/events{/privacy}", "received_events_url": "https://api.github.com/users/kugwzk/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "What's your precise question?", "> What's your precise question?\r\n\r\nthe self.padding_idx's meaning in modeling_roberta.py", "It's the position of the padding vector. It's not unique to RoBERTa but far more general, especially for embeddings. Take a look at [the PyTorch documentation](https://pytorch.org/docs/stable/nn.html#embedding).", "> It's the position of the padding vector. It's not unique to RoBERTa but far more general, especially for embeddings. Take a look at [the PyTorch documentation](https://pytorch.org/docs/stable/nn.html#embedding).\r\n\r\nI know that, but I confuse about why there is 1 and the \\<s\\> is 0, is it ignore and why the max_position_embeddings size is 512+2=514?", "Because that's their index [in the vocab](https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-vocab.json). The max_position_embeddings size is indeed 514, I'm not sure why. The tokenizer seems to handle text correctly with a max of 512. Perhaps someone of the developers can help with that. I would advise you to change the title of your topic.\r\n\r\nhttps://github.com/huggingface/transformers/blob/ae50ad91ea2fedb64ecd2e7c8e2d0d4778dc03aa/transformers/tokenization_roberta.py#L84-L85", "@LysandreJik can chime in if I’m wrong, but afaik `max_position_embeddings` is just the name of the variable that we use to encode the size of the embedding matrix. Max_len is correctly set to 512.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Answer here in case anyone from the future is curious: https://github.com/pytorch/fairseq/issues/1187", "> Answer here in case anyone from the future is curious: [pytorch/fairseq#1187](https://github.com/pytorch/fairseq/issues/1187)\r\n\r\n@morganmcg1 Tks for this, was getting all kinds of CUDA errors because i setted `max_position_embeddings=512`, now that i setted 514 it's running ok..." ]
1,569
1,615
1,575
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> When I see the code of Roberta, I have a question about the padding_idx = 1, I don't know very well. And the comment is still confused for me.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1363/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1362
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1362/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1362/comments
https://api.github.com/repos/huggingface/transformers/issues/1362/events
https://github.com/huggingface/transformers/pull/1362
499,742,934
MDExOlB1bGxSZXF1ZXN0MzIyMzk1MTc1
1,362
fix link
{ "login": "FeiWang96", "id": 19998174, "node_id": "MDQ6VXNlcjE5OTk4MTc0", "avatar_url": "https://avatars.githubusercontent.com/u/19998174?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FeiWang96", "html_url": "https://github.com/FeiWang96", "followers_url": "https://api.github.com/users/FeiWang96/followers", "following_url": "https://api.github.com/users/FeiWang96/following{/other_user}", "gists_url": "https://api.github.com/users/FeiWang96/gists{/gist_id}", "starred_url": "https://api.github.com/users/FeiWang96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FeiWang96/subscriptions", "organizations_url": "https://api.github.com/users/FeiWang96/orgs", "repos_url": "https://api.github.com/users/FeiWang96/repos", "events_url": "https://api.github.com/users/FeiWang96/events{/privacy}", "received_events_url": "https://api.github.com/users/FeiWang96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1362?src=pr&el=h1) Report\n> Merging [#1362](https://codecov.io/gh/huggingface/transformers/pull/1362?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a6a6d9e6382961dc92a1a08d1bab05a52dc815f9?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1362/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1362?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1362 +/- ##\n=======================================\n Coverage 84.69% 84.69% \n=======================================\n Files 84 84 \n Lines 12596 12596 \n=======================================\n Hits 10668 10668 \n Misses 1928 1928\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1362?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1362?src=pr&el=footer). Last update [a6a6d9e...60f7916](https://codecov.io/gh/huggingface/transformers/pull/1362?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "👍 " ]
1,569
1,569
1,569
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1362/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1362", "html_url": "https://github.com/huggingface/transformers/pull/1362", "diff_url": "https://github.com/huggingface/transformers/pull/1362.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1362.patch", "merged_at": 1569659203000 }
https://api.github.com/repos/huggingface/transformers/issues/1361
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1361/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1361/comments
https://api.github.com/repos/huggingface/transformers/issues/1361/events
https://github.com/huggingface/transformers/pull/1361
499,665,321
MDExOlB1bGxSZXF1ZXN0MzIyMzM5MTYx
1,361
distil-finetuning in run_squad
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cf https://github.com/huggingface/transformers/issues/1193#issuecomment-534740929", "Ok, as discussed let's copy this script to the `examples/distillation` folder and keep `run_squad` barebone for now as it's going to evolve in the short term.", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1361?src=pr&el=h1) Report\n> Merging [#1361](https://codecov.io/gh/huggingface/transformers/pull/1361?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2dc8cb87341223e86220516951bb4ad84f880b4a?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1361/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1361?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1361 +/- ##\n=======================================\n Coverage 84.69% 84.69% \n=======================================\n Files 84 84 \n Lines 12596 12596 \n=======================================\n Hits 10668 10668 \n Misses 1928 1928\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1361?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1361?src=pr&el=footer). Last update [2dc8cb8...b4df865](https://codecov.io/gh/huggingface/transformers/pull/1361?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "no squash @VictorSanh? 😬" ]
1,569
1,592
1,570
MEMBER
null
- Add the option for double loss: fine-tuning + distillation from a larger squad-finetune model. - Fix `inputs` for `DistilBERT` (also see fix in `run_glue.py` 702f589848baba97ea4897aa3f0bb937e1ec3bcf)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1361/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1361", "html_url": "https://github.com/huggingface/transformers/pull/1361", "diff_url": "https://github.com/huggingface/transformers/pull/1361.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1361.patch", "merged_at": 1570224196000 }
https://api.github.com/repos/huggingface/transformers/issues/1360
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1360/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1360/comments
https://api.github.com/repos/huggingface/transformers/issues/1360/events
https://github.com/huggingface/transformers/issues/1360
499,626,355
MDU6SXNzdWU0OTk2MjYzNTU=
1,360
Chunking Long Documents for Classification Tasks
{ "login": "anassalamah", "id": 8571003, "node_id": "MDQ6VXNlcjg1NzEwMDM=", "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anassalamah", "html_url": "https://github.com/anassalamah", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "repos_url": "https://api.github.com/users/anassalamah/repos", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm not sure that I understand. As you say, you can see it implemented in the run_squad example. What else would you like? ", "Hello Bram,\r\n\r\nI mean I want to apply it with a sequence classification task like BertForSequenceClassification, for example, versus what is being done in squad.\r\n\r\nI don't think it should be too hard but I'm not exactly sure how a long document that is being chunked gets trained. Do we ignore the fact that these are chunks of the same document and just treat them as independent docs? Or do we do some sort of trick to join the tokens/embeddings with the first chunk? \r\n\r\nHow would this be implemented for sequence classification?", "I quickly glared over the `convert_examples_to_features` function, and it seems that given some stride different parts are used as input. So, yes, as far as I can see they are treated as independent docs.\r\n\r\nhttps://github.com/huggingface/transformers/blob/ae50ad91ea2fedb64ecd2e7c8e2d0d4778dc03aa/examples/utils_squad.py#L189-L397\r\n\r\n", "isn't there a way to deal with long documents without ignoring the fact that the chunks represent the same doc? \r\n\r\nMaybe something along the lines of https://finetune.indico.io/chunk.html?highlight=long or https://explosion.ai/blog/spacy-pytorch-transformers#batching", "After a first look, I don't see how `spacy-pytorch-transformers` does anything special rather than processing a document sentence-per-sentence. `finetune`'s approach might be what you after (taking the mean over all the slided windows), but as always: \"a mean is just a mean\", so the question remains how representative it is of the whole document. I am not saying that slicing is _better_ by any means, but averaging can distort \"real\" values greatly.", "Yeah I see your point. I'm starting to think that maybe trying out chunking with a couple of different strides and maybe at inference time taking a voting approach would be a better option.\r\n\r\nIn any case, thank you for your feedback!", "I agree that that might be the more efficient approach. No worries, thanks for the interesting question. If you think it's okay the question, please close it so it's easy to keep track of all open issues.", "Hi, just to let you know that there is an option to manage strides in the `encode_plus` method. It handles special tokens and returns the overflowing elements in the `overflowing_tokens` field of the returned dictionary." ]
1,569
1,569
1,569
NONE
null
## 🚀 Feature A way to process long documents for downstream classification tasks. One approach is to chunk long sequences with a specific stride similar to what is done in the run_squad example. ## Motivation For classification tasks using datasets that are on average longer than 512 tokens, I believe it would improve performance. ## Additional context https://github.com/google-research/bert/issues/27#issuecomment-435265194
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1360/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1360/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1359
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1359/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1359/comments
https://api.github.com/repos/huggingface/transformers/issues/1359/events
https://github.com/huggingface/transformers/pull/1359
499,583,050
MDExOlB1bGxSZXF1ZXN0MzIyMjc0Nzc1
1,359
Update run_lm_finetuning.py
{ "login": "dennymarcels", "id": 12802916, "node_id": "MDQ6VXNlcjEyODAyOTE2", "avatar_url": "https://avatars.githubusercontent.com/u/12802916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dennymarcels", "html_url": "https://github.com/dennymarcels", "followers_url": "https://api.github.com/users/dennymarcels/followers", "following_url": "https://api.github.com/users/dennymarcels/following{/other_user}", "gists_url": "https://api.github.com/users/dennymarcels/gists{/gist_id}", "starred_url": "https://api.github.com/users/dennymarcels/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dennymarcels/subscriptions", "organizations_url": "https://api.github.com/users/dennymarcels/orgs", "repos_url": "https://api.github.com/users/dennymarcels/repos", "events_url": "https://api.github.com/users/dennymarcels/events{/privacy}", "received_events_url": "https://api.github.com/users/dennymarcels/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1359?src=pr&el=h1) Report\n> Merging [#1359](https://codecov.io/gh/huggingface/transformers/pull/1359?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca559826c4188be8713e46f191ddf5f379c196e7?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1359/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1359?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1359 +/- ##\n=======================================\n Coverage 84.73% 84.73% \n=======================================\n Files 84 84 \n Lines 12573 12573 \n=======================================\n Hits 10654 10654 \n Misses 1919 1919\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1359?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1359?src=pr&el=footer). Last update [ca55982...9478590](https://codecov.io/gh/huggingface/transformers/pull/1359?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Thanks @dennymarcels!" ]
1,569
1,569
1,569
CONTRIBUTOR
null
The previous method, just as phrased, did not exist in the class.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1359/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1359", "html_url": "https://github.com/huggingface/transformers/pull/1359", "diff_url": "https://github.com/huggingface/transformers/pull/1359.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1359.patch", "merged_at": 1569617900000 }
https://api.github.com/repos/huggingface/transformers/issues/1358
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1358/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1358/comments
https://api.github.com/repos/huggingface/transformers/issues/1358/events
https://github.com/huggingface/transformers/issues/1358
499,574,334
MDU6SXNzdWU0OTk1NzQzMzQ=
1,358
How to contribute to “Write with transformer”?
{ "login": "mauceri", "id": 1011775, "node_id": "MDQ6VXNlcjEwMTE3NzU=", "avatar_url": "https://avatars.githubusercontent.com/u/1011775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mauceri", "html_url": "https://github.com/mauceri", "followers_url": "https://api.github.com/users/mauceri/followers", "following_url": "https://api.github.com/users/mauceri/following{/other_user}", "gists_url": "https://api.github.com/users/mauceri/gists{/gist_id}", "starred_url": "https://api.github.com/users/mauceri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mauceri/subscriptions", "organizations_url": "https://api.github.com/users/mauceri/orgs", "repos_url": "https://api.github.com/users/mauceri/repos", "events_url": "https://api.github.com/users/mauceri/events{/privacy}", "received_events_url": "https://api.github.com/users/mauceri/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null }, { "id": 1565794707, "node_id": "MDU6TGFiZWwxNTY1Nzk0NzA3", "url": "https://api.github.com/repos/huggingface/transformers/labels/Write%20With%20Transformer", "name": "Write With Transformer", "color": "a84bf4", "default": false, "description": "" } ]
closed
false
null
[]
[ "What is it that you can contribute? The only (yet impressive) thing that is going on is language modeling. Can you contribute a pre-trained French model for one of the frameworks? That's (as far as I know) the only way to contribute. ", "Thanks Bram, I’m going to investigate what the cost could be for XLNet on clevergrid https://www.clevergrid.io/?pk_campaign=ga-gpu-1&pk_source=adwords&pk_medium=sem&pk_content=gpuasaservicefr&gclid=CjwKCAjwibzsBRAMEiwA1pHZrvm8ozRMrbcDR7YoYiKqsq6gEnPo9AecJwjKzBxa8L-4_hB6ny4uARoCwfMQAvD_BwE\n\nEnvoyé de mon iPad\n\n> Le 28 sept. 2019 à 09:44, Bram Vanroy <[email protected]> a écrit :\n> \n> What is it that you can contribute? The only (yet impressive) thing that is going on is language modeling. Can you contribute a pre-trained French model for one of the frameworks? That's (as far as I know) the only way to contribute.\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Hi all,\r\nWe (ovh) are open to calculate it for free.", "Not sure if a new French language model is still necessary after Camembert has been introduced.", "That's awesome news, @jqueguiner! Let us know if we can help.\r\n\r\n@BramVanroy To work well with Write With Transformer, we would want more like a FR-pretrained GPT-2-like model. CamemBERT wouldn't do on generation out of the box.\r\n\r\nSee also the more specific issue: https://github.com/huggingface/transformers/issues/1356", "For generation CamemBERT is of no use I think...\n\nEnvoyé de mon iPad\n\n> Le 22 nov. 2019 à 14:02, Bram Vanroy <[email protected]> a écrit :\n> \n> \n> Not sure if a new French language model is still necessary after Camembert has been introduced.\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "Yes, CamemBERT is awesome, but for WWT we need a FR-Pretrained GPT-2 model!\n\nEnvoyé de mon iPad\n\n> Le 22 nov. 2019 à 14:55, Julien Chaumond <[email protected]> a écrit :\n> \n> \n> That's awesome news, @jqueguiner! Let us know if we can help.\n> \n> @BramVanroy To work well with Write With Transformer, we would want more like a FR-pretrained GPT-2-like model. CamemBERT wouldn't do on generation out of the box.\n> \n> See also the more specific issue: #1356\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "Camembert doesnt offer generation, only syntax analysis and masking due to the nature of the network. Multiple mask generation (<mask><mask><mask>) gives really uggly results as you cna test here : https://market-place.ai.ovh.net/#!/apis/43323c37-59e7-4092-b23c-3759e7c09288/pages/94d31892-4e64-446f-9318-924e64346f9e\r\n\r\nIMO we should start training using OSCAR dataset \r\nhttps://traces1.inria.fr/oscar/\r\n\r\n@julien-c yes we can start with a collab GPT2 french training ipynb together then I'll prepare the env for a DGX1 or something similar. I didn't train a GPT2 before. IS it scaling over multiple GPU's ? do we need horovod adaptation ?", "Oops, sorry everyone. I thought this was a general French model question. My bad. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,580
1,580
NONE
null
## 🚀 I would like to contribute to a French version of this App I’m French, I write short stories, and I’m also a software engineer ## Motivation I’ll retire in 6 months and I wanted to build such an app before I stumbled on your demo. ## Additional context https://www.linkedin.com/in/mauceri/
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1358/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1358/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1357
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1357/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1357/comments
https://api.github.com/repos/huggingface/transformers/issues/1357/events
https://github.com/huggingface/transformers/issues/1357
499,564,031
MDU6SXNzdWU0OTk1NjQwMzE=
1,357
Support for SuperGLUE fine-tune/eval?
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "So is HuggingFace going to write the finetuning implementation for SuperGlue?", "Hi @jiachangliu, did you have any news about support for superglue?", "> Hi @jiachangliu, did you have any news about support for superglue?\r\n\r\nNo I have not heard any HugginFace support on SuperGlue. It was not urgent for me to run those experiments. However, if you want to run SuperGlue, I guess you need to install JIANT, which uses the model structures built by HuggingFace.", "> > Hi @jiachangliu, did you have any news about support for superglue?\r\n> \r\n> No I have not heard any HugginFace support on SuperGlue. It was not urgent for me to run those experiments. However, if you want to run SuperGlue, I guess you need to install JIANT, which uses the model structures built by HuggingFace.\r\n\r\nThank you !! " ]
1,569
1,605
1,575
MEMBER
null
## 🚀 Feature https://super.gluebenchmark.com/ Current canonical implem is https://github.com/nyu-mll/jiant/ ## Motivation https://twitter.com/_florianmai/status/1177489945918722050
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1357/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1356
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1356/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1356/comments
https://api.github.com/repos/huggingface/transformers/issues/1356/events
https://github.com/huggingface/transformers/issues/1356
499,526,622
MDU6SXNzdWU0OTk1MjY2MjI=
1,356
GPT and BERT pretrained models in French
{ "login": "mauceri", "id": 1011775, "node_id": "MDQ6VXNlcjEwMTE3NzU=", "avatar_url": "https://avatars.githubusercontent.com/u/1011775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mauceri", "html_url": "https://github.com/mauceri", "followers_url": "https://api.github.com/users/mauceri/followers", "following_url": "https://api.github.com/users/mauceri/following{/other_user}", "gists_url": "https://api.github.com/users/mauceri/gists{/gist_id}", "starred_url": "https://api.github.com/users/mauceri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mauceri/subscriptions", "organizations_url": "https://api.github.com/users/mauceri/orgs", "repos_url": "https://api.github.com/users/mauceri/repos", "events_url": "https://api.github.com/users/mauceri/events{/privacy}", "received_events_url": "https://api.github.com/users/mauceri/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Pre-training is indeed a tough pill to swallow. First of all you need a good dataset (does such dataset exist for French?), second you need a lot of processing power. A lot. If a dataset is available (preprocessed, ready to train) then I'd be willing to look into training the model on hardware that I have available. ", "Have you an example of a good dataset prepared for the english language (my experience on such things is limited to training Glove on a cleaned dump of the french wikipedia) ?", "English BERT was trained on Wikipedia and BookCorpus for 1M steps.\r\n\r\nAfter reading throug hthe BERT readme, I have to retract my previous statement, though. I do not have the resources to pretrain such a model. I thought it would be max one week on a V100, but they speak of four days on *4 to 16 cloud TPUs*. I do not possess such power!", "Hi Bram,\n\nI planned to use the French Wikipedia and some Gutenberg famous French works like La comédie humaine for a start, I let you know when I finish to preprocess them. Concerning the hardware I would like to use gpu ec2 spot instances but I do not know how long I’ll have to run them and if it exceeds my meagre financial resources.\n\n\n\nEnvoyé de mon iPad\n\n> Le 28 sept. 2019 à 10:53, Nestor Demeure <[email protected]> a écrit :\n> \n> Have you an example of a good dataset prepared for the english language (my experience on such things is limited to training Glove on a cleaned dump of the french wikipedia) ?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "Reading [this](https://cloud.google.com/blog/products/ai-machine-learning/now-you-can-train-ml-models-faster-and-lower-cost-cloud-tpu-pods) comparison post, 16 TPUv2's are about twice as fast as 8x V100's that are in the ec2 instances. I would then guess that you'd have to run training for a week.", "Order of magnitude for the compute cost (on cloud platforms) of pre-training a large model is anywhere between $10k and $100k. That's for one pre-training, and you usually at least start multiple ones to search the hyperparameter space.\r\n\r\nRoBERTa was pre-trained for 24 hours on 1,024 (full size, 32GB) V100s.", "> Order of magnitude for the compute cost (on cloud platforms) of pre-training a large model is anywhere between $10k and $100k. That's for one pre-training, and you usually at least start multiple ones to search the hyperparameter space.\r\n> \r\n> RoBERTa was pre-trained for 24 hours on 1,024 (full size, 32GB) V100s.\r\n\r\nPretty sure that [this](https://media1.tenor.com/images/dbf3ee8c8e92b4c1bd3492636a774dc7/tenor.gif) is applicable for everyone here.", "i made a dataset by converting books from [bibebook](http://www.bibebook.com/) package to text files.\r\nit's a package of 1 700 Créative Commons BY-SA and public domain book in french \r\n\r\n[livre en francais kaggle dataset](https://www.kaggle.com/cedriclacrambe/livres-en-francais)", "Wonderful! Thank you very much!\n\n> Le 30 sept. 
2019 à 12:33, cedspam <[email protected]> a écrit :\n> \n> i made a dataset by converting books from bibebook <http://www.bibebook.com/> to text files.\n> it's a package of 1 700 Créative Commons BY-SA and public domain book in french\n> \n> livre francais kaggle dataset <https://www.kaggle.com/cedriclacrambe/livres-en-francais>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/1356?email_source=notifications&email_token=AAHXAP2JVSBU2KSTRLJI6HDQMHIZJA5CNFSM4I3IELGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD75GLQY#issuecomment-536503747>, or mute the thread <https://github.com/notifications/unsubscribe-auth/AAHXAP7ER7H4ERVY7J7JS7LQMHIZJANCNFSM4I3IELGA>.\n> \n\n", "Hi all,\r\n\r\nI'm currently preparing the `.tfrecords` (both cased and uncased) for a French BERT model (corpus is mainly taken from Wikipedia + OPUS corpora, resulting in ~20GB of text). \r\n\r\nI'll share the results (TF checkpoints + Transformers weights) whenever the training on TPU has finished.\r\n\r\nEvaluation tasks for that model are a bit limited, so I would evaluate the model for PoS tagging and NER (Universal Dependencies and WikiANN) and compare the model with mBERT.", "Great news!\n\nEnvoyé de mon iPad\n\n> Le 5 oct. 2019 à 20:20, Stefan Schweter <[email protected]> a écrit :\n> \n> \n> Hi all,\n> \n> I'm currently preparing the .tfrecords (both cased and uncased) for a French BERT model (corpus is mainly taken from Wikipedia + OPUS corpora, resulting in ~20GB of text).\n> \n> I'll share the results (TF checkpoints + Transformers weights) whenever the training on TPU has finished.\n> \n> Evaluation tasks for that model are a bit limited, so I would evaluate the model for PoS tagging and NER (Universal Dependencies and WikiANN) and compare the model with mBERT.\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "That's awesome @stefan-it. Let us know if we can help.", "I'm training the GPT-2 on corpus of Russian classical literature. I've modified training script to make it more robust and useful. You can find it [here](https://github.com/mgrankin/ru_transformers). ", "Thanks for sharing Mikhail :)\n\nEnvoyé de mon iPad\n\n> Le 7 oct. 2019 à 17:53, Mikhail Grankin <[email protected]> a écrit :\n> \n> \n> I'm training the GPT-2 on corpus of Russian classical literature. I've modified training script to make it more robust and useful. You can find it here.\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or mute the thread.\n", "> Hi all,\r\n> \r\n> I'm currently preparing the `.tfrecords` (both cased and uncased) for a French BERT model (corpus is mainly taken from Wikipedia + OPUS corpora, resulting in ~20GB of text).\r\n> \r\n> I'll share the results (TF checkpoints + Transformers weights) whenever the training on TPU has finished.\r\n> \r\n> Evaluation tasks for that model are a bit limited, so I would evaluate the model for PoS tagging and NER (Universal Dependencies and WikiANN) and compare the model with mBERT.\r\n\r\n@stefan-it Could you explain to me how you trained your model from scratch without using Bert multilingual?\r\n\r\nI would like to train BERT from scratch for a textual base in PT-BR (8GB data). 
Is it possible to use the run_lm_finetuning.py code to perform this process without using the multi-language bert model?\r\n\r\nI already have a vocab.txt for the PT-BR base and I don't want to load initial weights.\r\n\r\nIs there any script or tutorial to perform this process step by step?", "I don’t know if this link https://github.com/facebookresearch/XLM can answer your question. \n\nEnvoyé de mon iPad\n\n> Le 17 oct. 2019 à 20:03, calusbr <[email protected]> a écrit :\n> \n> \n> Hi all,\n> \n> I'm currently preparing the .tfrecords (both cased and uncased) for a French BERT model (corpus is mainly taken from Wikipedia + OPUS corpora, resulting in ~20GB of text).\n> \n> I'll share the results (TF checkpoints + Transformers weights) whenever the training on TPU has finished.\n> \n> Evaluation tasks for that model are a bit limited, so I would evaluate the model for PoS tagging and NER (Universal Dependencies and WikiANN) and compare the model with mBERT.\n> \n> @stefan-it Could you explain to me how you trained your model from scratch without using Bert multilingual?\n> \n> I would like to train BERT from scratch for a textual base in PT-BR (8GB data). Is it possible to use the run_lm_finetuning.py code to perform this process without using the multi-language bert model?\n> \n> I already have a vocab.txt for the PT-BR base and I don't want to load initial weights.\n> \n> Is there any script or tutorial to perform this process step by step?\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "Hi @calusbr,\r\n\r\nI'm using the official Google BERT implementation from [this repository](https://github.com/google-research/bert) on a TPU. Then the trained model TensorFlow model can easily be converted into a Transformers-compatible one (so I can be used with this library).\r\n\r\nRegarding to your question: if you don't want to use and fine-tune the multi-lingual BERT model, you could try to train a model with the official BERT implementation for a few steps (Google Colab has TPU support). Then you can fine-tune this model with `transformers` (or you can try to use the Colab instance) :)", "> Hi all,\r\n> \r\n> I'm currently preparing the `.tfrecords` (both cased and uncased) for a French BERT model (corpus is mainly taken from Wikipedia + OPUS corpora, resulting in ~20GB of text).\r\n> \r\n> I'll share the results (TF checkpoints + Transformers weights) whenever the training on TPU has finished.\r\n> \r\n> Evaluation tasks for that model are a bit limited, so I would evaluate the model for PoS tagging and NER (Universal Dependencies and WikiANN) and compare the model with mBERT.\r\n\r\nHi @stefan-it !\r\nVery happy to know that you will possibly able to share this model with us!\r\nDo you have any update on it?\r\nMany thanks!! :)", "Sure, no problem :)\r\n\r\nI did some experiments with a training corpus size from 16 to 40 GB. I used the same fine-tuning parameters as used in the SciBERT paper/repository. That means training with a sequence length of 128, then fine-tuning with a sequence length of 512.\r\n\r\nUnfortunately, the model trained from scratch is ~ 0.5% worse than the multilingual model on a WikiNER split (80/10/10). In another experiment I used the TensorFlow checkpoint from the multilingual cased model and did training with a sequence length of 128. 
This results in a +0.2% \"boost\" on WikiNER.\r\n\r\nHowever, for PoS tagging the model (trained from scratch) is always better (~0.3%) than the BERT multilingual cased model (I used 4 PoS tagging datasets).\r\n\r\nI'm currently doing more experiments (mainly focussing on training corpus cleaning...) and will report back here :)", "Thanks Stefan !\n\n> Le 4 nov. 2019 à 11:33, Stefan Schweter <[email protected]> a écrit :\n> \n> Sure, no problem :)\n> \n> I did some experiments with a training corpus size from 16 to 40 GB. I used the same fine-tuning parameters as used in the SciBERT paper/repository. That means training with a sequence length of 128, then fine-tuning with a sequence length of 512.\n> \n> Unfortunately, the model trained from scratch is ~ 0.5% worse than the multilingual model on a WikiNER split (80/10/10). In another experiment I used the TensorFlow checkpoint from the multilingual cased model and did training with a sequence length of 128. This results in a +0.2% \"boost\" on WikiNER.\n> \n> However, for PoS tagging the model (trained from scratch) is always better (~0.3%) than the BERT multilingual cased model (I used 4 PoS tagging datasets).\n> \n> I'm currently doing more experiments (mainly focussing on training corpus cleaning...) and will report back here :)\n> \n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub <https://github.com/huggingface/transformers/issues/1356?email_source=notifications&email_token=AAHXAP7ZVWXK4GP236MLDIDQR726NA5CNFSM4I3IELGKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEC6ZPXY#issuecomment-549296095>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AAHXAP3PKUDBDELDVEGMUT3QR726NANCNFSM4I3IELGA>.\n> \n\n", "Thanks for your work @stefan-it. It's nice, but perhaps disappointing, to see that the multilingual models aren't that bad after all. From what I read, the multilingual models were said to perform poorly but from your tests it seems that is not (laways?) the case.", "I think we should wait for CamemBERT then 😅\r\n\r\nhttps://camembert-model.fr/", "Coming soon! cc @louismartin @LysandreJik ", "Two days ago they released on arXiv the [https://128.84.21.199/pdf/1911.03894.pdf](url)\r\n\r\n> I think we should wait for CamemBERT then \r\n> \r\n> https://camembert-model.fr/", "CamemBERT was merged into master: https://github.com/huggingface/transformers/pull/1822\r\n\r\nI'll keep this issue open for GPT.", "Hello, this thread is what I was looking for but I'm not sure I found the answer to my questions:\r\n- how long does it take to go through GPT-2 and BERT in French?\r\n- what configuration of GPUs?\r\n- what size of corpus?\r\n\r\nThanks a lot in advance.", "We trained CamemBERT on 138GB of raw text on 256 GPUs (32 GB Tesla V100) for 1 day.", "Thank you very much for this valuable information !\n\nChristian Mauceri, PhD\nLe 4 déc. 2019 à 16:17 +0100, Louis Martin <[email protected]>, a écrit :\n> We trained CamemBERT on 138GB of raw text on 258 GPUs (32 GB Tesla V100) for 1 day.\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub, or unsubscribe.\n", "> We trained CamemBERT on 138GB of raw text on 258 GPUs (32 GB Tesla V100) for 1 day.\r\n\r\nThanks @louismartin. I find great what your did and published with CamemBERT (I'm French :-) ) and the fact you share as well this kind of information. \r\n\r\nAbout your answer: 258 GPUs Tesla V100... 
waoooooo!!!!!\r\nWhere did you find this power of computation? In [Facebook AI](https://ai.facebook.com)?\r\n\r\nI read in the [Download section of CamemBERT site](https://camembert-model.fr/#download\r\n) that the model has only 110 millions of parameters. Was it worth to train it on 132 GB of data? ", "> Hi all,\r\n> \r\n> I'm currently preparing the `.tfrecords` (both cased and uncased) for a French BERT model (corpus is mainly taken from Wikipedia + OPUS corpora, resulting in ~20GB of text).\r\n> \r\n> I'll share the results (TF checkpoints + Transformers weights) whenever the training on TPU has finished.\r\n> \r\n> Evaluation tasks for that model are a bit limited, so I would evaluate the model for PoS tagging and NER (Universal Dependencies and WikiANN) and compare the model with mBERT.\r\n\r\nHi @stefan-it , do you mind to upload your French Bert check point ? I am interested in your model for generation task. Thanks" ]
1,569
1,606
1,586
NONE
null
## 🚀 Need for GPT and BERT pretrained models in French All models are English-only and the multilingual models are quite poor. ## Motivation Applications like tools for writers and linguists need fully dedicated language support. ## Additional context The computation cost of pretraining models in French is still high and it’s difficult for individuals to afford; I would be glad to take on part of the burden.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1356/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1356/timeline
completed
null
null
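The thread above ends with CamemBERT being merged (#1822) as the French masked LM, while a French GPT-2 remains open. As a rough, illustrative sketch of querying such a checkpoint through this library — the `camembert-base` checkpoint name and the tuple-style model outputs are assumptions of this sketch, not taken from the thread:

```python
# Hedged sketch: fill a masked French sentence with a CamemBERT-style checkpoint.
import torch
from transformers import CamembertTokenizer, CamembertForMaskedLM

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")
model.eval()

text = f"Le camembert est {tokenizer.mask_token} !"
input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=True)])

with torch.no_grad():
    logits = model(input_ids)[0]  # (batch, seq_len, vocab_size)

# Locate the mask position and print the five most likely fillers.
mask_index = (input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top_ids = logits[0, mask_index].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```

As the thread notes, a masked LM like this is useful for analysis and fill-in tasks but is not a substitute for a French GPT-2 when it comes to open-ended generation.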
https://api.github.com/repos/huggingface/transformers/issues/1355
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1355/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1355/comments
https://api.github.com/repos/huggingface/transformers/issues/1355/events
https://github.com/huggingface/transformers/pull/1355
499,506,984
MDExOlB1bGxSZXF1ZXN0MzIyMjEzNzI1
1,355
Fix tensorflow_dataset glue support
{ "login": "agrinh", "id": 2157859, "node_id": "MDQ6VXNlcjIxNTc4NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/2157859?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agrinh", "html_url": "https://github.com/agrinh", "followers_url": "https://api.github.com/users/agrinh/followers", "following_url": "https://api.github.com/users/agrinh/following{/other_user}", "gists_url": "https://api.github.com/users/agrinh/gists{/gist_id}", "starred_url": "https://api.github.com/users/agrinh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agrinh/subscriptions", "organizations_url": "https://api.github.com/users/agrinh/orgs", "repos_url": "https://api.github.com/users/agrinh/repos", "events_url": "https://api.github.com/users/agrinh/events{/privacy}", "received_events_url": "https://api.github.com/users/agrinh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1355?src=pr&el=h1) Report\n> Merging [#1355](https://codecov.io/gh/huggingface/transformers/pull/1355?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ca559826c4188be8713e46f191ddf5f379c196e7?src=pr&el=desc) will **decrease** coverage by `0.04%`.\n> The diff coverage is `50%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1355/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1355?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1355 +/- ##\n==========================================\n- Coverage 84.73% 84.68% -0.05% \n==========================================\n Files 84 84 \n Lines 12573 12592 +19 \n==========================================\n+ Hits 10654 10664 +10 \n- Misses 1919 1928 +9\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1355?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/1355/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy91dGlscy5weQ==) | `46.66% <100%> (+1.21%)` | :arrow_up: |\n| [transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/1355/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2RhdGEvcHJvY2Vzc29ycy9nbHVlLnB5) | `27.98% <47.36%> (+1.76%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1355?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1355?src=pr&el=footer). Last update [ca55982...795b3e7](https://codecov.io/gh/huggingface/transformers/pull/1355?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Nice and clean, thanks a lot @agrinh and @philipp-eisen!", "@thomwolf Happy to help, we're finding this package super useful!" ]
1,569
1,569
1,569
NONE
null
This PR fixes issue #1354. `glue_convert_examples_to_features` assumed that tensorflow_datasets examples contain the features `'sentence1'` and `'sentence2'`. This commit encapsulates the choice of features in the glue processor and uses that to parse examples. Built with @philipp-eisen.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1355/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1355", "html_url": "https://github.com/huggingface/transformers/pull/1355", "diff_url": "https://github.com/huggingface/transformers/pull/1355.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1355.patch", "merged_at": 1569617994000 }
https://api.github.com/repos/huggingface/transformers/issues/1354
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1354/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1354/comments
https://api.github.com/repos/huggingface/transformers/issues/1354/events
https://github.com/huggingface/transformers/issues/1354
499,497,695
MDU6SXNzdWU0OTk0OTc2OTU=
1,354
run_tf_glue.py breaks when changing to a glue dataset different from mrpc
{ "login": "philipp-eisen", "id": 8607233, "node_id": "MDQ6VXNlcjg2MDcyMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8607233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philipp-eisen", "html_url": "https://github.com/philipp-eisen", "followers_url": "https://api.github.com/users/philipp-eisen/followers", "following_url": "https://api.github.com/users/philipp-eisen/following{/other_user}", "gists_url": "https://api.github.com/users/philipp-eisen/gists{/gist_id}", "starred_url": "https://api.github.com/users/philipp-eisen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philipp-eisen/subscriptions", "organizations_url": "https://api.github.com/users/philipp-eisen/orgs", "repos_url": "https://api.github.com/users/philipp-eisen/repos", "events_url": "https://api.github.com/users/philipp-eisen/events{/privacy}", "received_events_url": "https://api.github.com/users/philipp-eisen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed with #1355" ]
1,569
1,569
1,569
NONE
null
## 🐛 Bug - run_tf_glue.py breaks when changing to a glue dataset different from mrpc <!-- Important information --> [run_tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py) breaks when changing to a glue dataset different from `mrpc`, where the features are not called `'sentence1'` and `'sentence2'`. That happens because of the hard-coded accesses to the tensor_dict at https://github.com/huggingface/transformers/blob/ca559826c4188be8713e46f191ddf5f379c196e7/transformers/data/processors/glue.py#L83 The task I am working on is: * [x] an official GLUE/SQuAD task: SST-2 * [ ] my own task or dataset: (give details) ## To Reproduce Steps to reproduce the behavior: 1. Go to https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py#L11 2. Change `mrpc` to `sst-2` 3. 💥BOOM! broken ## Expected behavior * [ ] Handle all glue datasets from `tensorflow_datasets` correctly P.S.: A colleague and I are currently working on a fix and will submit a PR for this issue in the next couple of minutes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1354/timeline
completed
null
null
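The fix in #1355 (the pull request above) moves the choice of feature keys into the GLUE processors, so tasks whose TFDS examples are not named `sentence1`/`sentence2` can be converted too. A hedged sketch of the intended usage for SST-2 — the `glue/sst2` TFDS name and the exact keyword arguments are assumptions based on the example script discussed in the issue:

```python
# Sketch: convert a tensorflow_datasets GLUE split via the task's processor.
import tensorflow_datasets as tfds
from transformers import BertTokenizer, glue_convert_examples_to_features

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

# SST-2 is a single-sentence task; its TFDS features are 'sentence' and 'label',
# which the processor (not hard-coded keys) is now responsible for reading.
data = tfds.load("glue/sst2")
train_dataset = glue_convert_examples_to_features(
    data["train"], tokenizer, max_length=128, task="sst-2"
)
train_dataset = train_dataset.shuffle(100).batch(32)
```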
https://api.github.com/repos/huggingface/transformers/issues/1353
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1353/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1353/comments
https://api.github.com/repos/huggingface/transformers/issues/1353/events
https://github.com/huggingface/transformers/pull/1353
499,493,724
MDExOlB1bGxSZXF1ZXN0MzIyMjAyOTIw
1,353
Fix some typos
{ "login": "pjpjq", "id": 17057603, "node_id": "MDQ6VXNlcjE3MDU3NjAz", "avatar_url": "https://avatars.githubusercontent.com/u/17057603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pjpjq", "html_url": "https://github.com/pjpjq", "followers_url": "https://api.github.com/users/pjpjq/followers", "following_url": "https://api.github.com/users/pjpjq/following{/other_user}", "gists_url": "https://api.github.com/users/pjpjq/gists{/gist_id}", "starred_url": "https://api.github.com/users/pjpjq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pjpjq/subscriptions", "organizations_url": "https://api.github.com/users/pjpjq/orgs", "repos_url": "https://api.github.com/users/pjpjq/repos", "events_url": "https://api.github.com/users/pjpjq/events{/privacy}", "received_events_url": "https://api.github.com/users/pjpjq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "👍" ]
1,569
1,569
1,569
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1353/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1353", "html_url": "https://github.com/huggingface/transformers/pull/1353", "diff_url": "https://github.com/huggingface/transformers/pull/1353.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1353.patch", "merged_at": 1569618035000 }
https://api.github.com/repos/huggingface/transformers/issues/1352
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1352/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1352/comments
https://api.github.com/repos/huggingface/transformers/issues/1352/events
https://github.com/huggingface/transformers/issues/1352
499,465,519
MDU6SXNzdWU0OTk0NjU1MTk=
1,352
wwm-bert lm_finetune
{ "login": "yangDDDD", "id": 30787273, "node_id": "MDQ6VXNlcjMwNzg3Mjcz", "avatar_url": "https://avatars.githubusercontent.com/u/30787273?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangDDDD", "html_url": "https://github.com/yangDDDD", "followers_url": "https://api.github.com/users/yangDDDD/followers", "following_url": "https://api.github.com/users/yangDDDD/following{/other_user}", "gists_url": "https://api.github.com/users/yangDDDD/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangDDDD/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangDDDD/subscriptions", "organizations_url": "https://api.github.com/users/yangDDDD/orgs", "repos_url": "https://api.github.com/users/yangDDDD/repos", "events_url": "https://api.github.com/users/yangDDDD/events{/privacy}", "received_events_url": "https://api.github.com/users/yangDDDD/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## 🚀 Feature run_lm_finetuning.py shows how to fine-tune a language model on a dataset. ## Motivation But there isn't an option to fine-tune the whole-word-masking BERT models; I suggest adding it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1352/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1352/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1351
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1351/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1351/comments
https://api.github.com/repos/huggingface/transformers/issues/1351/events
https://github.com/huggingface/transformers/issues/1351
499,451,961
MDU6SXNzdWU0OTk0NTE5NjE=
1,351
SQUAD: V2 referenced at top of Readme; V1 referenced in usage instructions
{ "login": "descartesholland", "id": 2327884, "node_id": "MDQ6VXNlcjIzMjc4ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2327884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/descartesholland", "html_url": "https://github.com/descartesholland", "followers_url": "https://api.github.com/users/descartesholland/followers", "following_url": "https://api.github.com/users/descartesholland/following{/other_user}", "gists_url": "https://api.github.com/users/descartesholland/gists{/gist_id}", "starred_url": "https://api.github.com/users/descartesholland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/descartesholland/subscriptions", "organizations_url": "https://api.github.com/users/descartesholland/orgs", "repos_url": "https://api.github.com/users/descartesholland/repos", "events_url": "https://api.github.com/users/descartesholland/events{/privacy}", "received_events_url": "https://api.github.com/users/descartesholland/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Need to use the flag \r\n--version_2_with_negative" ]
1,569
1,569
1,569
NONE
null
## ❓ Questions & Help There seems to be an inconsistency in the README, namely that run_squad.py is cited to be trained on SQUAD v2 towards the top, but scrolling down to view the command shows that v1 is used. Running the command cited over a copy of the v2 dataset on my machine yields the following error: ``` Traceback (most recent call last): File "./examples/run_squad.py", line 533, in <module> main() File "./examples/run_squad.py", line 478, in main train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False, output_examples=False) File "./examples/run_squad.py", line 291, in load_and_cache_examples version_2_with_negative=args.version_2_with_negative) File "./examples/utils_squad.py", line 151, in read_squad_examples "For training, each question should have exactly 1 answer.") ValueError: For training, each question should have exactly 1 answer. ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1351/timeline
completed
null
null
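As the comment above notes, SQuAD v2.0 data only loads when the reader is told that unanswerable questions are allowed, which is what `--version_2_with_negative` toggles in `examples/run_squad.py`. A small sketch of the same call made directly — it assumes `utils_squad` from the examples directory is importable and that `train-v2.0.json` sits in the working directory:

```python
# Sketch: reading SQuAD v2.0 training data with unanswerable questions enabled.
from utils_squad import read_squad_examples

# Without the flag, v2.0 data raises
# "For training, each question should have exactly 1 answer."
# examples = read_squad_examples("train-v2.0.json", is_training=True,
#                                version_2_with_negative=False)

# With the flag, impossible questions are kept as negative examples.
examples = read_squad_examples(
    input_file="train-v2.0.json",
    is_training=True,
    version_2_with_negative=True,
)
print(f"{len(examples)} training examples loaded")
```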
https://api.github.com/repos/huggingface/transformers/issues/1350
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1350/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1350/comments
https://api.github.com/repos/huggingface/transformers/issues/1350/events
https://github.com/huggingface/transformers/issues/1350
499,414,129
MDU6SXNzdWU0OTk0MTQxMjk=
1,350
Custom models: MixUp Transformers with TF.Keras code
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "The main issue is at line 85 on the forward pass of `TFRobertaMainLayer`:\r\n\r\nhttps://github.com/huggingface/transformers/blob/ca559826c4188be8713e46f191ddf5f379c196e7/transformers/modeling_tf_roberta.py#L85\r\n\r\nIt seems that passing Input placeholders mess up this comparison:\r\n\r\n> OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.\r\n\r\nWhen I comment-out this block of code, the training process works... I can't find any way to by-pass this error without commenting-out though....", "@iliaschalkidis I've also run into this issue when trying to make a plug-and-play wrapper around the numerous TF-compatible models. Like you, I was able to get the RoBERTa model working by hacking around it a bit. Not ideal, but it works.\r\n\r\nFor anyone else that's interested, the line above that raises the error occurs in `TFRobertaMainLayer.call`. You can get around it by wrapping the `call` as a TensorFlow 2.0 `function` whenever you want to use a model that depends on `TFRobertaMainLayer` (which is all of them?). Here I'm using `TFRobertaForSequenceClassification`:\r\n\r\n```python\r\nfrom transformers import TFRobertaForSequenceClassification\r\nimport tensorflow as tf\r\n\r\n# Establish a RoBERTa-based classifier.\r\nclf = TFRobertaForSequenceClassification.from_pretrained(\"roberta-base\", num_labels=5)\r\n\r\n# \"Decorate\" the `call` method as a TensorFlow2.0 function.\r\nclf.roberta.call = tf.function(clf.transformer.roberta.call)\r\n```\r\n\r\nUsing that, I was successfully able to fine-tune the classifier on a multi-GPU setup without much trouble. I still get a ton of warnings from ZMQ and TensorFlow, but I'm not yet sure they're official `transformer` issues. \r\n\r\n**Note:** I suspect you'll have to wrap the `call` instance method any time you initialize this model (e.g., if you save your pre-trained model and re-load for prediction/inference, you may not be able to just use `TFRobertaForSequenceClassification`). In that case, it may be simpler to define a minimal subclass that does it for you. This is untested code, but I suspect it'd work alright:\r\n\r\n```python\r\nimport tensorflow as tf\r\nimport transformers\r\n\r\n\r\nclass _TFRobertaForSequenceClassification(transformers.TFRobertaForSequenceClassification):\r\n \r\n def __init__(self, config, *inputs, **kwargs):\r\n super(_TFRobertaForSequenceClassification, self).__init__(config, *inputs, **kwargs)\r\n self.roberta.call = tf.function(self.roberta.call)\r\n```\r\n\r\nHope this helps!\r\n\r\n---\r\n\r\nUnrelated tip: I also had a bit of trouble using TFv2 metrics (e.g., `tf.keras.metrics.[Precision/Recall/AUC]` because the `TFRobertaClassificationHead` outputs logits (no softmax activation). 
If anybody else is wondering, you can set the classifier head's output layer to use softmax quite easily:\r\n\r\n```python\r\n# Continuing from the previous setup.\r\nclf.classifier.out_proj.activation = tf.keras.activations.softmax\r\n```\r\n\r\nThis way, you can monitor Precision/Recall/AUC in the call to `clf.compile`:\r\n\r\n```python\r\n# Compile our model.\r\nclf.compile(\r\n optimizer=...,\r\n loss=...,\r\n metrics=[\r\n tf.keras.metrics.CategoricalCrossentropy(from_logits=False),\r\n tf.keras.metrics.Precision(thresholds=.50, name=\"precision\"),\r\n tf.keras.metrics.Recall(thresholds=.50, name=\"recall\"),\r\n tf.keras.metrics.AUC(curve=\"PR\", name=\"auc-pr\")\r\n ]\r\n)\r\n```\r\n\r\nFurthermore, if you want to just fine-tune the classifier layer, you can easily freeze the core RoBERTa layers:\r\n\r\n```python\r\n# Note you have ~125M trainable parameters. This'll take a while!\r\nclf.summary()\r\n\r\n# Freeze core RoBERTa model (embeddings, encoder, pooler).\r\nclf.roberta.trainable = False\r\n\r\n# Note you have ~600K trainable parameters. Much better!\r\nclf.summary()\r\n```", "@dataframing thanx a lot, this was really helpful! I opted to go with a very similar solution...\r\n\r\nDefine a meta-model on top of `TFRobertaModel`:\r\n\r\n```python\r\nimport tensorflow as tf\r\nimport transformers\r\n\r\n\r\nclass ROBERTA(transformers.TFRobertaModel):\r\n\r\n def __init__(self, config, *inputs, **kwargs):\r\n super(ROBERTA, self).__init__(config, *inputs, **kwargs)\r\n self.roberta.call = tf.function(self.roberta.call)\r\n\r\n```\r\n\r\nBuild a wrapper `tf.keras.Model`:\r\n\r\n```python\r\n# Define inputs (token_ids, mask_ids, seg_ids)\r\ntoken_inputs = Input(shape=(None,), name='word_inputs', dtype='int32')\r\nmask_inputs = Input(shape=(None,), name='mask_inputs', dtype='int32')\r\nseg_inputs = Input(shape=(None,), name='seg_inputs', dtype='int32')\r\n\r\n# Load model and collect encodings\r\nroberta = ROBERTA.from_pretrained('roberta-base')\r\nroberta_encodings = roberta([token_inputs, mask_inputs, seg_inputs])[0]\r\n\r\n# Keep [CLS] token encoding\r\ndoc_encoding = tf.squeeze(roberta_encodings[:, 0:1, :], axis=1)\r\n\r\n# Apply dropout\r\ndoc_encoding = Dropout(0.1)(doc_encoding)\r\n\r\n# Final output (projection) layer\r\noutputs = Dense(self.n_classes, activation='sigmoid', name='outputs')(doc_encoding)\r\n\r\n# Wrap-up model\r\nmodel = Model(inputs=[word_inputs, mask_inputs, seg_inputs], outputs=[outputs])\r\nmodel.compile(optimizer=Adam(lr=3e-4), loss='binary_crossentropy')\r\n```\r\n\r\nEverything works like a charm, except the annoying warnings. Although working on a single RTX 2080Ti or any other 12GB GPU has a limitation of batch size up to 4-5 samples of 512 subword units (the same applies for BERT), while I was able to go up to 8 when I was calling `bert-base` via Tensorflow Hub and wrap it as Keras layer, which is really weird... Any idea why, moving to transformers library and TF2 will make such a different?", "Thanks for the report.\r\nWe can probably get rid of this test in the TF version of RoBERTa if it's a blocking element for integrating with other Keras modules.\r\nI've never been a huge fan of this hacky solution anyway. 
In the future, we should probably move forward with a breaking change in the tokenizers and have control tokens included by default in the tokenizer encoding output instead of having them as an option.\r\ncc @LysandreJik @julien-c ", "@thomwolf Having the tokenizers include special tokens in the call to `tokenizer.encode[_plus]` seems like a pretty safe default, but I think it also makes sense to have this inline inspection to make sure that the end user has properly encoded their tokens. Wrapping the method in a `tf.function` like above call seems to make it work fine as-is, so maybe there's a way to have the best of both worlds?", "@dataframing BERT and ROBERTa work like a charm with the tweaks you proposed, although with XLNet I still have issues:\r\n\r\n```python\r\n# Define token ids as inputs\r\nword_inputs = Input(batch_shape=(2, 2000), name='word_inputs', dtype='int32')\r\n\r\n# Call XLNet model\r\nxlnet = TFXLNetModel.from_pretrained('xlnet-base-cased')\r\nxlnet_encodings = xlnet(word_inputs)\r\n\r\n# Collect last hidden step (CLS)\r\ndoc_encoding = tf.squeeze(xlnet_encodings[:, -1:, :], axis=1)\r\n\r\n# Apply dropout\r\ndoc_encoding = Dropout(dropout_rate)(doc_encoding)\r\n\r\n# Final output (projection) layer\r\noutputs = Dense(n_classes, activation='softmax', name='outputs')(doc_encoding)\r\n\r\n# Compile model\r\nmodel = Model(inputs=[word_inputs], outputs=[outputs])\r\nmodel.compile(optimizer=Adam(lr=lr, loss='categorical_crossentropy'))\r\n```\r\n\r\n> xlnet_encodings = xlnet(word_inputs)\r\n> .../tensorflow_core/python/keras/engine/base_layer.py\", line 842, in __call__\r\n> outputs = call_fn(cast_inputs, *args, **kwargs)\r\n> .../tensorflow_core/python/autograph/impl/api.py\", line 237, in wrapper\r\n> raise e.ag_error_metadata.to_exception(e) AttributeError: in converted code:\r\n> relative to .../transformers/modeling_tf_xlnet.py:810 call *\r\n> outputs = self.transformer(inputs, **kwargs)\r\n> tensorflow_core/python/keras/engine/base_layer.py:874 __call__\r\n> inputs, outputs, args, kwargs)\r\n> tensorflow_core/python/keras/engine/base_layer.py:2038 _set_connectivity_metadata_\r\n> input_tensors=inputs, output_tensors=outputs, arguments=arguments)\r\n> tensorflow_core/python/keras/engine/base_layer.py:2068 _add_inbound_node\r\n> arguments=arguments)\r\n> tensorflow_core/python/keras/engine/node.py:110 __init__\r\n> self.output_shapes = nest.map_structure(backend.int_shape, output_tensors)\r\n> tensorflow_core/python/util/nest.py:535 map_structure\r\n> structure[0], [func(*x) for x in entries],\r\n> tensorflow_core/python/util/nest.py:535 <listcomp>\r\n> structure[0], [func(*x) for x in entries],\r\n> tensorflow_core/python/keras/backend.py:1185 int_shape\r\n> shape = x.shape\r\n> AttributeError: 'NoneType' object has no attribute 'shape'\r\n\r\nPretty much the same story happens using the `TFXLNetForSequenceClassification` class:\r\n\r\n```python\r\n# Call TFXLNetForSequenceClassification model\r\nmodel = TFXLNetForSequenceClassification.from_pretrained('xlnet-base-cased', num_labels=n_classes)\r\n\r\n# Amend activation functions\r\nmodel.logits_proj.activation = tf.keras.activations.softmax\r\n\r\n# Compile model\r\nmodel.compile(optimizer=Adam(lr=lr, loss='categorical_crossentropy'))\r\n```\r\n\r\n> File .../tensorflow_core/python/keras/engine/training.py\", line 2709, in _set_inputs\r\n> outputs = self(inputs, **kwargs)\r\n> File .../tensorflow_core/python/keras/engine/base_layer.py\", line 842, in __call__\r\n> outputs = call_fn(cast_inputs, *args, **kwargs)\r\n> 
File .../tensorflow_core/python/autograph/impl/api.py\", line 237, in wrapper\r\n> raise e.ag_error_metadata.to_exception(e)\r\n> TypeError: in converted code:\r\n> transformers/modeling_tf_xlnet.py:916 call *\r\n> output = self.sequence_summary(output)\r\n> tensorflow_core/python/keras/engine/base_layer.py:842 __call__\r\n> outputs = call_fn(cast_inputs, *args, **kwargs)\r\n> transformers/modeling_tf_utils.py:459 call *\r\n> output = self.first_dropout(output)\r\n> tensorflow_core/python/autograph/impl/api.py:396 converted_call\r\n> return py_builtins.overload_of(f)(*args)\r\n> TypeError: 'NoneType' object is not callable", "In your case, it might be because you are not extracting the hidden states from the model tuple output.\r\nThis line: `xlnet_encodings = xlnet(word_inputs)`\r\nShould be like this:\r\n```\r\noutputs = xlnet(word_inputs)\r\nxlnet_encodings = outputs[0]\r\n```\r\n\r\nI'm working on adding some tests on this integration with other Keras modules here: #1482", "Hi @thomwolf,\r\n\r\nEven with this update it keeps producing the exact same error. The actual error happens internally on TF2, when the abstract `keras.Layer` calls the Autograph API to do some adjustments. This actually parse the whole network layer by layer and convert the `call()` functions for some reason. It fails in the very end, when it tries to convert the final (outer) call of the `TFXLNetMainLayer`:\r\n\r\n```python\r\noutputs = self.transformer(inputs, **kwargs)\r\n```\r\nThe main reason, as I see it through debugging, is the fact that you return by default as part of the outputs a list called `new_mems`. This returns a list of `None`, if the user do not provide such an input, that later the internal Keras engine cannot handle, because the elements of this list lack of shape and lead to the aforementioned error `AttributeError: 'NoneType' object has no attribute 'shape'`. \r\n\r\nThe only way to surpass this at this stage, is again with some hacking in line 653 of `modeling_tf_xlnet.py`:\r\n\r\n```python\r\noutputs = (tf.transpose(output, perm=(1, 0, 2)), new_mems)\r\n```\r\n\r\nto\r\n\r\n```python\r\noutputs = tf.transpose(output, perm=(1, 0, 2))\r\n```\r\n\r\nProbably, if I pass memories as an input in `TFXLNetModel`, this won't happen any more and I'll avoid hacking. Could you please remind me the notion of memories and how should I pass this information when I'm calling the model? Is this a single integer denoting how many steps back can the Transformer-XL use?", "In two words memories are cached hidden-states to be reused to speed up or allow for longer sentence inputs. The best to understand the notion of memory is to read the Transformer-XL paper which is here: http://arxiv.org/abs/1901.02860\r\n\r\nWe have a couple of models outputting memories and it seems to be a problem for Keras indeed (GPT-2) has the same.\r\n\r\nSo the best (non-breaking) solution is probably to add a flag in the configuration that you can set to False to avoid outputting memories or cache. ", "Great, I read Transformer-XL a few months ago. Maybe if I pass memories as input, I'll avoid this error, and probably I have to do so, if i want the model to act as a real Transformer-XL and not forget all previous timesteps at each segment... What's the specification for `mems`: a tensor of shape (batch_size, ) including integers (e.g., 200 steps back) for the memory length?", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. 
Thank you for your contributions.\n", "Hi,\r\nWhy not just take the language model layer from the transformer using `roberta_model = roberta_model.layers[0]` and then build on top of it? \r\n\r\n\r\n" ]
1,569
1,646
1,576
NONE
null
Ideally I would like to use `TFRobertaModel` or any other model (BERT, XLNet) as parts (modules) of a bigger model. For example, it could be nice to start with Roberta as a document encoder and then build a multi-label classifier on top of that. Possibly there are ways to hack `TFRobertaForSequenceClassification` in order to do multi-label classification using custom configurations, but the point is: **How we could leverage Roberta and any other pre-trained model and stack other layers on top (e.g., I may want to add a custom attention layer or do a hierarchical version of Roberta with a shared Roberta encoder)?** ``` import tensorflow as tf import numpy as np from transformers import TFRobertaModel, RobertaTokenizer from tensorflow.keras.layers import Input, Dense from tensorflow.keras.models import Model # Define input layer inputs = Input(shape=(None, )) # Define Roberta a document encoder roberta_model = TFRobertaModel.from_pretrained('roberta-base') # Collect hidden state representations roberta_encodings = roberta_model(inputs)[0] # Collect CLS representations document_encodings = tf.squeeze(roberta_encodings[:, 0:1, :], axis=1) # Add classification layer (Linear + Sigmoid) outputs = Dense(10, activation='sigmoid')(document_encodings) # Build meta-model model = Model(inputs=[inputs], outputs=[outputs]) # Compile model model.compile(optimizer='adam', loss='binary_crossentropy') # Train model tokenizer = RobertaTokenizer.from_pretrained('roberta-base') x = np.asarray(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :] y = tf.convert_to_tensor(np.zeros((1,10)), dtype=tf.float32) model.fit(x, y) ``` The main issue here is that we can't use an `Input` layer to feed Roberta... Any ideas for a workaround to make this piece of code working...?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1350/timeline
completed
null
null
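Pulling the pieces of the thread above together, a condensed sketch of the working pattern: patch the RoBERTa layer's `call` with `tf.function` so it accepts symbolic Keras inputs, then stack a multi-label head on top of the `<s>` encoding. The layer sizes, dropout, and learning rate are illustrative, not prescribed by the thread:

```python
# Sketch: multi-label classifier built on TFRobertaModel with the tf.function patch.
import tensorflow as tf
from transformers import TFRobertaModel

n_classes = 10

class PatchedRoberta(TFRobertaModel):
    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        # Work around the Python-bool check on symbolic tensors (see thread).
        self.roberta.call = tf.function(self.roberta.call)

token_inputs = tf.keras.Input(shape=(None,), dtype="int32", name="word_inputs")
mask_inputs = tf.keras.Input(shape=(None,), dtype="int32", name="mask_inputs")

roberta = PatchedRoberta.from_pretrained("roberta-base")
encodings = roberta([token_inputs, mask_inputs])[0]       # (batch, seq, hidden)
cls_encoding = tf.squeeze(encodings[:, 0:1, :], axis=1)   # <s> token encoding
cls_encoding = tf.keras.layers.Dropout(0.1)(cls_encoding)
outputs = tf.keras.layers.Dense(n_classes, activation="sigmoid")(cls_encoding)

model = tf.keras.Model(inputs=[token_inputs, mask_inputs], outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5), loss="binary_crossentropy")
```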
https://api.github.com/repos/huggingface/transformers/issues/1349
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1349/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1349/comments
https://api.github.com/repos/huggingface/transformers/issues/1349/events
https://github.com/huggingface/transformers/pull/1349
499,358,473
MDExOlB1bGxSZXF1ZXN0MzIyMDk0NDEw
1,349
Just some typos
{ "login": "ogabrielluiz", "id": 24829397, "node_id": "MDQ6VXNlcjI0ODI5Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/24829397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ogabrielluiz", "html_url": "https://github.com/ogabrielluiz", "followers_url": "https://api.github.com/users/ogabrielluiz/followers", "following_url": "https://api.github.com/users/ogabrielluiz/following{/other_user}", "gists_url": "https://api.github.com/users/ogabrielluiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/ogabrielluiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ogabrielluiz/subscriptions", "organizations_url": "https://api.github.com/users/ogabrielluiz/orgs", "repos_url": "https://api.github.com/users/ogabrielluiz/repos", "events_url": "https://api.github.com/users/ogabrielluiz/events{/privacy}", "received_events_url": "https://api.github.com/users/ogabrielluiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "👍 ", "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1349?src=pr&el=h1) Report\n> Merging [#1349](https://codecov.io/gh/huggingface/transformers/pull/1349?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d83d295763b738aa0c071f8b63ad6e155b6cf515?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1349/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1349?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1349 +/- ##\n=======================================\n Coverage 84.73% 84.73% \n=======================================\n Files 84 84 \n Lines 12573 12573 \n=======================================\n Hits 10654 10654 \n Misses 1919 1919\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1349?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1349?src=pr&el=footer). Last update [d83d295...d2de5b9](https://codecov.io/gh/huggingface/transformers/pull/1349?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,569
1,569
1,569
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1349/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1349", "html_url": "https://github.com/huggingface/transformers/pull/1349", "diff_url": "https://github.com/huggingface/transformers/pull/1349.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1349.patch", "merged_at": 1569582481000 }
https://api.github.com/repos/huggingface/transformers/issues/1348
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1348/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1348/comments
https://api.github.com/repos/huggingface/transformers/issues/1348/events
https://github.com/huggingface/transformers/issues/1348
499,334,082
MDU6SXNzdWU0OTkzMzQwODI=
1,348
Urgent: RoBERTa-Large-MNLI does not work for 2-way classification anymore
{ "login": "wyin-Salesforce", "id": 53835505, "node_id": "MDQ6VXNlcjUzODM1NTA1", "avatar_url": "https://avatars.githubusercontent.com/u/53835505?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wyin-Salesforce", "html_url": "https://github.com/wyin-Salesforce", "followers_url": "https://api.github.com/users/wyin-Salesforce/followers", "following_url": "https://api.github.com/users/wyin-Salesforce/following{/other_user}", "gists_url": "https://api.github.com/users/wyin-Salesforce/gists{/gist_id}", "starred_url": "https://api.github.com/users/wyin-Salesforce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wyin-Salesforce/subscriptions", "organizations_url": "https://api.github.com/users/wyin-Salesforce/orgs", "repos_url": "https://api.github.com/users/wyin-Salesforce/repos", "events_url": "https://api.github.com/users/wyin-Salesforce/events{/privacy}", "received_events_url": "https://api.github.com/users/wyin-Salesforce/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Please choose a better title for your post and specify (or remove) the first part of your post. \r\n\r\nAs far as I can tell this is an issue specific to the mnli model. As you say it's pre-trained with three final out features. When loading the state dict into the model, all weights from the pretrained model are \"transferred\" to the initialized model, this is a one-to-one mapping. Since RobertaForSequenceClassification has a classification head, which you can configure a.o. with `num_labels` it _can_ clash with the classification head of the pretrained model.\r\n\r\nThe intuitive solution would be to just load all weight excluding the classifier - so that torch doesn't try to load those mis-matching states, if and only if the num_labels specified in `from_pretrained` are not the same as the ones inside the models `self.config`. **However**, I'm not sure if that is the right approach, since one method (from_pretrained) then does different things with the same given pretrained model. In one case you use it completely, in the other you only use part.", "Thanks for the hint. But I do not think this is the mnli model problem, because it worked before even the label size is not 3. This is my old log:\r\n\r\n09/11/2019 22:37:46 - INFO - pytorch_transformers.modeling_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-mnli-config.json from cache at /root/.cache/torch/pytorch_transformers/54eef9bf74f919edd81b765fee413c8229620f3e271a51bdcdc67797422ef3f3.233bd69ec613d2ebcb1d55823dfc5b1e109157918e13bdbde6db7f694e1a0039\r\n09/11/2019 22:37:46 - INFO - pytorch_transformers.modeling_utils - Model config {\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"finetuning_task\": null,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 1024,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 4096,\r\n \"layer_norm_eps\": 1e-05,\r\n \"max_position_embeddings\": 514,\r\n \"num_attention_heads\": 16,\r\n \"num_hidden_layers\": 24,\r\n \"num_labels\": 2,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"pruned_heads\": {},\r\n \"torchscript\": false,\r\n \"type_vocab_size\": 1,\r\n \"vocab_size\": 50265\r\n}\r\n\r\n09/11/2019 22:37:46 - INFO - pytorch_transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-mnli-pytorch_model.bin from cache at /root/.cache/torch/pytorch_transformers/1c2e185bc053ae7261ce2289653438a4c05b871ff7f30eaee1cdb787154410e0.c1823b934e18e923174ff260ba955eef25b2205f48fe2655c432a5fb805f8c8a\r\n09/11/2019 22:38:02 - INFO - pytorch_transformers.modeling_utils - Weights of RobertaForSequenceClassification not initialized from pretrained model: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']\r\n09/11/2019 22:38:02 - INFO - pytorch_transformers.modeling_utils - Weights from pretrained model not used in RobertaForSequenceClassification: ['lm_head.weight', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias']\r\n\r\nYou can see that the system can automatically detect which part of the mnli parameters not used to initialize, then it will neglect it; but now the transformers only output error message, I found it from yesterday.\r\n\r\nMy code is the same, but behave differently now", "It's odd since `modeling_utils` hasn't seen any updates apart from the naming update. 
I'm not sure where else to look for this issue.", "My \"pytorch_transformers\" was installed maybe 3 weeks ago, but the latest \"transformers\" was installed yesterday. But both will make the same error, in different lines:\r\n\r\n\"transformers\" in line 411:\r\nFile \"/opt/conda/lib/python3.6/site-packages/transformers/modeling_utils.py\", line 411, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:\r\n\tsize mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).\r\n\tsize mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).\r\n\r\n\"pytorch_transformers\" in line 594:\r\nFile \"/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py\", line 594, in from_pretrained\r\n model.__class__.__name__, \"\\n\\t\".join(error_msgs)))\r\nRuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:\r\n\tsize mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).\r\n\tsize mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).\r\n\r\n\r\nBut both come from the same reason. Since the \"pytorch_transformers\" worked for me a couple days ago (even though i found \"transformer\" did not work for me yesterday, but very likely something changed before yesterday, since I haven't run the piece of code for some days).\r\n\r\nNow, I kind of agree with you that the \"roberta-large-mnli\" model itself had something changed recently, which makes it unable to neglect the mismatch of hyperparameters.\r\n", "Also having this issue using the `roberta-large-mnli` model on a single-document (not paired) multiclass classification task.", "I guess the simplest solution would be to load the model with previous num_labels and than directly change its num_labels and initialize a new classifier layer in the `run_glue.py` script. 
This way you won't need to modify any of the `transformers` code.\r\n\r\nThis is what I do:\r\n```\r\n # for num_labels(mnli)\r\n num_labels_old = config_class.from_pretrained(args.model_name_or_path).num_labels\r\n config = config_class.from_pretrained(args.config_name if args.config_name else args.model_name_or_path,\r\n num_labels=num_labels_old,\r\n finetuning_task=args.task_name,\r\n cache_dir=args.cache_dir if args.cache_dir else None)\r\n tokenizer = tokenizer_class.from_pretrained(args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,\r\n do_lower_case=args.do_lower_case,\r\n cache_dir=args.cache_dir if args.cache_dir else None)\r\n if num_labels != num_labels_old:\r\n config.num_labels = num_labels_old\r\n model = model_class.from_pretrained(args.model_name_or_path,\r\n from_tf=bool('.ckpt' in args.model_name_or_path),\r\n config=config,\r\n cache_dir=args.cache_dir if args.cache_dir else None)\r\n config.num_labels = num_labels\r\n logger.info('Reintializing model classifier layer...')\r\n model.num_labels = num_labels\r\n model.classifier = RobertaClassificationHead(config)\r\n\r\n else:\r\n model = model_class.from_pretrained(args.model_name_or_path,\r\n from_tf=bool('.ckpt' in args.model_name_or_path),\r\n config=config,\r\n cache_dir=args.cache_dir if args.cache_dir else None)\r\n\r\n```\r\n\r\nOf course, it would be better to modify the `transformers` code directly. \r\n", "Hi felicity,\n\nSorry for the late reply. I actually have forgotten how i solve that, or\ngave up. I will check my code when I finish a deadline in the next couple\nof days. Thanks for sharing your experience.\n\nBest.\n\nOn Thu, Nov 28, 2019 at 8:01 AM felicitywang <[email protected]>\nwrote:\n\n> I'm getting the same error. Did you solve this problem? @wyin-Salesforce\n> <https://github.com/wyin-Salesforce> @pmbaumgartner\n> <https://github.com/pmbaumgartner> Would really appreciate it if you\n> could share your solutions. Thank you.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1348?email_source=notifications&email_token=AM2XN4JEMO7KYWJ4OFXFSYDQV7TPFA5CNFSM4I3D6BP2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEFM7XIA#issuecomment-559545248>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AM2XN4IUZKSK4D22E2WI3ZDQV7TPFANCNFSM4I3D6BPQ>\n> .\n>\n\n\n-- \n\n\nWenpeng Yin\nResearch Scientist @ Salesforce Research, Palo Alto\nhttps://sites.google.com/site/yinwenpeng1987/\n", "Thanks @wyin-Salesforce . If people are still having trouble with this, the solution above worked. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I want to evaluate the pre-trained **roberta-large-mnli** model on a **2-way classification task**. 
I tried to imitate what @felicitywang posted by adding these four lines after calling config/tokenizer/model in run_glue.py (after line 134):\r\n```\r\n num_labels = 2 # ADDED\r\n config.num_labels = num_labels # ADDED\r\n model.num_labels = num_labels # ADDED\r\n model.classifier = RobertaClassificationHead(config) # ADDED\r\n```\r\n\r\nHowever, I'm still getting the following error (from line 131) when I run my modified run_glue.py:\r\n```\r\nRuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:\r\n\tsize mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).\r\n\tsize mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).\r\n```\r\n\r\nwhere the line 131 is the last line ('cache_dir=...') of this code block:\r\n```\r\n model = AutoModelForSequenceClassification.from_pretrained(\r\n model_args.model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n config=config,\r\n cache_dir=model_args.cache_dir,\r\n )\r\n```\r\n\r\nDoes anyone know how to make it work?", "> I want to evaluate the pre-trained **roberta-large-mnli** model on a **2-way classification task**. I tried to imitate what @felicitywang posted by adding these four lines after calling config/tokenizer/model in run_glue.py (after line 134):\r\n> \r\n> ```\r\n> num_labels = 2 # ADDED\r\n> config.num_labels = num_labels # ADDED\r\n> model.num_labels = num_labels # ADDED\r\n> model.classifier = RobertaClassificationHead(config) # ADDED\r\n> ```\r\n> \r\n> However, I'm still getting the following error (from line 131) when I run my modified run_glue.py:\r\n> \r\n> ```\r\n> RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:\r\n> \tsize mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).\r\n> \tsize mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).\r\n> ```\r\n> \r\n> where the line 131 is the last line ('cache_dir=...') of this code block:\r\n> \r\n> ```\r\n> model = AutoModelForSequenceClassification.from_pretrained(\r\n> model_args.model_name_or_path,\r\n> from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n> config=config,\r\n> cache_dir=model_args.cache_dir,\r\n> )\r\n> ```\r\n> \r\n> Does anyone know how to make it work?\r\n\r\n@scarletcho \r\nIf you just want to evaluate the pretrained roberta-large-mnli on a new dataset without any fine-tuning; let's say your new dataset has two classes \"entail\" and \"non_entail\", then you just manually combine the outputs \"neutral\" and \"contradict\" as a single output \"non_entail\".\r\n\r\nIf you want to load this pretrained model and fine-tune on your 2-way dataset, today I just found the following approach works for using N-way fine-tuning:\r\n\r\n` model_config = BartConfig.from_pretrained(pretrain_model_dir)\r\n model_config.num_labels=new_num_labels\r\n model = BartForSequenceClassification.from_pretrained(pretrain_model_dir, config=model_config)`\r\n \r\nI tried Bart, but it should work for roberta too (here \"pretrain_model_dir\" is string \"facebook/bart-large\", you can use \"roberta-large-mnli\" instead)", "> I guess the simplest solution would be to load the model with previous 
num_labels and than directly change its num_labels and initialize a new classifier layer in the `run_glue.py` script. This way you won't need to modify any of the `transformers` code.\r\n> \r\n> This is what I do:\r\n> \r\n> ```\r\n> # for num_labels(mnli)\r\n> num_labels_old = config_class.from_pretrained(args.model_name_or_path).num_labels\r\n> config = config_class.from_pretrained(args.config_name if args.config_name else args.model_name_or_path,\r\n> num_labels=num_labels_old,\r\n> finetuning_task=args.task_name,\r\n> cache_dir=args.cache_dir if args.cache_dir else None)\r\n> tokenizer = tokenizer_class.from_pretrained(args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,\r\n> do_lower_case=args.do_lower_case,\r\n> cache_dir=args.cache_dir if args.cache_dir else None)\r\n> if num_labels != num_labels_old:\r\n> config.num_labels = num_labels_old\r\n> model = model_class.from_pretrained(args.model_name_or_path,\r\n> from_tf=bool('.ckpt' in args.model_name_or_path),\r\n> config=config,\r\n> cache_dir=args.cache_dir if args.cache_dir else None)\r\n> config.num_labels = num_labels\r\n> logger.info('Reintializing model classifier layer...')\r\n> model.num_labels = num_labels\r\n> model.classifier = RobertaClassificationHead(config)\r\n> \r\n> else:\r\n> model = model_class.from_pretrained(args.model_name_or_path,\r\n> from_tf=bool('.ckpt' in args.model_name_or_path),\r\n> config=config,\r\n> cache_dir=args.cache_dir if args.cache_dir else None)\r\n> ```\r\n> \r\n> Of course, it would be better to modify the `transformers` code directly.\r\n\r\nHi,\r\n\r\nI am using this code to solve this issue. What is `RobertaClassificationHead(config)` ? I cannot find this from huggingface.", "> > I want to evaluate the pre-trained **roberta-large-mnli** model on a **2-way classification task**. 
I tried to imitate what @felicitywang posted by adding these four lines after calling config/tokenizer/model in run_glue.py (after line 134):\r\n> > ```\r\n> > num_labels = 2 # ADDED\r\n> > config.num_labels = num_labels # ADDED\r\n> > model.num_labels = num_labels # ADDED\r\n> > model.classifier = RobertaClassificationHead(config) # ADDED\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > However, I'm still getting the following error (from line 131) when I run my modified run_glue.py:\r\n> > ```\r\n> > RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification:\r\n> > \tsize mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).\r\n> > \tsize mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > where the line 131 is the last line ('cache_dir=...') of this code block:\r\n> > ```\r\n> > model = AutoModelForSequenceClassification.from_pretrained(\r\n> > model_args.model_name_or_path,\r\n> > from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\r\n> > config=config,\r\n> > cache_dir=model_args.cache_dir,\r\n> > )\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > Does anyone know how to make it work?\r\n> \r\n> @scarletcho\r\n> If you just want to evaluate the pretrained roberta-large-mnli on a new dataset without any fine-tuning; let's say your new dataset has two classes \"entail\" and \"non_entail\", then you just manually combine the outputs \"neutral\" and \"contradict\" as a single output \"non_entail\".\r\n> \r\n> If you want to load this pretrained model and fine-tune on your 2-way dataset, today I just found the following approach works for using N-way fine-tuning:\r\n> \r\n> ` model_config = BartConfig.from_pretrained(pretrain_model_dir) model_config.num_labels=new_num_labels model = BartForSequenceClassification.from_pretrained(pretrain_model_dir, config=model_config)`\r\n> \r\n> I tried Bart, but it should work for roberta too (here \"pretrain_model_dir\" is string \"facebook/bart-large\", you can use \"roberta-large-mnli\" instead)\r\n\r\nIt is ok to use roberta-large, but it stills has the error in roberta-large-mnli.", "I fix the issue when I use `transformers=2.3.0` and I put num_labels in the config and then put the config into the model.", "I'm using `transformers=4.20.1` and [this example code](https://github.com/huggingface/transformers/blob/24a85cca61fda92b9376fe45da1dcb10c8853066/examples/pytorch/text-classification/run_glue.py) (the most recent commit that passed all the automated testing) and I'm still running into this error.\r\n\r\nIn the code, it looks like they do [add the num_labels](https://github.com/huggingface/transformers/issues/1348#issuecomment-888779209) to the config, and then put the config into the model, but I'm still getting the error.\r\n\r\nThe exact command I'm running is `python run_glue.py --train_file sg_train_dataset.csv --validation_file sg_test_dataset.csv --do_train --do_eval --model_name roberta-large-mnli --output_dir output --overwrite_output_dir` where `sg_test_dataset.csv` is a CSV file with three columns, \"sentence1\", \"sentence2\" and \"label\" and \"label\" is either 0 or 1.\r\n\r\nAny suggestions 
on how to fix it?" ]
1,569
1,657
1,581
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (RoBERTa): Language I am using the model on (English): The problem arise when using: * [ ] the official example scripts: (give details) * [ ] my own modified scripts: (give details) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (RTE) ## To Reproduce Steps to reproduce the behavior: pretrain_model_dir = 'roberta-large-mnli' #'roberta-large' , 'roberta-large-mnli' model = RobertaForSequenceClassification.from_pretrained(pretrain_model_dir, num_labels=2) It will have error message as follows: > model = RobertaForSequenceClassification.from_pretrained(pretrain_model_dir, num_labels=num_labels) File "/opt/conda/lib/python3.6/site-packages/transformers/modeling_utils.py", line 411, in from_pretrained model.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification: size mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]). size mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]). This only happened yesterday when I used the pretrained 3-way roberta-large-mnli model for a 2-way classification task; seems like the a bug in initializating or neglecting the classifier's parameters <!-- Add any other context about the problem here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1348/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1348/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1347/comments
https://api.github.com/repos/huggingface/transformers/issues/1347/events
https://github.com/huggingface/transformers/issues/1347
499,321,543
MDU6SXNzdWU0OTkzMjE1NDM=
1,347
Use PyTorch's GELU activation
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
COLLABORATOR
null
## 🚀 Feature PyTorch 1.2 provides a built-in, GPU-accelerated GELU function at `torch.nn.functional.gelu`. Reading through the merged pull request (https://github.com/pytorch/pytorch/pull/20665) it seems that this is optimised for CUDA, too. Therefore I would propose trying to import the built-in gelu function first, and use the back-off gelu definition if it's not found for torch < 1.2. ## Additional context I started _very_ basic changes over at https://github.com/BramVanroy/transformers/tree/pytorch_gelu by changing the gelu definition in e.g. BERT to something like ```python def gelu(x): """ Original Implementation of the gelu activation function in Google Bert repo when initialy created. For information: OpenAI GPT's gelu is slightly different (and gives slightly different results): 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) Also see https://arxiv.org/abs/1606.08415 """ return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) ACT2FN = {"relu": torch.nn.functional.relu, "swish": swish, "gelu_new": gelu_new} try: ACT2FN["gelu"] = torch.nn.functional.gelu except AttributeError: ACT2FN["gelu"] = gelu ``` However, I wonder whether it wouldn't be cleaner to have all activation functions in an importable constant `ACT2FN` somewhere. Maybe under `modeling_utils`? This should make it easier to keep a good overview of all activation functions that can be used. If requested, I can put some time in refactoring this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1347/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1347/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1346/comments
https://api.github.com/repos/huggingface/transformers/issues/1346/events
https://github.com/huggingface/transformers/pull/1346
499,299,039
MDExOlB1bGxSZXF1ZXN0MzIyMDQ2NTcy
1,346
Add small note about the output of hidden states (closes #1332)
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1346?src=pr&el=h1) Report\n> Merging [#1346](https://codecov.io/gh/huggingface/transformers/pull/1346?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/da2e47ad15e552b84815da20daf3282b517103f7?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1346/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1346?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1346 +/- ##\n=======================================\n Coverage 84.73% 84.73% \n=======================================\n Files 84 84 \n Lines 12573 12573 \n=======================================\n Hits 10654 10654 \n Misses 1919 1919\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1346?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1346?src=pr&el=footer). Last update [da2e47a...15749bf](https://codecov.io/gh/huggingface/transformers/pull/1346?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Awesome, thanks @BramVanroy!" ]
1,569
1,569
1,569
COLLABORATOR
null
Closes huggingface/transformers#1332
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1346/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1346", "html_url": "https://github.com/huggingface/transformers/pull/1346", "diff_url": "https://github.com/huggingface/transformers/pull/1346.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1346.patch", "merged_at": 1569573008000 }
https://api.github.com/repos/huggingface/transformers/issues/1345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1345/comments
https://api.github.com/repos/huggingface/transformers/issues/1345/events
https://github.com/huggingface/transformers/issues/1345
499,291,501
MDU6SXNzdWU0OTkyOTE1MDE=
1,345
Ram utilisation of DistilBERT
{ "login": "008karan", "id": 18630864, "node_id": "MDQ6VXNlcjE4NjMwODY0", "avatar_url": "https://avatars.githubusercontent.com/u/18630864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/008karan", "html_url": "https://github.com/008karan", "followers_url": "https://api.github.com/users/008karan/followers", "following_url": "https://api.github.com/users/008karan/following{/other_user}", "gists_url": "https://api.github.com/users/008karan/gists{/gist_id}", "starred_url": "https://api.github.com/users/008karan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/008karan/subscriptions", "organizations_url": "https://api.github.com/users/008karan/orgs", "repos_url": "https://api.github.com/users/008karan/repos", "events_url": "https://api.github.com/users/008karan/events{/privacy}", "received_events_url": "https://api.github.com/users/008karan/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> I was checking the memory consumption of RoBERTa and DistilBERT. I found there is no significant change in memory usage. Although Inference time is around 1sec for DistilBERT and for RoBERTa is 2sec. Memory usage on CPU: Port 9000: DistilBERT Port 9002: RoBERTa ![compute](https://user-images.githubusercontent.com/18630864/65751708-e8ab6980-e128-11e9-93a7-937cc0211009.png) Have you guys seen any significant change in memory usage or am I missing something here?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1345/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1344/comments
https://api.github.com/repos/huggingface/transformers/issues/1344/events
https://github.com/huggingface/transformers/issues/1344
499,173,502
MDU6SXNzdWU0OTkxNzM1MDI=
1,344
Errors when using fp16 with traced models
{ "login": "chessgecko", "id": 1816945, "node_id": "MDQ6VXNlcjE4MTY5NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/1816945?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chessgecko", "html_url": "https://github.com/chessgecko", "followers_url": "https://api.github.com/users/chessgecko/followers", "following_url": "https://api.github.com/users/chessgecko/following{/other_user}", "gists_url": "https://api.github.com/users/chessgecko/gists{/gist_id}", "starred_url": "https://api.github.com/users/chessgecko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chessgecko/subscriptions", "organizations_url": "https://api.github.com/users/chessgecko/orgs", "repos_url": "https://api.github.com/users/chessgecko/repos", "events_url": "https://api.github.com/users/chessgecko/events{/privacy}", "received_events_url": "https://api.github.com/users/chessgecko/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Has this ever been solved? I have the same issue", "I think maybe the reason is `.half()` only change(cast) the data, but it is not traceable. And there is no fp32 -> fp16 in python code, the function will expect fp32 input instead of fp16.", "Just confirming I am still using this in production and never found a solution. If there is an easy solution here I'd happily pay a small bounty for the information.", "Just stumbled across this trying to look for anything talking about using fp16 precision with torchscript. Converting the model and inputs to half seems to work. I get a lot higher warnings about loss of precision with torchscript when I have `use_fp16=True`. Not sure if I'm being paranoid with the `torch.no_grad()` statement, I don't know if it'll do that internally within `torch.jit.trace` but I couldn't see anything about it.\r\n\r\n```python\r\nmodel = model.cuda()\r\nmodel.eval()\r\n\r\nwith torch.no_grad():\r\n inputs = torch.randn(input_shape, device='cuda')\r\n \r\n if use_fp16:\r\n model = model.half()\r\n inputs = inputs.half()\r\n\r\n traced_model = torch.jit.trace(model, inputs)\r\n```", "It's kind of strange. When I don't check the trace during tracing and call inference without ``torch.no_grad()`` it does actually work (but consumes way too much memory of course because of gradient computations).\r\n\r\n```\r\nmodel.half()\r\nmodel = torch.jit.trace(model, (dummy_input, dummy_input, dummy_input), check_trace=False)\r\n\r\noutputs = model(inputs)\r\n```\r\n\r\nActually I also have another issue with TorchScript, because I cannot feed the inputs as dict during tracing. In the case of BERT it then somehow uses ``input_ids, attention_mask, inputs`` as input names instead of ``input_ids, attention_mask, token_type_ids``.", "For someone who encounters the same problem, this issue is fixed in torch 1.5 and present back in torch 1.6." ]
1,569
1,599
1,575
NONE
null
## 🐛 Bug When I run ``` roberta_model = RobertaForMaskedLM.from_pretrained("roberta-base", torchscript=True) roberta_model.cuda() roberta_model.half() traced_model = torch.jit.trace(roberta_model, (r_input_ids)) ``` I get the following error ` Expected object of scalar type Float but got scalar type Half for argument #2 'mat2' ` When I attempt to load a normally traced model ``` loaded_model = torch.jit.load("traced_roberta_cuda.pt") loaded_model.cuda() loaded_model.half() loaded_model(r_input_ids ) ``` I get `RuntimeError: expected device cuda:0 and dtype Float but got device cuda:0 and dtype Half ` Is there a way to use fp16 with traced models? It happened with BertForSequenceClassification, RobertaForSequenceClassification and RobertaForMaskedLM. ## Environment * Models tested on: Bert and Roberta: * Language: English * OS: Ubuntu 18.04 * Python version: 3.6.9 * PyTorch version: 1.2.0 * PyTorch Transformers version (or branch): 2.0.0
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1344/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1344/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1343/comments
https://api.github.com/repos/huggingface/transformers/issues/1343/events
https://github.com/huggingface/transformers/issues/1343
499,092,393
MDU6SXNzdWU0OTkwOTIzOTM=
1,343
RobertaTokenizer documentation is off with the new transformers library
{ "login": "cformosa", "id": 13603877, "node_id": "MDQ6VXNlcjEzNjAzODc3", "avatar_url": "https://avatars.githubusercontent.com/u/13603877?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cformosa", "html_url": "https://github.com/cformosa", "followers_url": "https://api.github.com/users/cformosa/followers", "following_url": "https://api.github.com/users/cformosa/following{/other_user}", "gists_url": "https://api.github.com/users/cformosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/cformosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cformosa/subscriptions", "organizations_url": "https://api.github.com/users/cformosa/orgs", "repos_url": "https://api.github.com/users/cformosa/repos", "events_url": "https://api.github.com/users/cformosa/events{/privacy}", "received_events_url": "https://api.github.com/users/cformosa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You're right! Thanks for letting us know." ]
1,569
1,569
1,569
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Roberta Language I am using the model on (English, Chinese....): NA The problem arise when using: * [ ] the official example scripts: The tasks I am working on is: NA ## To Reproduce Steps to reproduce the behavior: In the documentation for the tokenization_roberta.py, it says in the RobertaTokenizer class ``` RoBERTa BPE tokenizer, derived from the GPT-2 tokenizer. Peculiarities: - Byte-level Byte-Pair-Encoding - Requires a space to start the input string => will add a space is there isn't. As a consequence, this tokenizer `encode` and `decode` method will not conserve the absence of a space at the beginning of a string: `tokenizer.decode(tokenizer.encode("Hello")) = " Hello" ``` However, with using the new transformers library, when I run this example I get ``` from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained("roberta-base") tokenizer.decode(tokenizer.encode("Hello")) "Hello" ``` The leading space seems to no longer present as it was in pytorch_transformers, however if (per the source code) if I add the arg add_prefix_space = True, then it outputs with the leading space. Just a tiny fix to hopefully help out anyone else who gets confused by it. Thanks and love the new updates to the library!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1343/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1342/comments
https://api.github.com/repos/huggingface/transformers/issues/1342/events
https://github.com/huggingface/transformers/issues/1342
499,041,973
MDU6SXNzdWU0OTkwNDE5NzM=
1,342
AttributeError: 'RobertaTokenizer' object has no attribute 'add_special_tokens_sentences_pair'
{ "login": "frankfka", "id": 31530056, "node_id": "MDQ6VXNlcjMxNTMwMDU2", "avatar_url": "https://avatars.githubusercontent.com/u/31530056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankfka", "html_url": "https://github.com/frankfka", "followers_url": "https://api.github.com/users/frankfka/followers", "following_url": "https://api.github.com/users/frankfka/following{/other_user}", "gists_url": "https://api.github.com/users/frankfka/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankfka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankfka/subscriptions", "organizations_url": "https://api.github.com/users/frankfka/orgs", "repos_url": "https://api.github.com/users/frankfka/repos", "events_url": "https://api.github.com/users/frankfka/events{/privacy}", "received_events_url": "https://api.github.com/users/frankfka/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @frankfka \r\n\r\nPerhaps you are looking for tokenizer.add_special_tokens_sequence_pair instead of tokenizer.add_special_tokens_sentences_pair?\r\n\r\n```\r\nfrom transformers import RobertaTokenizer\r\ntokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n\r\ntokenizer.add_special_tokens_sequence_pair([31414], [31414])\r\n```\r\nThis returns <br>\r\n```\r\n[0, 31414, 2, 2, 31414, 2]\r\n```", "> Hey @frankfka\r\n> \r\n> Perhaps you are looking for tokenizer.add_special_tokens_sequence_pair instead of tokenizer.add_special_tokens_sentences_pair?\r\n> \r\n> ```\r\n> from transformers import RobertaTokenizer\r\n> tokenizer = RobertaTokenizer.from_pretrained(\"roberta-base\")\r\n> \r\n> tokenizer.add_special_tokens_sequence_pair([31414], [31414])\r\n> ```\r\n> \r\n> This returns\r\n> \r\n> ```\r\n> [0, 31414, 2, 2, 31414, 2]\r\n> ```\r\n\r\nGood catch, thanks! I suppose this was renamed in this release?" ]
1,569
1,569
1,569
NONE
null
With the latest update to `Transformers`, has the function been removed? I still see it in the code, but I run into the error: `AttributeError: 'RobertaTokenizer' object has no attribute 'add_special_tokens_sentences_pair'`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1342/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1341/comments
https://api.github.com/repos/huggingface/transformers/issues/1341/events
https://github.com/huggingface/transformers/issues/1341
499,028,190
MDU6SXNzdWU0OTkwMjgxOTA=
1,341
Examples in Colab
{ "login": "redditTroll", "id": 55472806, "node_id": "MDQ6VXNlcjU1NDcyODA2", "avatar_url": "https://avatars.githubusercontent.com/u/55472806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/redditTroll", "html_url": "https://github.com/redditTroll", "followers_url": "https://api.github.com/users/redditTroll/followers", "following_url": "https://api.github.com/users/redditTroll/following{/other_user}", "gists_url": "https://api.github.com/users/redditTroll/gists{/gist_id}", "starred_url": "https://api.github.com/users/redditTroll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/redditTroll/subscriptions", "organizations_url": "https://api.github.com/users/redditTroll/orgs", "repos_url": "https://api.github.com/users/redditTroll/repos", "events_url": "https://api.github.com/users/redditTroll/events{/privacy}", "received_events_url": "https://api.github.com/users/redditTroll/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Why not simply run the example scripts in colab yourself?", "I'm not exactly sure how to set it up , this is a pretty popular library so I was thinking their might be a blog post out there ", "https://huggingface.co/transformers/notebooks.html", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
Hi all , does anyone have a Colab sample to share ?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1341/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1340/comments
https://api.github.com/repos/huggingface/transformers/issues/1340/events
https://github.com/huggingface/transformers/issues/1340
499,002,913
MDU6SXNzdWU0OTkwMDI5MTM=
1,340
Size mismatch when loading pretrained model
{ "login": "malmaud", "id": 987837, "node_id": "MDQ6VXNlcjk4NzgzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/987837?v=4", "gravatar_id": "", "url": "https://api.github.com/users/malmaud", "html_url": "https://github.com/malmaud", "followers_url": "https://api.github.com/users/malmaud/followers", "following_url": "https://api.github.com/users/malmaud/following{/other_user}", "gists_url": "https://api.github.com/users/malmaud/gists{/gist_id}", "starred_url": "https://api.github.com/users/malmaud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/malmaud/subscriptions", "organizations_url": "https://api.github.com/users/malmaud/orgs", "repos_url": "https://api.github.com/users/malmaud/repos", "events_url": "https://api.github.com/users/malmaud/events{/privacy}", "received_events_url": "https://api.github.com/users/malmaud/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm having the same problem with RoBERTa, it didn't happen until a few hours.", "Hi, thanks for pointing it out, I made a mistake with a config object hosted on our S3. It should be fixed now.", "Running the following snippet:\r\n`# Load the model in fairseq`\r\n`from fairseq.models.roberta import RobertaModel`\r\n`roberta = RobertaModel.from_pretrained('./roberta.large', checkpoint_file='model.pt')`\r\n`roberta.eval() # disable dropout (or leave in train mode to finetune)`\r\n\r\nI got the following error:\r\n`RuntimeError: Error(s) in loading state_dict for RobertaModel:\r\n\tMissing key(s) in state_dict: \"decoder.sentence_encoder.layers.0.self_attn.k_proj.weight\", \"decoder.sentence_encoder.layers.0.self_attn.k_proj.bias\", \"decoder.sentence_encoder.layers.0.self_attn.v_proj.weight\", \"decoder.sentence_encoder.layers.0.self_attn.v_proj.bias\", \"decoder.sentence_encoder.layers.0.self_attn.q_proj.weight\", \"decoder.sentence_encoder.layers.0.self_attn.q_proj.bias\", \"decoder.sentence_encoder.layers.1.self_attn.k_proj.weight\", \"decoder.sentence_encoder.layers.1.self_attn.k_proj.bias\", \"decoder.sentence_encoder.layers.1.self_attn.v_proj.weight\", \"decoder.sentence_encoder.layers.1.self_attn.v_proj.bias\", \"decoder.sentence_encoder.layers.1.self_attn.q_proj.weight\", \"decoder.sentence_encoder.layers.1.self_attn.q_proj.bias\", \"decoder.sentence_encoder.layers.2.self_attn.k_proj.weight\", \"decoder.sentence_encoder.layers.2.self_attn.k_proj.bias\", \"decoder.sentence_encoder.layers.2.self_attn.v_proj.weight\", \"decoder.sentence_encoder.layers.2.self_attn.v_proj.bias\", \"decoder.sentence_encoder.layers.2.self_attn.q_proj.weight\", \"decoder.sentence_encoder.layers.2.self_attn.q_proj.bias\", \"decoder.sentence_encoder.layers.3.self_attn.k_proj.weight\", \"decoder.sentence_encoder.layers.3.self_attn.k_proj.bias\", \"decoder.sentence_encoder.layers.3.self_attn.v_proj.weight\", \"decoder.sentence_encoder.layers.3.self_attn.v_proj.bias\", \"decoder.sentence_encoder.layers.3.self_attn.q_proj.weight\", \"decoder.sentence_encoder.layers.3.self_attn.q_proj.bias\", \"decoder.sentence_encoder....\r\n\tUnexpected key(s) in state_dict: \"decoder.sentence_encoder.layers.0.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.0.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.1.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.1.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.2.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.2.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.3.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.3.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.4.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.4.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.5.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.5.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.6.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.6.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.7.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.7.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.8.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.8.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.9.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.9.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.10.self_attn.in_proj_weight\", 
\"decoder.sentence_encoder.layers.10.self_attn.in_proj_bias\", \"decoder.sentence_encoder.layers.11.self_attn.in_proj_weight\", \"decoder.sentence_encoder.layers.11.self_attn.in_proj_bi...`\r\n\r\nIs it related to the above error? How can we fix it?", "I am seeing the same error as @pbabvey is seeing. I suspect the S3 object is out-of-sync with the code?", "I am running into the same issue as @pbabvey while loading the model. Is there any fix available for this?", "Hi, if you're running the following code:\r\n\r\n```py\r\n# Load the model in fairseq\r\nfrom fairseq.models.roberta import RobertaModel\r\nroberta = RobertaModel.from_pretrained('./roberta.large', checkpoint_file='model.pt')\r\nroberta.eval() # disable dropout (or leave in train mode to finetune)\r\n```\r\nThen you are not using our library, but [fairseq](https://github.com/pytorch/fairseq).\r\n\r\nTo use our library you would do it as follows:\r\n\r\n```py\r\nfrom transformers import RobertaModel\r\n\r\nmodel = RobertaModel.from_pretrained(\"roberta-large\")\r\n```", "There is one argument called `ignore_mismatched_sizes` in `from_pretrained` method. ISSUE: [#13187](https://github.com/huggingface/transformers/issues/13187)" ]
1,569
1,629
1,569
NONE
null
I'm seeing this: ``` In [1]: import pytorch_transformers In [2]: m=pytorch_transformers.AutoModel.from_pretrained('roberta-base') --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-2-7a33f5ecb345> in <module> ----> 1 m=pytorch_transformers.AutoModel.from_pretrained('roberta-base') /opt/anaconda3/lib/python3.7/site-packages/pytorch_transformers/modeling_auto.py in from_pretrained(cls, pretrained _model_name_or_path, *model_args, **kwargs) 240 return DistilBertModel.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs) 241 elif 'roberta' in pretrained_model_name_or_path: --> 242 return RobertaModel.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs) 243 elif 'bert' in pretrained_model_name_or_path: 244 return BertModel.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs) /opt/anaconda3/lib/python3.7/site-packages/pytorch_transformers/modeling_utils.py in from_pretrained(cls, pretraine d_model_name_or_path, *model_args, **kwargs) 592 if len(error_msgs) > 0: 593 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( --> 594 model.__class__.__name__, "\n\t".join(error_msgs))) 595 596 if hasattr(model, 'tie_weights'): RuntimeError: Error(s) in loading state_dict for RobertaModel: size mismatch for roberta.embeddings.position_embeddings.weight: copying a param with shape torch.Size([514 , 768]) from checkpoint, the shape in current model is torch.Size([512, 768]). ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1340/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1339/comments
https://api.github.com/repos/huggingface/transformers/issues/1339/events
https://github.com/huggingface/transformers/issues/1339
498,958,266
MDU6SXNzdWU0OTg5NTgyNjY=
1,339
Why is the vocabulary of token_type_ids and input_ids shared?
{ "login": "ZeweiChu", "id": 2027005, "node_id": "MDQ6VXNlcjIwMjcwMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/2027005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZeweiChu", "html_url": "https://github.com/ZeweiChu", "followers_url": "https://api.github.com/users/ZeweiChu/followers", "following_url": "https://api.github.com/users/ZeweiChu/following{/other_user}", "gists_url": "https://api.github.com/users/ZeweiChu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZeweiChu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZeweiChu/subscriptions", "organizations_url": "https://api.github.com/users/ZeweiChu/orgs", "repos_url": "https://api.github.com/users/ZeweiChu/repos", "events_url": "https://api.github.com/users/ZeweiChu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZeweiChu/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## ❓ Questions & Help <!-- A clear and concise description of the question. --> https://github.com/huggingface/transformers/blob/17ea43cf985829634bd86b36b44e5410c6f83e36/transformers/modeling_gpt2.py#L421 In GPT2Model, forward method, it seems the vocabulary of token_type_ids and input_ids is shared. I checked the vocabulary table, 0 and 1 corresponds to the exclamation sign and the quote sign. What is the reason of sharing the vocabulary? Is it on purpose?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1339/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1339/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1338/comments
https://api.github.com/repos/huggingface/transformers/issues/1338/events
https://github.com/huggingface/transformers/issues/1338
498,923,282
MDU6SXNzdWU0OTg5MjMyODI=
1,338
Extending `examples/` to TensorFlow
{ "login": "HanGuo97", "id": 18187806, "node_id": "MDQ6VXNlcjE4MTg3ODA2", "avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanGuo97", "html_url": "https://github.com/HanGuo97", "followers_url": "https://api.github.com/users/HanGuo97/followers", "following_url": "https://api.github.com/users/HanGuo97/following{/other_user}", "gists_url": "https://api.github.com/users/HanGuo97/gists{/gist_id}", "starred_url": "https://api.github.com/users/HanGuo97/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanGuo97/subscriptions", "organizations_url": "https://api.github.com/users/HanGuo97/orgs", "repos_url": "https://api.github.com/users/HanGuo97/repos", "events_url": "https://api.github.com/users/HanGuo97/events{/privacy}", "received_events_url": "https://api.github.com/users/HanGuo97/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Indeed, there is currently one example for tensorflow, `run_tf_glue` and it doesn't have command-line arguments. We'll update this one to make it as flexible as the PyTorch one and add other examples when we have the bandwidth.\r\n\r\nDo you want to help in this project? Happy to welcome a PR on this topic (for instance to add command line argument similar to `run_glue` in `run_tf_glue`).", "Thanks for the response. I'm not an expert in this field but I'm happy to help and review codes.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## 🚀 Feature Hi, thanks for putting in the tremendous effort for TensorFlow-PyTorch interoperability! Would those scripts in the `examples/` be soon extended to Tensorflow as well? ## Motivation I (and presumably many others) rely on the examples to quickly experiment with models and ideas. Extending the examples to Tensorflow would be hugely helpful, and should help the codebase reach a broader set of audiences. ## Additional context <!-- Add any other context or screenshots about the feature request here. -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1338/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1337/comments
https://api.github.com/repos/huggingface/transformers/issues/1337/events
https://github.com/huggingface/transformers/pull/1337
498,902,034
MDExOlB1bGxSZXF1ZXN0MzIxNzMzODI2
1,337
faster dataset building
{ "login": "mgrankin", "id": 3540879, "node_id": "MDQ6VXNlcjM1NDA4Nzk=", "avatar_url": "https://avatars.githubusercontent.com/u/3540879?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mgrankin", "html_url": "https://github.com/mgrankin", "followers_url": "https://api.github.com/users/mgrankin/followers", "following_url": "https://api.github.com/users/mgrankin/following{/other_user}", "gists_url": "https://api.github.com/users/mgrankin/gists{/gist_id}", "starred_url": "https://api.github.com/users/mgrankin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mgrankin/subscriptions", "organizations_url": "https://api.github.com/users/mgrankin/orgs", "repos_url": "https://api.github.com/users/mgrankin/repos", "events_url": "https://api.github.com/users/mgrankin/events{/privacy}", "received_events_url": "https://api.github.com/users/mgrankin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1337?src=pr&el=h1) Report\n> Merging [#1337](https://codecov.io/gh/huggingface/transformers/pull/1337?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a3e0dbba9512866064c20e9bc99c62725f6c36fb?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1337/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1337?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1337 +/- ##\n=======================================\n Coverage 84.73% 84.73% \n=======================================\n Files 84 84 \n Lines 12573 12573 \n=======================================\n Hits 10654 10654 \n Misses 1919 1919\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1337?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1337?src=pr&el=footer). Last update [a3e0dbb...f71a457](https://codecov.io/gh/huggingface/transformers/pull/1337?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "I was about to make this PR myself and then saw this!", "Thanks a lot @mgrankin (was meaning to fix this as well haha)!" ]
1,569
1,569
1,569
CONTRIBUTOR
null
Now it takes around 1 minute to process 20mb and it takes forever for 200mb dataset (it's non-linear). This is a fix to make it linear.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1337/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1337/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1337", "html_url": "https://github.com/huggingface/transformers/pull/1337", "diff_url": "https://github.com/huggingface/transformers/pull/1337.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1337.patch", "merged_at": 1569573313000 }
https://api.github.com/repos/huggingface/transformers/issues/1336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1336/comments
https://api.github.com/repos/huggingface/transformers/issues/1336/events
https://github.com/huggingface/transformers/pull/1336
498,834,770
MDExOlB1bGxSZXF1ZXN0MzIxNjc4ODM0
1,336
Completed the documentation with TF2
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,569
1,569
1,569
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1336/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1336", "html_url": "https://github.com/huggingface/transformers/pull/1336", "diff_url": "https://github.com/huggingface/transformers/pull/1336.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1336.patch", "merged_at": 1569498341000 }
https://api.github.com/repos/huggingface/transformers/issues/1335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1335/comments
https://api.github.com/repos/huggingface/transformers/issues/1335/events
https://github.com/huggingface/transformers/issues/1335
498,514,263
MDU6SXNzdWU0OTg1MTQyNjM=
1,335
Optimize XLNet model to generate embeddings of long documents
{ "login": "FannySB", "id": 23128843, "node_id": "MDQ6VXNlcjIzMTI4ODQz", "avatar_url": "https://avatars.githubusercontent.com/u/23128843?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FannySB", "html_url": "https://github.com/FannySB", "followers_url": "https://api.github.com/users/FannySB/followers", "following_url": "https://api.github.com/users/FannySB/following{/other_user}", "gists_url": "https://api.github.com/users/FannySB/gists{/gist_id}", "starred_url": "https://api.github.com/users/FannySB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FannySB/subscriptions", "organizations_url": "https://api.github.com/users/FannySB/orgs", "repos_url": "https://api.github.com/users/FannySB/repos", "events_url": "https://api.github.com/users/FannySB/events{/privacy}", "received_events_url": "https://api.github.com/users/FannySB/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,569
1,569
1,569
NONE
null
We experimented with generating embeddings with TransformerXL and XLNet. Our documents have 5,000 to 80,000 characters each. We got an average of 0.8 seconds per document with TransformerXL and 1.3 seconds per document with XLNet. To optimize XLNet we found that using only 200 tokens per call is optimal: roughly 350 tokens/second with xlnet-base-cased and 100 tokens/second with the large model. Any idea whether XLNet can be optimized further? tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased") model = XLNetModel.from_pretrained("xlnet-base-cased") outputs = model(text_tokens, mems=mems)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1335/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1334/comments
https://api.github.com/repos/huggingface/transformers/issues/1334/events
https://github.com/huggingface/transformers/issues/1334
498,283,195
MDU6SXNzdWU0OTgyODMxOTU=
1,334
Typo in modeling_bert file
{ "login": "ishan00", "id": 26858696, "node_id": "MDQ6VXNlcjI2ODU4Njk2", "avatar_url": "https://avatars.githubusercontent.com/u/26858696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ishan00", "html_url": "https://github.com/ishan00", "followers_url": "https://api.github.com/users/ishan00/followers", "following_url": "https://api.github.com/users/ishan00/following{/other_user}", "gists_url": "https://api.github.com/users/ishan00/gists{/gist_id}", "starred_url": "https://api.github.com/users/ishan00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ishan00/subscriptions", "organizations_url": "https://api.github.com/users/ishan00/orgs", "repos_url": "https://api.github.com/users/ishan00/repos", "events_url": "https://api.github.com/users/ishan00/events{/privacy}", "received_events_url": "https://api.github.com/users/ishan00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! Indeed it is inconsistent, but it doesn't really change anything as the superclass `PreTrainedModel` assigns the config as one of its attributes: `self.config = config`. Referencing `config` or `self.config` therefore references the same object!" ]
1,569
1,569
1,569
NONE
null
I was looking at the code of BertModel adapted for different tasks here https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py I noticed a small typo in line 882 `self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)` I think it should be either `self.num_labels` or `config.num_labels` in the second argument The complete function ``` def __init__(self, config): super(BertForSequenceClassification, self).__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, self.config.num_labels) self.init_weights() ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1334/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1333/comments
https://api.github.com/repos/huggingface/transformers/issues/1333/events
https://github.com/huggingface/transformers/pull/1333
498,269,342
MDExOlB1bGxSZXF1ZXN0MzIxMjMyODMx
1,333
[FIX] fix run_generation.py to work with batch_size > 1
{ "login": "mataney", "id": 11559198, "node_id": "MDQ6VXNlcjExNTU5MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/11559198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mataney", "html_url": "https://github.com/mataney", "followers_url": "https://api.github.com/users/mataney/followers", "following_url": "https://api.github.com/users/mataney/following{/other_user}", "gists_url": "https://api.github.com/users/mataney/gists{/gist_id}", "starred_url": "https://api.github.com/users/mataney/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mataney/subscriptions", "organizations_url": "https://api.github.com/users/mataney/orgs", "repos_url": "https://api.github.com/users/mataney/repos", "events_url": "https://api.github.com/users/mataney/events{/privacy}", "received_events_url": "https://api.github.com/users/mataney/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@thomwolf\r\nI created this PR to deal with the `top p` generations.\r\nShould I have opened an issue first to check if it is needed?\r\nShould I deal with the conflicts?\r\n\r\nCheers.", "Hi @mataney, thanks.\r\n\r\nThis was rebased, fixed by https://github.com/huggingface/transformers/commit/f96ce1c24151349251880c95e9a9fb144b62367c, and merged to master by 2a5663c28043dc6d2746e69f0fb89e0d5872c63d.\r\n\r\nCheck that everything looks good on your side if you can." ]
1,569
1,572
1,572
CONTRIBUTOR
null
I extended the `top_k_top_p_filtering` function, and by that the `run_generation.py` script, to work with num_samples > 1. This can be extended by scattering the sorted tensors. First pull request in this repository, so let me know if I need to do anything else :) Cheers, Matan.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1333/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1333/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1333", "html_url": "https://github.com/huggingface/transformers/pull/1333", "diff_url": "https://github.com/huggingface/transformers/pull/1333.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1333.patch", "merged_at": 1572546599000 }
https://api.github.com/repos/huggingface/transformers/issues/1332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1332/comments
https://api.github.com/repos/huggingface/transformers/issues/1332/events
https://github.com/huggingface/transformers/issues/1332
498,175,777
MDU6SXNzdWU0OTgxNzU3Nzc=
1,332
pytorch-transformers returns output of 13 layers?
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am looking at this too and I believe (might be wrong) that the embedding layer sits in the last position. So I guess you should do [-2:-5] ", "> I am looking at this too and I believe (might be wrong) that the embedding layer sits in the last position. So I guess you should do [-2:-5]\r\n\r\nHm, I don't think so. The embedding state is passed to the forward function, and that state is used to initialize the `all_hidden_states` variable. Then you iterate over all layers and append to the tuple sequentially.\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/modeling_bert.py#L337-L359", "Hi Bram,\r\n\r\nPlease read the details of `BertModel`'s outputs in the docstring or the doc here: https://huggingface.co/pytorch-transformers/model_doc/bert.html#pytorch_transformers.BertModel\r\n\r\nThe first element of the output tuple of Bert is always the last hidden-state and the full list of hidden-states is the last element of the output tuple in your case.\r\n\r\nThese lines:\r\n```\r\nout, _ = self.bert_model(input_ids=bert_ids, attention_mask=bert_mask)\r\nhidden_states = out[2]\r\n```\r\nshould be changed in:\r\n```\r\nmodel_outputs = self.bert_model(input_ids=bert_ids, attention_mask=bert_mask)\r\nhidden_states = model_outputs[-1]\r\n```", "> Hi Bram,\r\n> \r\n> Please read the details of `BertModel`'s outputs in the docstring or the doc here: https://huggingface.co/pytorch-transformers/model_doc/bert.html#pytorch_transformers.BertModel\r\n> \r\n> The first element of the output tuple of Bert is always the last hidden-state and the full list of hidden-states is the last element of the output tuple in your case.\r\n> \r\n> These lines:\r\n> \r\n> ```\r\n> out, _ = self.bert_model(input_ids=bert_ids, attention_mask=bert_mask)\r\n> hidden_states = out[2]\r\n> ```\r\n> \r\n> should be changed in:\r\n> \r\n> ```\r\n> model_outputs = self.bert_model(input_ids=bert_ids, attention_mask=bert_mask)\r\n> hidden_states = model_outputs[-1]\r\n> ```\r\n\r\nHi Thomas, thank you for your time\r\n\r\nApparently a mistake crept into my comment on GitHub. In my code, I do have the correct version, i.e.\r\n\r\n```python\r\nout = self.bert_model(input_ids=bert_ids, attention_mask=bert_mask)\r\nhidden_states = out[2]\r\n```\r\n\r\nThe question that I have is, when you then print the length of those hidden states, you get different numbers.\r\n\r\n```python\r\nprint(len(hidden_states))\r\n# 13 for pytorch_transformers, 12 for pytorch_pretrained_bert\r\n```\r\n\r\nGoing through the source code, it seems that the input hidden state (final hidden state of the embeddings) is included when using `pytorch_transformers`, but not for `pytorch_pretrained_bert`.\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/modeling_bert.py#L337-L352\r\n\r\nI couldn't find this documented anywhere, but I am curious to see the reasoning behind this - since the embedding state is _not_ an encoder state, so it might not be what one expects to get back from the model. 
On the other hand, it does make it easy for users to get the embeddings.", "Hi Bram,\r\nIt's written in the link to the doc that I've sent you above and also in the docstring of the model:\r\n![image](https://user-images.githubusercontent.com/7353373/65694609-6ebaa800-e076-11e9-88f4-7b149e893584.png)\r\nI'll see if I can find a way to make it more visible.\r\n\r\nThere are a few reasons we did that, one is this great paper by Tenney et al (http://arxiv.org/abs/1905.05950) which use the output of the embeddings as well at the hidden states to study Bert's performances. Another is to have easy access to the embeddings as you mention.", "> # Add last layer \r\n> if self.output_hidden_states: \r\n> all_hidden_states = all_hidden_states + (hidden_states,)\r\n\r\nhttps://github.com/huggingface/transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/modeling_bert.py#L350-L352\r\n\r\nBut on line 350-352, it adds the \"hidden states\" (last layer of embedding) to the \"all_hidden_states\", so the last item is the embedding output. ", "> > # Add last layer\r\n> > ```\r\n> > if self.output_hidden_states: \r\n> > all_hidden_states = all_hidden_states + (hidden_states,)\r\n> > ```\r\n> \r\n> https://github.com/huggingface/transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/modeling_bert.py#L350-L352\r\n> \r\n> But on line 350-352, it adds the \"hidden states\" (last layer of embedding) to the \"all_hidden_states\", so the last item is the embedding output.\r\n\r\nNo, by that time the initial `hidden_states` variable has already been reassigned in the for loop. So at each step hidden_states is:\r\n\r\n enter function: it is the embeddings\r\n on each iteration in the loop: `hidden_states = layer_outputs[0]`\r\n\r\nPerhaps the not-so-intuitive part is that the `hidden_states` are appended to `all_hidden_states` as the first thing in the loop. That means that in the at the end of the first iteration; `all_hidden_states` consists *only* of the embeddings, and at the end of the last iteration, it does not contain the last hidden state yet (because appending happens *before* getting the layer_outputs). Therefore, the hidden states of the last layer (iteration) have to be added manually still, on the lines that you mentioned.", "> > > # Add last layer\r\n> > > ```\r\n> > > if self.output_hidden_states: \r\n> > > all_hidden_states = all_hidden_states + (hidden_states,)\r\n> > > ```\r\n> > \r\n> > \r\n> > https://github.com/huggingface/transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/modeling_bert.py#L350-L352\r\n> > \r\n> > But on line 350-352, it adds the \"hidden states\" (last layer of embedding) to the \"all_hidden_states\", so the last item is the embedding output.\r\n> \r\n> No, by that time the initial `hidden_states` variable has already been reassigned in the for loop. So at each step hidden_states is:\r\n> \r\n> ```\r\n> enter function: it is the embeddings\r\n> on each iteration in the loop: `hidden_states = layer_outputs[0]`\r\n> ```\r\n> \r\n> Perhaps the not-so-intuitive part is that the `hidden_states` are appended to `all_hidden_states` as the first thing in the loop. That means that in the at the end of the first iteration; `all_hidden_states` consists _only_ of the embeddings, and at the end of the last iteration, it does not contain the last hidden state yet (because appending happens _before_ getting the layer_outputs). 
Therefore, the hidden states of the last layer (iteration) have to be added manually still, on the lines that you mentioned.\r\n\r\nYou are right, thanks for the clarification!", "@thomwolf Thanks for the clarification. I was looking in all the wrong places, it appears. Particularly, I had expected this in the README's [migration part](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers). If you want I can do a small doc pull request for that.\r\n\r\nRe-opened. Will close after doc change if requested." ]
1,569
1,569
1,569
COLLABORATOR
null
## 📚 Migration <!-- Important information --> Model I am using (Bert, XLNet....): BertModel Language I am using the model on (English, Chinese....): English The problem arise when using: * [x] my own modified scripts: (give details) The tasks I am working on is: * [x] my own task or dataset: (give details) Details of the issue: I am using pytorch-transformers for the rather unconventional task of regression (one output). In my research I use BERT and I'm planning to try out the other transformers as well. When I started, I got good results with `pytorch-pretrained-bert`. However, running the same code with `pytorch-transformers` gives me results that are a lot worse. In the original code, I use the output of the model, and concatenate the last four layers - as was proposed in the BERT paper. The architecture that I used looks like this: ```python from pytorch_pretrained_bert.modeling import BertModel import torch from torch import nn class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.bert_model = BertModel.from_pretrained('bert-base-uncased') self.pre_classifier = nn.Linear(3072, 512) self.dropout = nn.Dropout(0.2) self.classifier = nn.Linear(512, 1) def forward(self, bert_ids, bert_mask): all_bert_layers, _ = self.bert_model(bert_ids, attention_mask=bert_mask) print('hidden_states', len(all_bert_layers)) # concat last four layers out = torch.cat(tuple([all_bert_layers[i] for i in [-1, -2, -3, -4]]), dim=-1) print('output', out.size()) # Pooling by also setting masked items to zero bert_mask = bert_mask.unsqueeze(2) # Multiply output with mask to only retain non-paddding tokens out = torch.mul(out, bert_mask) print('output', out.size()) # First item ['CLS'] is sentence representation out = out[:, 0, :] print('pooled_output', out.size()) out = self.pre_classifier(out) print('pre_classifier', out.size()) out = self.dropout(out) print('dropout', out.size()) out = self.classifier(out) print('classifier', out.size()) return out ``` When porting this to `pytorch-transformers`, the main thing was that now we get a tuple back from the model *and* we have to explicitly ask to get all hidden states back. As such, the converted code looks like this: ```python from pytorch_transformers import BertModel import torch from torch import nn class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.bert_model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True) self.pre_classifier = nn.Linear(3072, 512) self.dropout = nn.Dropout(0.2) self.classifier = nn.Linear(512, 1) def forward(self, bert_ids, bert_mask): out, _ = self.bert_model(input_ids=bert_ids, attention_mask=bert_mask) hidden_states = out[2] print('hidden_states', len(hidden_states)) out = torch.cat(tuple([hidden_states[i] for i in [-1, -2, -3, -4]]), dim=-1) print('output', out.size()) # Pooling by also setting masked items to zero bert_mask = bert_mask.unsqueeze(2) # Multiply output with mask to only retain non-paddding tokens out = torch.mul(out, bert_mask) print('output', out.size()) # First item ['CLS'] is sentence representation out = out[:, 0, :] print('pooled_output', out.size()) out = self.pre_classifier(out) print('pre_classifier', out.size()) out = self.dropout(out) print('dropout', out.size()) out = self.classifier(out) print('classifier', out.size()) return out ``` As I said before, this leads to *very* different results. 
Seeding cannot be the issue, since I set all seeds manually in both cases, like this: ```python def set_seed(): torch.manual_seed(3) torch.cuda.manual_seed_all(3) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False np.random.seed(3) random.seed(3) os.environ['PYTHONHASHSEED'] = str(3) ``` I have added the print statements as a sort of debugging and I quickly found that there is a fundamental difference between the two architectures. The *hidden_states* print statement will yield `12` for pytorch-pretrained-bert and `13` for `pytorch-transformers`! I am not sure how that relates, but I would assume that this could be the starting point to start looking. I have tried comparing the created models, but in both cases the encoder consists of 12 layers, so I am not sure why `pytorch-transformers` returns 13? What's the extra one? Going through the source code, it seems that the first hidden_state (= last hidden_state from the embeddings) is included. Is that true? https://github.com/huggingface/pytorch-transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/modeling_bert.py#L340-L352 Even so, since the embeddings would be the first item in all_hidden_states, the last four layers should be the same still. Therefore, I am not sure why there is such a big difference in the results of the above two. If you spot any faults, please advise. ## Environment * OS: Win 10 * Python version: 3.7 * PyTorch version: 1.2 * PyTorch Transformers version (or branch): * Using GPU ? Yes, CUDA 10 * Distributed of parallel setup ? No ## Checklist - [x] I have read the migration guide in the readme.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1332/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1331/comments
https://api.github.com/repos/huggingface/transformers/issues/1331/events
https://github.com/huggingface/transformers/issues/1331
498,038,255
MDU6SXNzdWU0OTgwMzgyNTU=
1,331
Is the UI code for https://transformer.huggingface.co open source?
{ "login": "alvations", "id": 1050316, "node_id": "MDQ6VXNlcjEwNTAzMTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvations", "html_url": "https://github.com/alvations", "followers_url": "https://api.github.com/users/alvations/followers", "following_url": "https://api.github.com/users/alvations/following{/other_user}", "gists_url": "https://api.github.com/users/alvations/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvations/subscriptions", "organizations_url": "https://api.github.com/users/alvations/orgs", "repos_url": "https://api.github.com/users/alvations/repos", "events_url": "https://api.github.com/users/alvations/events{/privacy}", "received_events_url": "https://api.github.com/users/alvations/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[ { "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false } ]
[ "No we haven't open sourced the UI code.", "Are there plans to open source the UI or there's no plan for it?", "No short term plans to do it!" ]
1,569
1,569
1,569
NONE
null
## ❓ Questions & Help Is the UI code for https://transformer.huggingface.co open source?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1331/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1330/comments
https://api.github.com/repos/huggingface/transformers/issues/1330/events
https://github.com/huggingface/transformers/issues/1330
498,023,812
MDU6SXNzdWU0OTgwMjM4MTI=
1,330
Loading errors for BERT base on GPU with PyTorch 0.4.1
{ "login": "nguyenvo09", "id": 1012428, "node_id": "MDQ6VXNlcjEwMTI0Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/1012428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nguyenvo09", "html_url": "https://github.com/nguyenvo09", "followers_url": "https://api.github.com/users/nguyenvo09/followers", "following_url": "https://api.github.com/users/nguyenvo09/following{/other_user}", "gists_url": "https://api.github.com/users/nguyenvo09/gists{/gist_id}", "starred_url": "https://api.github.com/users/nguyenvo09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nguyenvo09/subscriptions", "organizations_url": "https://api.github.com/users/nguyenvo09/orgs", "repos_url": "https://api.github.com/users/nguyenvo09/repos", "events_url": "https://api.github.com/users/nguyenvo09/events{/privacy}", "received_events_url": "https://api.github.com/users/nguyenvo09/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,569
1,569
1,569
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1330/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1329/comments
https://api.github.com/repos/huggingface/transformers/issues/1329/events
https://github.com/huggingface/transformers/pull/1329
498,007,097
MDExOlB1bGxSZXF1ZXN0MzIxMDI4ODI4
1,329
GLUE Script for Tensorflow
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329?src=pr&el=h1) Report\n> Merging [#1329](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329?src=pr&el=desc) into [tf2](https://codecov.io/gh/huggingface/pytorch-transformers/commit/e8e956dbb2a6df696d79e2f4dc154849a8e06611?src=pr&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## tf2 #1329 +/- ##\n==========================================\n- Coverage 86.01% 85.95% -0.06% \n==========================================\n Files 79 79 \n Lines 12041 12028 -13 \n==========================================\n- Hits 10357 10339 -18 \n- Misses 1684 1689 +5\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdXRpbHMucHk=) | `91.44% <0%> (-1.48%)` | :arrow_down: |\n| [pytorch\\_transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfY29tbW9uX3Rlc3QucHk=) | `73.61% <0%> (-0.41%)` | :arrow_down: |\n| [...orch\\_transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfdGZfY29tbW9uX3Rlc3QucHk=) | `94.73% <0%> (-0.27%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329?src=pr&el=footer). Last update [e8e956d...cc73950](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1329?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,569
1,651
1,569
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1329/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1329", "html_url": "https://github.com/huggingface/transformers/pull/1329", "diff_url": "https://github.com/huggingface/transformers/pull/1329.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1329.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/1328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1328/comments
https://api.github.com/repos/huggingface/transformers/issues/1328/events
https://github.com/huggingface/transformers/issues/1328
497,908,347
MDU6SXNzdWU0OTc5MDgzNDc=
1,328
Sequence Classification pooled output vs last hidden state
{ "login": "cformosa", "id": 13603877, "node_id": "MDQ6VXNlcjEzNjAzODc3", "avatar_url": "https://avatars.githubusercontent.com/u/13603877?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cformosa", "html_url": "https://github.com/cformosa", "followers_url": "https://api.github.com/users/cformosa/followers", "following_url": "https://api.github.com/users/cformosa/following{/other_user}", "gists_url": "https://api.github.com/users/cformosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/cformosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cformosa/subscriptions", "organizations_url": "https://api.github.com/users/cformosa/orgs", "repos_url": "https://api.github.com/users/cformosa/repos", "events_url": "https://api.github.com/users/cformosa/events{/privacy}", "received_events_url": "https://api.github.com/users/cformosa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Both would probably work, but I agree that streamlining is a good idea. In their paper, BERT gets the best results by concatenating the last four layers, so what I always use is something like this (from the top of my head):\r\n\r\n```python\r\noutputs = self.bert(input_ids,\r\n attention_mask=attention_mask,\r\n token_type_ids=token_type_ids,\r\n position_ids=position_ids, \r\n head_mask=head_mask)\r\n\r\nhidden_states = outputs[1]\r\npooled_output = torch.cat(tuple([hidden_states[i] for i in [-4, -3, -2, -1]]), dim=-1)\r\npooled_output = pooled_output[:, 0, :]\r\npooled_output = self.dropout(pooled_output)\r\n# classifier of course has to be 4 * hidden_dim, because we concat 4 layers\r\nlogits = self.classifier(pooled_output)\r\n```\r\n\r\nI might put a pre_classifier and an activation function before the drop out depending on the case.", "This is very helpful. Thanks @BramVanroy for the ideas", "@BramVanroy Thanks for the solution, but I think you meant writing `hidden_states = outputs[2]` instead of `pooled_output = outputs[1]`, right?", "@mkaze I think you are talking about `TFBertModel` which has `hidden_states` at index `2`, but OP is talking about `TFBertForSequenceClassification` which has `hidden_states` at index `1`, so we need to use index `1`. @BramVanroy is this correct?", "@BramVanroy also, is it useful to use `outputs[1]` as in your code example with the `RobertaForSequenceClassification` and `TFDistilBertForSequenceClassification` models?", "@mkaze @don-prog My variables were badly named, indeed. In BertForSequenceClassification, the hidden_states are at index 1 (if you provided the option to return all hidden_states) and if you are not using labels. At index 2 if you did pass the labels.\r\n\r\nI do not know the position of hidden states for the other models by heart. Just read through the documentation and look at the `forward` method. There you can see under \"returns\" what is returned at which index.", "@BramVanroy @don-prog The weird thing is that the documentation claims that the `pooler_output` of BERT model is not a good semantic representation of the input, one time in \"Returns\" section of `forward` method of `BertModel` ([here](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel)):\r\n\r\n![pooler](https://user-images.githubusercontent.com/8656825/89106676-fb758580-d440-11ea-8485-00452ca34e15.png)\r\n\r\nand another one at the third tip in \"Tips\" section of \"Overview\" ([here](https://huggingface.co/transformers/model_doc/bert.html)):\r\n\r\n![poooler-tips](https://user-images.githubusercontent.com/8656825/89106704-5a3aff00-d441-11ea-9769-863950346057.png)\r\n\r\nHowever, despite these two tips, the pooler output is used in implementation of `BertForSequenceClassification` ([here](https://github.com/huggingface/transformers/blob/a39dfe4fb122c11be98a563fb8ca43b322e01036/src/transformers/modeling_bert.py#L1284-L1287)).\r\n\r\nInterestingly, when I used their suggestion, i.e. using the average of hidden-states for sequence classification instead of pooler output, I got a worse result. I asked about this a few months ago in issue #4048, but unfortunately no one provided an explanation.", "@BramVanroy Many thanks for the quick reply! 
So, this is my usage of the last `TFDistilBertModel` 4 hidden states in the TensorFlow:\r\n```\r\ndef create_model():\r\n input_ids = tf.keras.Input(shape=(100,), dtype='int32')\r\n\r\n transformer = TFDistilBertModel.from_pretrained('distilbert-base-uncased', output_hidden_states=True)(input_ids)\r\n \r\n print(len(transformer)) #2\r\n print(len(transformer[1])) #7\r\n\r\n hidden_states = transformer[1]\r\n\r\n merged = tf.keras.layers.concatenate(tuple([hidden_states[i] for i in [-4, -3, -2, -1]]))\r\n \r\n output = tf.keras.layers.Dense(32,activation='relu')(merged)\r\n output = tf.keras.layers.Dropout(0.1)(output)\r\n\r\n output = tf.keras.layers.Dense(1, activation='sigmoid')(output)\r\n model = tf.keras.models.Model(inputs = input_ids, outputs = output)\r\n model.compile(tf.keras.optimizers.Adam(lr=6e-6), loss='binary_crossentropy', metrics=['accuracy'])\r\n return model\r\n```\r\nIs this this correct representation of your PyTorch code in the TensorFlow(except for the difference in additional layers)?", "@mkaze Yes, this is always something that comes up for discussion. I think the only correct answer here is (as so often): try it out and see what works best in your scemario. Results will differ between different projects, depending on the task, training steps, dataset, and so on. There is no one right answer. You may even decide to use maxpooling rather than average pooling. There are loads of things to try if you really want to. But generally speaking, you should get good results with either CLS or averaging over tokens.\r\n\r\n@don-prog Unfortunately I am not very familiar with TF so I fear I cannot help you with that. Try it out, and keep track of the sizes of the tensors that are passed through (or just have a look at the graph of your model). If those are correct, then I think it's fine. You can ask your question on the [forums,](https://discuss.huggingface.co/) maybe someone can help you out there.", "I think the classification for robertaforsequenceclassification is the RobertaClassificationHead, which takes the CLS embedding for classification\r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_roberta.py#L957\r\n\r\nhttps://github.com/huggingface/transformers/blob/13c185771847370d695b8eee3cbf12f4edc2111c/src/transformers/modeling_roberta.py#L1205-L1221\r\n\r\nI also found that AlBERT takes pooler result as bert, but distillbert has something different\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L607-L610\r\n\r\njust wondering if huggingface plans to consolidate this part for the sequence classification?", "@DanqingZ Probably not. Most often these implementation are specific to how the original paper implemented them for downstream tasks. In that sense, it is normal that they differ. If you want to create your own one, as I did before, you can simply create a custom SequenceClassificationHead that works with any `PretrainedModel`'s output. It is quite simple, so I don't think the library should provide this.", "@BramVanroy yeah, I can do that.\r\nBut imagine a scenario. If I want to inherit the AutoModelForSequenceClassification, and add my own components to different types of model(bert, roberta, distillbert). If huggingface could make classifier have the same meaning and usage, it will be easier for other people to make downstream changes for multiple models at the same time, like adding label attention layer etc. 
The classifier is a bit misleading now, like roberta has pooler within the classifier while bert has pooled output.\r\nYeah I agree that if one has enough time to dig into details then it should be easy for them to make changes, but it is just less intuitive for people who just start using huggingface transformers.", "@DanqingZ I understand what you mean, but these implementations are not necessarily chosen by HuggingFace. Those are the original implementations in the paper by the authors. It is therefore not possible that they are all the same and they will not be changed.\r\n\r\nIf you want to add the functionality that you want, I would recommend writing your own extension to transformers. The process will teach you a lot about how PyTorch models work in general and how this library functions specifically. Yes, it will take a while, but it is the only solution.", "> @BramVanroy Many thanks for the quick reply! So, this is my usage of the last `TFDistilBertModel` 4 hidden states in the TensorFlow:\r\n> \r\n> ```\r\n> def create_model():\r\n> input_ids = tf.keras.Input(shape=(100,), dtype='int32')\r\n> \r\n> transformer = TFDistilBertModel.from_pretrained('distilbert-base-uncased', output_hidden_states=True)(input_ids)\r\n> \r\n> print(len(transformer)) #2\r\n> print(len(transformer[1])) #7\r\n> \r\n> hidden_states = transformer[1]\r\n> \r\n> merged = tf.keras.layers.concatenate(tuple([hidden_states[i] for i in [-4, -3, -2, -1]]))\r\n> \r\n> output = tf.keras.layers.Dense(32,activation='relu')(merged)\r\n> output = tf.keras.layers.Dropout(0.1)(output)\r\n> \r\n> output = tf.keras.layers.Dense(1, activation='sigmoid')(output)\r\n> model = tf.keras.models.Model(inputs = input_ids, outputs = output)\r\n> model.compile(tf.keras.optimizers.Adam(lr=6e-6), loss='binary_crossentropy', metrics=['accuracy'])\r\n> return model\r\n> ```\r\n> \r\n> Is this this correct representation of your PyTorch code in the TensorFlow(except for the difference in additional layers)?\r\n\r\nit throwing some errors \r\n", "Hi, @mkaze, regarding your question: \r\n \r\n> @BramVanroy @don-prog The weird thing is that the documentation claims that the `pooler_output` of BERT model is not a good semantic representation of the input, one time in \"Returns\" section of `forward` method of `BertModel` ([here](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel)):\r\n> However, despite these two tips, the pooler output is used in implementation of `BertForSequenceClassification` ([here](https://github.com/huggingface/transformers/blob/a39dfe4fb122c11be98a563fb8ca43b322e01036/src/transformers/modeling_bert.py#L1284-L1287)).\r\n> Interestingly, when I used their suggestion, i.e. using the average of hidden-states for sequence classification instead of pooler output, I got a worse result. I asked about this a few months ago in issue #4048, but unfortunately no one provided an explanation.\r\n\r\nThe BERT paper explicitly says the following:\r\n\r\n_The vector C is not a meaningful sentence representation **without fine-tuning**, since it was trained with NSP._ \r\n\r\nThat means, it only says the CLS output token (pooler output) is not useful on its own from the pre-trained model (used without funetuning), but if you fine tune the model, it is useful for classification purposes.", "> > @BramVanroy Many thanks for the quick reply! 
So, this is my usage of the last `TFDistilBertModel` 4 hidden states in the TensorFlow:\r\n> > ```\r\n> > def create_model():\r\n> > input_ids = tf.keras.Input(shape=(100,), dtype='int32')\r\n> > \r\n> > transformer = TFDistilBertModel.from_pretrained('distilbert-base-uncased', output_hidden_states=True)(input_ids)\r\n> > \r\n> > print(len(transformer)) #2\r\n> > print(len(transformer[1])) #7\r\n> > \r\n> > hidden_states = transformer[1]\r\n> > \r\n> > merged = tf.keras.layers.concatenate(tuple([hidden_states[i] for i in [-4, -3, -2, -1]]))\r\n> > \r\n> > output = tf.keras.layers.Dense(32,activation='relu')(merged)\r\n> > output = tf.keras.layers.Dropout(0.1)(output)\r\n> > \r\n> > output = tf.keras.layers.Dense(1, activation='sigmoid')(output)\r\n> > model = tf.keras.models.Model(inputs = input_ids, outputs = output)\r\n> > model.compile(tf.keras.optimizers.Adam(lr=6e-6), loss='binary_crossentropy', metrics=['accuracy'])\r\n> > return model\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > Is this this correct representation of your PyTorch code in the TensorFlow(except for the difference in additional layers)?\r\n> \r\n> it throwing some errors\r\n\r\n\"merged\" one would have a shape like [None(batch_size), max_seq_len, hidden_size].\r\nin order to follow concatenating the last four layers strategy, you may need to add the code something like \"merged = merged[:, 0, :]\" before the output dense layer.", "Hi,\r\nIn my small project, I got significantly better results by _flattening the last hidden states of all tokens_. I wonder if people have tried it, and what you think of this approach.\r\n\r\nI'm using an auto-regressive model (a.k.a \"decoder only\", or GPT-like), where each token can only pay attention to the past tokens.\r\nThe way the classification head is currently implemented in the huggingface (causal) models I looked at, is to take the hidden state of the last token, for example:\r\nhttps://github.com/huggingface/transformers/blob/849367ccf741d8c58aa88ccfe1d52d8636eaf2b7/src/transformers/models/llama/modeling_llama.py#L770-L771\r\n\r\nor \r\n\r\nhttps://github.com/huggingface/transformers/blob/849367ccf741d8c58aa88ccfe1d52d8636eaf2b7/src/transformers/models/gpt2/modeling_gpt2.py#L1364-L1365\r\nWhat worked best for me, is to flatten the last hidden state of all tokens.\r\nSo: \r\n* The pretrained model returns the last `hidden_states` for all tokens, with shape `(batch_size, seq_length, hidden_size)`.\r\n* I flatten it along the last 2 dimensions (`hidden_states.view(batch_size, seq_lenght*hidden_size)`), which results in one long vector for each batch - with the last hidden states of all the tokens in the sequence concatenated.\r\n* The classification head projects it back to the num_labels: `nn.Linear(seq_lenght*hidden_size, num_labels)`\r\n\r\nThe downside I can see is that the classifier is fixed to a specific sequence length, but this is not a problem in my case.\r\n\r\nWould love any comments about this approach.\r\n\r\nEdit:\r\nI should mention that I'm working with semi-structured data and tokens are not text, but instead coded items in patient's medical history.\r\nMy theory of why this approach works better in my case: the classification task is very different from the pre-training objective, so the pre-training (next token prediction) has no good reason to propagate the relevant context to the last token." ]
1,569
1,682
1,569
NONE
null
## ❓ Questions & Help Why in BertForSequenceClassification do we pass the pooled output to the classifier as below from the source code ```python outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask) pooled_output = outputs[1] pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) ``` but in RobertaForSequenceClassification we do not seem to pass the pooler output? ```python outputs = self.roberta(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask) sequence_output = outputs[0] logits = self.classifier(sequence_output) ``` I thought we would pass the pooled_output in both cases to the classifier?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1328/reactions", "total_count": 21, "+1": 21, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1328/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1327/comments
https://api.github.com/repos/huggingface/transformers/issues/1327/events
https://github.com/huggingface/transformers/pull/1327
497,869,766
MDExOlB1bGxSZXF1ZXN0MzIwOTE4MzI4
1,327
Pytorch/TF2 determinism
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327?src=pr&el=h1) Report\n> Merging [#1327](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327?src=pr&el=desc) into [tf2](https://codecov.io/gh/huggingface/pytorch-transformers/commit/128bdd4c3549e2a1401af87493ff6be467c79c14?src=pr&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## tf2 #1327 +/- ##\n==========================================\n+ Coverage 85.99% 86.01% +0.01% \n==========================================\n Files 79 79 \n Lines 12028 12041 +13 \n==========================================\n+ Hits 10344 10357 +13 \n Misses 1684 1684\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfY29tbW9uX3Rlc3QucHk=) | `74.01% <100%> (+0.4%)` | :arrow_up: |\n| [...orch\\_transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfdGZfY29tbW9uX3Rlc3QucHk=) | `95% <100%> (+0.26%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327?src=pr&el=footer). Last update [128bdd4...1761d20](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1327?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Yes, good!" ]
1,569
1,576
1,569
MEMBER
null
Check to see if the models have the same results when in eval mode (pt) or when training=False (tf)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1327/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1327", "html_url": "https://github.com/huggingface/transformers/pull/1327", "diff_url": "https://github.com/huggingface/transformers/pull/1327.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1327.patch", "merged_at": 1569358198000 }
https://api.github.com/repos/huggingface/transformers/issues/1326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1326/comments
https://api.github.com/repos/huggingface/transformers/issues/1326/events
https://github.com/huggingface/transformers/issues/1326
497,737,187
MDU6SXNzdWU0OTc3MzcxODc=
1,326
RuntimeError: expected scalar type Half but found Float
{ "login": "jroakes", "id": 10191545, "node_id": "MDQ6VXNlcjEwMTkxNTQ1", "avatar_url": "https://avatars.githubusercontent.com/u/10191545?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jroakes", "html_url": "https://github.com/jroakes", "followers_url": "https://api.github.com/users/jroakes/followers", "following_url": "https://api.github.com/users/jroakes/following{/other_user}", "gists_url": "https://api.github.com/users/jroakes/gists{/gist_id}", "starred_url": "https://api.github.com/users/jroakes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jroakes/subscriptions", "organizations_url": "https://api.github.com/users/jroakes/orgs", "repos_url": "https://api.github.com/users/jroakes/repos", "events_url": "https://api.github.com/users/jroakes/events{/privacy}", "received_events_url": "https://api.github.com/users/jroakes/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I've encountered this problem as well", "Seems like an apex error (apex should be converting the tensors to half).\r\nMaybe try to update or reinstall apex following carefully the required step for installation? ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,581
1,581
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): bert-large-uncased Language I am using the model on (English, Chinese....): English The problem arise when using: * [ ] the official example scripts: (give details) * [x] my own modified scripts: (give details) I am using a modified version of the run_lm_finetuning.py and amp at optimization level o1. Level O2 runs without issue. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details) I am finetuning on a dataset of sentences, with a min length of 10 tokens, using padding and the mask_tokens function given by the repo. Input and labels are padded per specs and of type=LongTensor (torch.Size([4, 200]) torch.Size([4, 200])) with batch size of 4. ## To Reproduce Steps to reproduce the behavior: 1. When I run without amp, training works as intended. If I train at amp level O2, training runs as intended. 2. Running with amp level O1 leads to an error: RuntimeError: expected scalar type Half but found Float 3. I have verified that there is no model.eval() and that scaled_loss and clip_grad_norm_ calls are the same as prescribed in the example. 3. Examples of model initialization, train loop, and error are below. <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ## Expected behavior Training to complete without error. <!-- A clear and concise description of what you expected to happen. --> ## Environment * OS: Google Colab * Python version: 3.6 * PyTorch version: 1.2.0 * PyTorch Transformers version (or branch): 1.2.0 * Using GPU ? Y * Distributed of parallel setup ? No * Any other relevant information: ## Additional context <!-- Add any other context about the problem here. --> **Model/Optimizer code:** ``` model = model_class.from_pretrained(args.model_name_or_path, from_tf=bool('.ckpt' in args.model_name_or_path), config=config) model.to(args.device) # Prepare optimizer and schedule (linear warmup and decay) #no_decay = ['bias', 'LayerNorm.weight'] no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight'] optimizer_grouped_parameters = [ {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], 'weight_decay': args.weight_decay}, {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], 'weight_decay': 0.0} ] optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, weight_decay=args.weight_decay) scheduler = WarmupLinearSchedule(optimizer, warmup_steps=args.warmup_steps, t_total=t_total) if args.fp16: model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level) ``` **Import Part of Train Loop:** ``` for step, batch in enumerate(epoch_iterator): inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else mask_labels(batch, tokenizer, args) inputs = inputs.to(args.device) labels = labels.to(args.device) model.train() loss, _ = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) if args.n_gpu > 1: loss = loss.mean() # mean() to average on multi-gpu parallel training if args.gradient_accumulation_steps > 1: loss = loss / args.gradient_accumulation_steps if args.fp16: with amp.scale_loss(loss, optimizer) as scaled_loss: scaled_loss.backward() else: loss.backward() tr_loss += loss.item() if (step + 1) % args.gradient_accumulation_steps == 0: print('Clipping Grad Norm. 
fp16:', args.fp16) if args.fp16: torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm) else: torch.nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm) scheduler.step() # Update learning rate schedule optimizer.step() optimizer.zero_grad() global_step += 1 ``` **Error:** ``` RuntimeError Traceback (most recent call last) <ipython-input-7-627529edcd83> in <module>() 10 ft_data.fp16_opt_level = 'O1' 11 ---> 12 run_finetune(ft_data) 13 14 print('Finetuned BERT model loaded.') 12 frames <ipython-input-6-39df7e7a4180> in run_finetune(args) 337 torch.distributed.barrier() 338 --> 339 global_step, tr_loss = train(args, train_dataset, model, tokenizer) 340 logger.info(" global_step = %s, average loss = %s", global_step, tr_loss) 341 <ipython-input-6-39df7e7a4180> in train(args, train_dataset, model, tokenizer) 193 model.train() 194 --> 195 loss, _ = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels) 196 197 if args.n_gpu > 1: /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/pytorch_transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, masked_lm_labels) 767 768 sequence_output = outputs[0] --> 769 prediction_scores = self.cls(sequence_output) 770 771 outputs = (prediction_scores,) + outputs[2:] # Add hidden states and attention if they are here /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/pytorch_transformers/modeling_bert.py in forward(self, sequence_output) 417 418 def forward(self, sequence_output): --> 419 prediction_scores = self.predictions(sequence_output) 420 return prediction_scores 421 /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/pytorch_transformers/modeling_bert.py in forward(self, hidden_states) 406 407 def forward(self, hidden_states): --> 408 hidden_states = self.transform(hidden_states) 409 hidden_states = self.decoder(hidden_states) + self.bias 410 return hidden_states /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/pytorch_transformers/modeling_bert.py in forward(self, hidden_states) 388 hidden_states = self.dense(hidden_states) 389 hidden_states = self.transform_act_fn(hidden_states) --> 390 hidden_states = self.LayerNorm(hidden_states) 391 return hidden_states 392 /usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 545 result = 
self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) /usr/local/lib/python3.6/dist-packages/apex/normalization/fused_layer_norm.py in forward(self, input) 157 if self.elementwise_affine: 158 return FusedLayerNormAffineFunction.apply( --> 159 input, self.weight, self.bias, self.normalized_shape,self.eps) 160 else: 161 return FusedLayerNormFunction.apply(input, self.normalized_shape, self.eps) /usr/local/lib/python3.6/dist-packages/apex/normalization/fused_layer_norm.py in forward(ctx, input, weight, bias, normalized_shape, eps) 23 bias_ = bias.contiguous() 24 output, mean, invvar = fused_layer_norm_cuda.forward_affine( ---> 25 input_, ctx.normalized_shape, weight_, bias_, ctx.eps) 26 ctx.save_for_backward(input_, weight_, bias_, mean, invvar) 27 return output RuntimeError: expected scalar type Half but found Float (data<c10::Half> at /usr/local/lib/python3.6/dist-packages/torch/include/ATen/core/TensorMethods.h:1821) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f0840be9273 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so) frame #1: c10::Half* at::Tensor::data<c10::Half>() const + 0x3ee (0x7f08298ccf8e in /usr/local/lib/python3.6/dist-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) frame #2: cuda_layer_norm(at::Tensor*, at::Tensor*, at::Tensor*, at::Tensor*, int, int, c10::ArrayRef<long>, at::Tensor*, at::Tensor*, double) + 0x4c5 (0x7f08298ca745 in /usr/local/lib/python3.6/dist-packages/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so) ``` There are a total of 63 frames which output in the error that have been truncated here.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1326/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1325/comments
https://api.github.com/repos/huggingface/transformers/issues/1325/events
https://github.com/huggingface/transformers/pull/1325
497,700,800
MDExOlB1bGxSZXF1ZXN0MzIwNzc5NzI1
1,325
[Proposal] GLUE processors included in library
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325?src=pr&el=h1) Report\n> Merging [#1325](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325?src=pr&el=desc) into [glue-example](https://codecov.io/gh/huggingface/pytorch-transformers/commit/a6981076eca5494b9d230f13217c14b93443888a?src=pr&el=desc) will **decrease** coverage by `1.63%`.\n> The diff coverage is `34.48%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## glue-example #1325 +/- ##\n================================================\n- Coverage 81.07% 79.44% -1.64% \n================================================\n Files 57 62 +5 \n Lines 8207 8489 +282 \n================================================\n+ Hits 6654 6744 +90 \n- Misses 1553 1745 +192\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/data/processors/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZGF0YS9wcm9jZXNzb3JzL19faW5pdF9fLnB5) | `100% <100%> (ø)` | |\n| [pytorch\\_transformers/data/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZGF0YS9fX2luaXRfXy5weQ==) | `100% <100%> (ø)` | |\n| [pytorch\\_transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZGF0YS9wcm9jZXNzb3JzL2dsdWUucHk=) | `27.45% <15.78%> (ø)` | |\n| [pytorch\\_transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZGF0YS9tZXRyaWNzL19faW5pdF9fLnB5) | `34.88% <34.88%> (ø)` | |\n| [pytorch\\_transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvZGF0YS9wcm9jZXNzb3JzL3V0aWxzLnB5) | `42.85% <42.85%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325?src=pr&el=footer). Last update [a698107...789ea72](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1325?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n" ]
1,569
1,578
1,569
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1325", "html_url": "https://github.com/huggingface/transformers/pull/1325", "diff_url": "https://github.com/huggingface/transformers/pull/1325.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1325.patch", "merged_at": 1569354011000 }
https://api.github.com/repos/huggingface/transformers/issues/1324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1324/comments
https://api.github.com/repos/huggingface/transformers/issues/1324/events
https://github.com/huggingface/transformers/issues/1324
497,644,703
MDU6SXNzdWU0OTc2NDQ3MDM=
1,324
A Micro BERT
{ "login": "aditya-malte", "id": 20294625, "node_id": "MDQ6VXNlcjIwMjk0NjI1", "avatar_url": "https://avatars.githubusercontent.com/u/20294625?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aditya-malte", "html_url": "https://github.com/aditya-malte", "followers_url": "https://api.github.com/users/aditya-malte/followers", "following_url": "https://api.github.com/users/aditya-malte/following{/other_user}", "gists_url": "https://api.github.com/users/aditya-malte/gists{/gist_id}", "starred_url": "https://api.github.com/users/aditya-malte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aditya-malte/subscriptions", "organizations_url": "https://api.github.com/users/aditya-malte/orgs", "repos_url": "https://api.github.com/users/aditya-malte/repos", "events_url": "https://api.github.com/users/aditya-malte/events{/privacy}", "received_events_url": "https://api.github.com/users/aditya-malte/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I am using a much smaller dataset with my project, but it doesn't mean I need a bert with lesser layers. Otherwise, I have no way to utilize the pre-trained model.\r\n\r\nWhat is the problem you have with the smaller dataset?", "My dataset is very esoteric, in the sense that BERTs pretrained weights will almost be like noise.", "YOU NEED ALBERT", "Einstein?", "They are referring to this new [ALBERT paper](https://old.reddit.com/r/MachineLearning/comments/d9tdfo/albert_a_lite_bert_for_selfsupervised_learning_of/). No weights are available however so give it a few months.\r\n\r\nDefinitely try fine-tuning a pre-trained BERT first, you can also just edit the BertConfig class to get a smaller network, but you probably can't train it from scratch on a small amount of data.", "Interesting. Can't I train a very small BERT as you said(maybe 2 layers) on like 4million tokens", "I'm not sure what the minimum tokens and layers are, I'm not sure anyone has published that. Best to try it out.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## ❓ Questions & Help Hello, Has anyone solved a problem like this, or knows of a solution: I want to pre-train BERT on a custom dataset, but this data is much smaller than the one used by Google. So is it possible to train it on a "micro" bert with much lesser layers, etc. Thanks in advance
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1324/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1324/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1323/comments
https://api.github.com/repos/huggingface/transformers/issues/1323/events
https://github.com/huggingface/transformers/issues/1323
497,572,484
MDU6SXNzdWU0OTc1NzI0ODQ=
1,323
How to build a Text-to-Feature Extractor based on Fine-Tuned BERT Model
{ "login": "pvester", "id": 45792866, "node_id": "MDQ6VXNlcjQ1NzkyODY2", "avatar_url": "https://avatars.githubusercontent.com/u/45792866?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvester", "html_url": "https://github.com/pvester", "followers_url": "https://api.github.com/users/pvester/followers", "following_url": "https://api.github.com/users/pvester/following{/other_user}", "gists_url": "https://api.github.com/users/pvester/gists{/gist_id}", "starred_url": "https://api.github.com/users/pvester/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvester/subscriptions", "organizations_url": "https://api.github.com/users/pvester/orgs", "repos_url": "https://api.github.com/users/pvester/repos", "events_url": "https://api.github.com/users/pvester/events{/privacy}", "received_events_url": "https://api.github.com/users/pvester/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The explanation for fine-tuning is in the README https://github.com/huggingface/pytorch-transformers#quick-tour-of-the-fine-tuningusage-scripts.", "Thanks, but as far as i understands its about \"Fine-tuning on GLUE tasks for **sequence classification**\". I want to do \"Fine-tuning on My Data for **word-to-features extraction**\". I am not interested in building a classifier, just a fine-tuned word-to-features extraction. I am not sure how to get there, from the GLUE example?? I need to somehow do the fine-tuning and then find a way to extract the output from e.g. the last four layers in evalution mode for each sentence i want to extract features from. But how to do that?\r\n", "You can only fine-tune a model if you have a task, of course, otherwise the model doesn't know whether it is improving over some baseline or not. Since 'feature extraction', as you put it, doesn't come with a predefined correct result, that doesn't make since. In your case it might be better to fine-tune the masked LM on your dataset. https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py#L713 ", "But wouldnt it be possible to proceed like thus:\r\n\r\n1) fine-tune the BERT model on my labelled data by adding a layer with two nodes (for 0 and 1) [ALREADY DONE]\r\n2) Run all my data/sentences through the fine-tuned model in evalution, and use the output of the last layers (before the classification layer) as the word-embeddings instead of the predictons? Then I can use that feature vector in my further analysis of my problem and I have created a feature extractor fine-tuned on my data.\r\n\r\nWhat do you think of that approach?\r\n\r\n\r\n\r\n", "But what do you wish to use these word representations for? It's a bit odd using word representations from deep learning as features in other kinds of systems.\r\n\r\nBut, yes, what you say is theoretically possible. But take into account that those are *not* word embeddings what you are extracting. They are the final *task specific* representation of words. In other words, if you finetune the model on another task, you'll get other word representations.", "The idea is that I have several columns in my dataset. Most of them have numerical values and then I have ONE text column. The idea is to extract features from the text, so I can represent the text fields as numerical values.\r\n\r\nNow that all my columns have numerical values (after feature extraction) I can use e.g. a neural network or random forest algorithm to do the predictions based on both the text column and the other columns with numerical values\r\n\r\nBy the way, do you know - after I fine-tune the model - how do I get the output from the last four layers in evalution mode? My model is BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=2) but i can only figure out how to get the final predictions (model.eval() -> predictions = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask), not the output from all the layers...\r\n\r\n", "If I were you, I would just extend BERT and add the features there, so that everything is optimised in one go. That will give you the cleanest pipeline and most reproducible. But of course you can do what you want. I also once tried Sent2Vec as features in SVR and that worked pretty well. So what I'm saying is, it might _work_ but the pipeline might get messy. So make sure that your code is well structured and easy to follow along. 
The more broken up your pipeline, the easier it is for errors the sneak in.\r\n\r\nI advise you to read through the whole BERT process. Especially its config counterpart. Down the line you'll find that there's this option that can be used:\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/configuration_utils.py#L55\r\n\r\nWhen you enable `output_hidden_states` all layers' final states will be returned. \r\n\r\n```python\r\nbert = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\nout = bert.(input_ids=input_ids, attention_mask=attention_mask\r\n# out is a tuple, the hidden states are the third element (cf. source code)\r\nhidden_states = out[2]\r\n```", "Thanks alot! Now my only problem is that, when I do:\r\nmodel = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=2, output_hidden_states=True)\r\n\r\nI get: \r\nTypeError: __init__() got an unexpected keyword argument 'output_hidden_states'", "@pvester what version of pytorch-transformers are you using? I'm on 1.2.0 and it seems to be working with output_hidden_states = True.", "@cformosa I am using 1.2.0\r\n\r\nThis is the full output\r\n\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-39-06d5140bbc0a> in <module>()\r\n 1 \r\n----> 2 model = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=2, output_hidden_states=True)\r\n 3 model.cuda()\r\n 4 \r\n\r\n/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)\r\n 598 logger.info(\"Model config {}\".format(config))\r\n 599 # Instantiate model.\r\n--> 600 model = cls(config, *inputs, **kwargs)\r\n 601 if state_dict is None and not from_tf:\r\n 602 weights_path = os.path.join(serialization_dir, WEIGHTS_NAME)\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'output_hidden_states'", "@pvester perhaps this will help?\r\n[#1073 ](https://github.com/huggingface/pytorch-transformers/issues/1073)", "thanks @cformosa\r\n\r\nI think I got more confused than before. I hope you guys are able to help me making this work. My latest try is:\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-uncased\", output_hidden_states=True)\r\nmodel = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\", num_labels=2, config=config)\r\n\r\nERROR:\r\nAttributeError: type object 'BertConfig' has no attribute 'from_pretrained'", "No, don't do it like that. Your first approach was correct. (You don't need to use config manually when using a pre-trained model.) So\r\n\r\n```python\r\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2, output_hidden_states=True)\r\n```\r\n\r\nis correct. I tested it and it works. I would assume that you are on an older version of pytorch-transformers. Try updating the package to the latest pip release.\r\n\r\nEDIT: I just read the reference by cformosa. Apparently there are different ways. But if they don't work, it might indicate a version issue.", "Are you sure you have a recent version of pytorch_transformers ?\n\n```\nimport pytorch_transformers\npytorch_transformers.__version__\n```\n\nOn Wed, 25 Sep 2019 at 15:47, pvester <[email protected]> wrote:\n\n> I think I got more confused than before. I hope you guys are able to help\n> me making this work. 
My latest try is:\n>\n> config = BertConfig.from_pretrained(\"bert-base-uncased\",\n> output_hidden_states=True)\n> model = BertForSequenceClassification.from_pretrained(\"bert-base-uncased\",\n> num_labels=2, config=config)\n>\n> ERROR:\n> AttributeError: type object 'BertConfig' has no attribute 'from_pretrained'\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-transformers/issues/1323?email_source=notifications&email_token=ABYDIHOSVHXKBF5PTRPEYHDQLNTWBA5CNFSM4IZ5GVFKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD7R64AY#issuecomment-535031299>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHPW7ZATNPB2MYISKVTQLNTWBANCNFSM4IZ5GVFA>\n> .\n>\n", "@BramVanroy, @thomwolf \r\n\r\npytorch_transformers.__version__ gives me \"1.2.0\"\r\n\r\n\r\nEverything works when i do a it **without** output_hidden_states=True\r\n\r\nI do a pip install of pytorch-transformers right before, with the output\r\nRequirement already satisfied: pytorch-transformers in /usr/local/lib/python3.6/dist-packages (1.2.0)\r\nRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (2.21.0)\r\nRequirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (4.28.1)\r\nRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (2019.8.19)\r\nRequirement already satisfied: torch>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (1.1.0)\r\nRequirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (0.0.34)\r\nRequirement already satisfied: sentencepiece in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (0.1.83)\r\nRequirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (1.9.224)\r\nRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from pytorch-transformers) (1.16.5)\r\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch-transformers) (2019.6.16)\r\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch-transformers) (3.0.4)\r\nRequirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch-transformers) (2.8)\r\nRequirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->pytorch-transformers) (1.24.3)\r\nRequirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->pytorch-transformers) (1.12.0)\r\nRequirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->pytorch-transformers) (7.0)\r\nRequirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->pytorch-transformers) (0.13.2)\r\nRequirement already satisfied: botocore<1.13.0,>=1.12.224 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch-transformers) (1.12.224)\r\nRequirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch-transformers) (0.2.1)\r\nRequirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3->pytorch-transformers) 
(0.9.4)\r\nRequirement already satisfied: python-dateutil<3.0.0,>=2.1; python_version >= \"2.7\" in /usr/local/lib/python3.6/dist-packages (from botocore<1.13.0,>=1.12.224->boto3->pytorch-transformers) (2.5.3)\r\nRequirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.13.0,>=1.12.224->boto3->pytorch-transformers) (0.15.2)", "I tried with two different python setups now and always the same error:\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'output_hidden_states'\r\n\r\nI can upload a Google Colab notesbook, if it helps to find the error??", "You're sure that you are passing in the keyword argument *after* the 'bert-base-uncased' argument, right? Yes, you can try a Colab.", "@BramVanroy\r\n\r\nOkat thanks, the Colab link is here:\r\n\r\nhttps://colab.research.google.com/drive/1tIFeHITri6Au8jb4c64XyVH7DhyEOeMU\r\n\r\nscroll down to the end for the error message", "You're loading it from the old pytorch_pretrained_bert, not from the new pytorch_transformers. Why are you importing `pytorch_pretrained_bert` in the first place? Using both at the same time will definitely lead to mistakes or at least confusion. Stick to one.\r\n\r\nThis line\r\n\r\n```python\r\nfrom pytorch_pretrained_bert import BertAdam, BertForSequenceClassification\r\n```\r\n\r\nshould be\r\n\r\n```python\r\nfrom pytorch_transformers import BertAdam, BertForSequenceClassification\r\n```", "@BramVanroy \r\n\r\nNow i get \r\n\r\n\r\nImportError: cannot import name 'BertAdam'", "I'm sorry but this is getting annoying. If you'd just _read_, you'd understand what's wrong. In the README it is stated that there have been changes to the optimizers. Now you can use AdamW and it's in optimizer.py. It's not hard to find out why an import goes wrong. Just look through the source code here.", "@BramVanroy @thomwolf @cformosa \r\n\r\nThanks for your help. I now managed to do my task as intended with a quite good performance and I am very happy with the results.\r\n\r\nThank to all of you for your valuable help and patience\r\n\r\nI am sorry I did not understand everything in the documentation right away - it has been a learning experience for as well for me :) I now feel more at ease with these packages and manipulating an existing neural network.", "No worries. Just remember that reading the documentation and particularly the source code will help you a lot. Not only for your current problem, but also for better understanding the bigger picture. \r\n\r\nGlad that your results are as good as you expected.", "I'm trying to extract the features from FlaubertForSequenceClassification. My concern is the huge size of embeddings being extracted. Is there any work you can point me to which involves compressing the embeddings/features extracted from the model.\r\nThanks in advance! ", "> I'm trying to extract the features from FlaubertForSequenceClassification. My concern is the huge size of embeddings being extracted. Is there any work you can point me to which involves compressing the embeddings/features extracted from the model.\r\n> Thanks in advance!\r\n\r\nYou can use pooling for this. Typically average or maxpooling. You'll find a lot of info if you google it.", "> If I were you, I would just extend BERT and add the features there, so that everything is optimised in one go. That will give you the cleanest pipeline and most reproducible. But of course you can do what you want. I also once tried Sent2Vec as features in SVR and that worked pretty well. 
So what I'm saying is, it might _work_ but the pipeline might get messy. So make sure that your code is well structured and easy to follow along. The more broken up your pipeline, the easier it is for errors the sneak in.\r\n> \r\n> I advise you to read through the whole BERT process. Especially its config counterpart. Down the line you'll find that there's this option that can be used:\r\n> \r\n> https://github.com/huggingface/pytorch-transformers/blob/7c0f2d0a6a8937063bb310fceb56ac57ce53811b/pytorch_transformers/configuration_utils.py#L55\r\n> \r\n> When you enable `output_hidden_states` all layers' final states will be returned.\r\n> \r\n> ```python\r\n> bert = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\n> out = bert.(input_ids=input_ids, attention_mask=attention_mask\r\n> # out is a tuple, the hidden states are the third element (cf. source code)\r\n> hidden_states = out[2]\r\n> ```\r\n\r\nHi @BramVanroy , I'm relatively new to neural network and I'm using 🤗transformer to fine-tune a BERT for my research thesis. \r\n\r\nThe major challenge I'm having now happens to be mentioned in your comment here, that's _\"extend BERT and add features\"_. Is it possible to integrate the fine-tuned BERT model into a bigger network? Something like appending some more features in the output layer of BERT then continue forward to the next layer in the bigger network.\r\n\r\nI know it's more of a ML question than a specific question toward this package, but it would be MUCH MUCH appreciated if you can refer some material/blog that explain similar practice. Thanks!", "@BenjiTheC I don't have any blog post to link to, but I wrote a small snippet that could help get you started. You just have to make sure the dimensions are correct for the features that you want to include. For more help you may want to get in touch via [the forum](https://discuss.huggingface.co/). You can tag me there as well.\r\n\r\n```python\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch.nn import GELU\r\nfrom transformers import BertModel\r\n\r\n\r\nclass ExtendedBert(nn.Module):\r\n def __init__(self):\r\n super().__init__()\r\n\r\n self.bert = BertModel.from_pretrained(\"bert-base-cased\")\r\n self.linear = nn.Linear(1024, 1024)\r\n self.act = GELU()\r\n # regression problem: one label\r\n self.classifier = nn.Linear(1024, 1)\r\n\r\n def forward(self, encoded, other_feats):\r\n # get the hidden state of the last layer\r\n last_hidden = self.bert(**encoded)[0]\r\n # concatenate with the other given features\r\n cat = torch.cat([last_hidden, other_feats], dim=-1)\r\n # pass through linear layer\r\n output = self.linear(cat)\r\n # pass through non-linear activation and final classifier layer\r\n return self.classifier(self.act(output))\r\n\r\n```", "> @BenjiTheC I don't have any blog post to link to, but I wrote a small smippet that could help get you started. You just have to make sure the dimensions are correct for the features that you want to include. For more help you may want to get in touch via [the forum](https://discuss.huggingface.co/). 
You can tag me there as well.\r\n> \r\n> ```python\r\n> import torch\r\n> import torch.nn as nn\r\n> from torch.nn import GELU\r\n> from transformers import BertModel\r\n> \r\n> \r\n> class ExtendedBert(nn.Module):\r\n> def __init__(self):\r\n> super().__init__()\r\n> \r\n> self.bert = BertModel.from_pretrained(\"bert-base-cased\")\r\n> self.linear = nn.Linear(1024, 1024)\r\n> self.act = GELU()\r\n> # regression problem: one label\r\n> self.classifier = nn.Linear(1024, 1)\r\n> \r\n> def forward(self, encoded, other_feats):\r\n> # get the hidden state of the last layer\r\n> last_hidden = self.bert(**encoded)[0]\r\n> # concatenate with the other given features\r\n> cat = torch.cat([last_hidden, other_feats], dim=-1)\r\n> # pass through linear layer\r\n> output = self.linear(cat)\r\n> # pass through non-linear activation and final classifier layer\r\n> return self.classifier(self.act(output))\r\n> ```\r\n\r\nThank you so much for such a timely response!\r\nI'm a TF2 user but your snippet definitely point me to the right direction - to concat the last layer's state and new features to forward. One more follow up question though: I saw in the previous discussion, to get the hidden state of the model, you need to set `output_hidden_state` to `True`, do I need this flag to be True to get what I want?", "@BenjiTheC That flag is needed if you want the hidden states of _all_ layers. If you just want the last layer's hidden state (as in my example), then you do not need that flag.", "> @BenjiTheC That flag is needed if you want the hidden states of _all_ layers. If you just want the last layer's hidden state (as in my example), then you do not need that flag.\r\n\r\nThanks so much! Will stay tuned in the forum and continue the discussion there if needed." ]
1,569
1,659
1,569
NONE
null
I have now tried for several days to solve an issue I have... I need to make a feature extractor for a project I am doing, so that I am able to translate a given sentence, e.g. "My hat is blue", into a vector of a given length, e.g. 768. That vector will later be combined with several other values for the final prediction in e.g. a random forest algorithm. My dataset contains a text column + a label column (with 0 and 1 values) + several other columns that are not of interest for this problem. I know how to make that feature extractor using word2vec, GloVe, FastText and pre-trained BERT/ELMo models. That works okay. Now I want to improve the text-to-feature extractor by using a FINE-TUNED BERT model instead of a PRE-TRAINED BERT MODEL. I want to fine-tune the BERT model on my dataset and then use that new BERT model to do the feature extraction. I am NOT INTERESTED in using the BERT model for the predictions themselves! Only for the feature extraction. How can I do that? I think I need run_lm_finetuning.py somehow, but I simply can't figure out how to do it. I could really use some help... P.S. I have already created a binary classifier using the text information to predict the label (0/1), by adding an additional layer. Could I in principle use the output of the previous layers, in evaluation mode, as word embeddings? If I can, then I am not sure how to get that output in evaluation mode.
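A minimal sketch of the extraction step worked out in the comments above, assuming pytorch-transformers 1.2.0. `'bert-base-uncased'` and the example sentence are placeholders; in practice the path of the fine-tuned checkpoint would be passed instead.

```python
# Run the (fine-tuned) classifier in eval mode and pool the last hidden layer
# into a fixed-length feature vector for downstream models (e.g. a random forest).
import torch
from pytorch_transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained(
    'bert-base-uncased', num_labels=2, output_hidden_states=True)
model.eval()

input_ids = torch.tensor([tokenizer.encode("My hat is blue", add_special_tokens=True)])
with torch.no_grad():
    logits, hidden_states = model(input_ids)   # hidden_states: tuple of 13 tensors
features = hidden_states[-1].mean(dim=1)       # shape (1, 768), a sentence-level feature vector
```

Mean pooling over the last layer is just one choice; the thread also mentions taking the last four layers or extending the model so the extra features are trained jointly.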
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1323/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1322/comments
https://api.github.com/repos/huggingface/transformers/issues/1322/events
https://github.com/huggingface/transformers/issues/1322
497,547,516
MDU6SXNzdWU0OTc1NDc1MTY=
1,322
parameter never_split not added in BasicTokenizer's tokenize
{ "login": "jjyunlp", "id": 29971305, "node_id": "MDQ6VXNlcjI5OTcxMzA1", "avatar_url": "https://avatars.githubusercontent.com/u/29971305?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jjyunlp", "html_url": "https://github.com/jjyunlp", "followers_url": "https://api.github.com/users/jjyunlp/followers", "following_url": "https://api.github.com/users/jjyunlp/following{/other_user}", "gists_url": "https://api.github.com/users/jjyunlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/jjyunlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jjyunlp/subscriptions", "organizations_url": "https://api.github.com/users/jjyunlp/orgs", "repos_url": "https://api.github.com/users/jjyunlp/repos", "events_url": "https://api.github.com/users/jjyunlp/events{/privacy}", "received_events_url": "https://api.github.com/users/jjyunlp/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
## 🐛 Bug <!-- Important information --> Model I am using (Bert, XLNet....): Bert Language I am using the model on (English, Chinese....): English The problem arises when using: * [ ] my own modified scripts: I need to add some special tokens that must not be split during tokenizing. My special tokens contain punctuation, like [E1]. These will be split in _run_split_on_punc() if the never_split parameter is omitted from that call. ## To Reproduce Steps to reproduce the behavior: 1. omit the never_split parameter when invoking self._run_split_on_punc(token) <!-- If you have a code sample, error messages, stack traces, please provide it here as well. --> ```python # tokenization_bert.py def tokenize(self, text, never_split=None): .... orig_tokens = whitespace_tokenize(text) split_tokens = [] for token in orig_tokens: if self.do_lower_case and token not in never_split: token = token.lower() token = self._run_strip_accents(token) split_tokens.extend(self._run_split_on_punc(token)) output_tokens = whitespace_tokenize(" ".join(split_tokens)) return output_tokens ``` Changing ```python split_tokens.extend(self._run_split_on_punc(token)) ``` to ```python split_tokens.extend(self._run_split_on_punc(token, never_split)) ``` would solve this problem.
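A small sketch reproducing the behaviour described above. The sentence and the [E1] marker are made up, and the printed outputs are what the reported bug and the proposed one-line fix would give, so treat them as approximate.

```python
# Before the fix, never_split protects "[E1]" from lower-casing but not from
# punctuation splitting, because never_split is not forwarded to _run_split_on_punc.
from pytorch_transformers.tokenization_bert import BasicTokenizer

tokenizer = BasicTokenizer(do_lower_case=True)
print(tokenizer.tokenize("John [E1] works here", never_split=["[E1]"]))
# buggy output : ['john', '[', 'E1', ']', 'works', 'here']
# fixed output : ['john', '[E1]', 'works', 'here']
```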
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1322/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1321/comments
https://api.github.com/repos/huggingface/transformers/issues/1321/events
https://github.com/huggingface/transformers/issues/1321
497,321,954
MDU6SXNzdWU0OTczMjE5NTQ=
1,321
Using pytorch-transformer to reimplement the "Attention is all you need" paper
{ "login": "roholazandie", "id": 7584674, "node_id": "MDQ6VXNlcjc1ODQ2NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roholazandie", "html_url": "https://github.com/roholazandie", "followers_url": "https://api.github.com/users/roholazandie/followers", "following_url": "https://api.github.com/users/roholazandie/following{/other_user}", "gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}", "starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions", "organizations_url": "https://api.github.com/users/roholazandie/orgs", "repos_url": "https://api.github.com/users/roholazandie/repos", "events_url": "https://api.github.com/users/roholazandie/events{/privacy}", "received_events_url": "https://api.github.com/users/roholazandie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, this repository's objective is mainly to host **pretrained** models, not really to build a model from scratch.\r\n\r\nYou could use some of this library's components though, like multi-headed attention, to help you in your endeavor." ]
1,569
1,569
1,569
NONE
null
## ❓ Questions & Help I use this repo for a long time but I realized even though the name is PyTorch transformers I can't find an easy way to re-implement the original paper of "Attention is all you need" with pretrained model. Can someone help me?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1321/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1320/comments
https://api.github.com/repos/huggingface/transformers/issues/1320/events
https://github.com/huggingface/transformers/issues/1320
497,268,694
MDU6SXNzdWU0OTcyNjg2OTQ=
1,320
Why does padding affect the embedding results for XLNet? Pre-padding returns different embeddings than post-padding. Which one should be used?
{ "login": "osmanbaskaya", "id": 222624, "node_id": "MDQ6VXNlcjIyMjYyNA==", "avatar_url": "https://avatars.githubusercontent.com/u/222624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osmanbaskaya", "html_url": "https://github.com/osmanbaskaya", "followers_url": "https://api.github.com/users/osmanbaskaya/followers", "following_url": "https://api.github.com/users/osmanbaskaya/following{/other_user}", "gists_url": "https://api.github.com/users/osmanbaskaya/gists{/gist_id}", "starred_url": "https://api.github.com/users/osmanbaskaya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osmanbaskaya/subscriptions", "organizations_url": "https://api.github.com/users/osmanbaskaya/orgs", "repos_url": "https://api.github.com/users/osmanbaskaya/repos", "events_url": "https://api.github.com/users/osmanbaskaya/events{/privacy}", "received_events_url": "https://api.github.com/users/osmanbaskaya/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I might be wrong, but intuitively I would say that that makes things easier. XLNet expects single sequences that look like this `tok1 tok2 ... SEP CLS`. So in contrast with BERT, the classification token is at the end of a sentence rather than beginning. This is before padding. So if you use post-padding, the position of the CLS element can differ for each element in your batch, but if you use pre-padding, then you can access the CLS element by its `-1` index.\r\n\r\nThat's not to say that it's not possible to retrieve the CLS element in XLNet when you've used post-padding. Something like this should work. Find the position (indices) where the input IDs are the classification token, then use those indices to slice the output.\r\n\r\n```python\r\noutput = output[torch.where(input_ids == tokenizer.cls_token_id)]\r\n```\r\n\r\nIf you've used pre-padding, this can be simplified to\r\n\r\n```python\r\noutput = output[:, -1]\r\n```", "Hey @BramVanroy, thanks for the answer. You may be right that pre padding for XLNet makes things easier (i.e., getting `[cls]` token from the last index) but the question I want to be answered is not why we would like \"pre\" padding but why pre padding and post padding gives different answers. Maybe I should change the title further. If you see the notebook I shared, depending on padding you're getting different results.", "Sorry, I fear I can't help with that. I am also wondering when padding is necessary and when it isn't. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Maybe too late for this thread, but any answers to this issue?", "Any answer on this issue?", "Any answer on this issue?" ]
1,569
1,592
1,578
NONE
null
## ❓ Questions & Help Hello, I am confused with different results of XLNet depending on padding. For Bert, padding doesn't affect the outputs, but for XLNet with **pre** padding (which I saw in https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_glue.py#L281), returns very different results for the same sentence, with and without padding. The difference between no padding and post padding for XLNet returns similar results. Why did "pre" padding is used for run_glue.py? Does XLNet expect post padding or pre padding? Is there any document I am missing to clarify those important distinctions? Here is a demonstration of the differences for pre, post padding for BERT and XLNet: https://colab.research.google.com/drive/1PCiU3icdfUB-nrLbrKAhgpePoIcFRlqX Thanks, Osman
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1320/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1319/comments
https://api.github.com/repos/huggingface/transformers/issues/1319/events
https://github.com/huggingface/transformers/issues/1319
497,135,436
MDU6SXNzdWU0OTcxMzU0MzY=
1,319
BertForQuestionAnswering output to predict text
{ "login": "binnz", "id": 31803225, "node_id": "MDQ6VXNlcjMxODAzMjI1", "avatar_url": "https://avatars.githubusercontent.com/u/31803225?v=4", "gravatar_id": "", "url": "https://api.github.com/users/binnz", "html_url": "https://github.com/binnz", "followers_url": "https://api.github.com/users/binnz/followers", "following_url": "https://api.github.com/users/binnz/following{/other_user}", "gists_url": "https://api.github.com/users/binnz/gists{/gist_id}", "starred_url": "https://api.github.com/users/binnz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/binnz/subscriptions", "organizations_url": "https://api.github.com/users/binnz/orgs", "repos_url": "https://api.github.com/users/binnz/repos", "events_url": "https://api.github.com/users/binnz/events{/privacy}", "received_events_url": "https://api.github.com/users/binnz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,569
1,571
1,571
NONE
null
<!-- A clear and concise description of the question. --> In predict mode, the BertForQuestionAnswering model outputs a tuple like the one below; how can I get the text answer from it interactively? ``` tensor([[ 0.4691, 0.3912, -0.3447, 0.9756, 0.7171, 0.3746, 0.5273, 0.3756, 0.2083, 0.4130, 0.2145, 0.1327, 0.7265, 0.4678, 0.6294, 0.3284]]) ```
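A rough sketch of how the two logit tensors can be mapped back to a text span: take the argmax of the start and end logits and decode the tokens in between (the SQuAD example scripts do a more careful n-best search). The checkpoint name, question and context are only placeholders for a SQuAD-fine-tuned model.

```python
# Pick the highest-scoring start and end positions and decode the tokens in between.
import torch
from pytorch_transformers import BertTokenizer, BertForQuestionAnswering

name = 'bert-large-uncased-whole-word-masking-finetuned-squad'
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForQuestionAnswering.from_pretrained(name)
model.eval()

question, context = "What colour is my hat?", "My hat is blue and my shoes are red."
input_ids = tokenizer.encode(question, context, add_special_tokens=True)
with torch.no_grad():
    start_logits, end_logits = model(torch.tensor([input_ids]))  # token_type_ids omitted for brevity

start = int(torch.argmax(start_logits))   # index of the answer's first token
end = int(torch.argmax(end_logits))       # index of the answer's last token
tokens = tokenizer.convert_ids_to_tokens(input_ids[start:end + 1])
print(tokenizer.convert_tokens_to_string(tokens))  # e.g. "blue"
```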
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1319/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1318/comments
https://api.github.com/repos/huggingface/transformers/issues/1318/events
https://github.com/huggingface/transformers/issues/1318
497,029,786
MDU6SXNzdWU0OTcwMjk3ODY=
1,318
A sequence with no special tokens has been passed to the RoBERTa model. This model requires special tokens in order to work. Please specify add_special_tokens=True in your encoding.
{ "login": "gr8Adakron", "id": 16715364, "node_id": "MDQ6VXNlcjE2NzE1MzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16715364?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gr8Adakron", "html_url": "https://github.com/gr8Adakron", "followers_url": "https://api.github.com/users/gr8Adakron/followers", "following_url": "https://api.github.com/users/gr8Adakron/following{/other_user}", "gists_url": "https://api.github.com/users/gr8Adakron/gists{/gist_id}", "starred_url": "https://api.github.com/users/gr8Adakron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gr8Adakron/subscriptions", "organizations_url": "https://api.github.com/users/gr8Adakron/orgs", "repos_url": "https://api.github.com/users/gr8Adakron/repos", "events_url": "https://api.github.com/users/gr8Adakron/events{/privacy}", "received_events_url": "https://api.github.com/users/gr8Adakron/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@yaroslavvb @cynthia @myleott ", "Hi, this error springs when you're passing an input to the model which doesn't have the special tokens it needs (cls token and sep token).\r\n\r\nThe `encode` method accepts the argument `add_special_tokens`, which will take care of adding the special tokens to your sequence.", "I have exactly the same problem, when running on a single GPU it works well, but on the 2-GPUS it got this warning and an index error then caused cuDNN error: CUDNN_STATUS_NOT_INITIALIZED", "I found the problem, when running on multi-gpus, all inputs in forward well divided into n-gpus, for an input tensor with shape (batch, x, y), it will divided into (n-gpus, batch/n-gpus, x, y), if tensor doesn't have the batch dim, then caused this error.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "I have the same issue on just loading the model\r\n\r\n```\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base',add_special_tokens=True)\r\nmodel = TFRobertaForSequenceClassification.from_pretrained('roberta-base')\r\n\r\n```\r\nReturns\r\n```\r\nA sequence with no special tokens has been passed to the RoBERTa model. This model requires special tokens in order to work. Please specify add_special_tokens=True in your encoding.\r\nA sequence with no special tokens has been passed to the RoBERTa model. This model requires special tokens in order to work. Please specify add_special_tokens=True in your encoding.\r\n\r\n```", "Can you try specifying it in `encode()` not in `from_pretrained()`?", "Yes but it surprises me that it throws the warning and i did not pass any data to the model. So far in the code there is nothing to encode.", "TensorFlow models need to be \"built\" by first passing inputs through their layers. This warning occurs then.\r\n\r\nThis warning was removed in the recent versions of transformers." ]
1,569
1,579
1,576
NONE
null
This is my code for RoBERTa:

```
# coding: utf-8

# In[4]:
import pandas as pd
import numpy as np
import json, re
from tqdm import tqdm_notebook
from uuid import uuid4

## Torch Modules
import torch
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader

## PyTorch Transformer
from pytorch_transformers import RobertaModel, RobertaTokenizer
from pytorch_transformers import RobertaForSequenceClassification, RobertaConfig

# In[21]:
import pandas as pd
train_data = pd.read_csv("walmart_input/test/train.csv")
test_data = pd.read_csv("walmart_input/test/test.csv")
dataset = pd.concat([train_data, test_data])
test_data.head()

# In[46]:
total_length = len(dataset)

# In[47]:
label_to_ix = {}
for label in dataset.PT:
    if label not in label_to_ix:
        label_to_ix[label] = len(label_to_ix)
total_pt_count = len(list(set(list(train_data["PT"]))))

# In[48]:
config = RobertaConfig.from_pretrained('roberta-base')
config.num_labels = len(list(set(list(train_data["PT"]))))
print(f"Total length of dataset: {total_length} \n Total PT count: {total_pt_count} \n {config}")

# In[36]:
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForSequenceClassification(config)

def prepare_features(seq_1, max_seq_length=300, zero_pad=False, include_CLS_token=True, include_SEP_token=True):
    ## Tokenize input
    tokens_a = tokenizer.tokenize(seq_1)
    ## Truncate
    if len(tokens_a) > max_seq_length - 2:
        tokens_a = tokens_a[0:(max_seq_length - 2)]
    ## Initialize tokens
    tokens = []
    if include_CLS_token:
        tokens.append(tokenizer.cls_token)
    ## Add tokens and separators
    for token in tokens_a:
        tokens.append(token)
    if include_SEP_token:
        tokens.append(tokenizer.sep_token)
    input_ids = tokenizer.convert_tokens_to_ids(tokens)
    ## Input mask
    input_mask = [1] * len(input_ids)
    ## Zero-pad sequence length
    if zero_pad:
        while len(input_ids) < max_seq_length:
            input_ids.append(0)
            input_mask.append(0)
    return torch.tensor(input_ids).unsqueeze(0), input_mask

# In[38]:
class Intents(Dataset):
    def __init__(self, dataframe):
        self.len = len(dataframe)
        self.data = dataframe

    def __getitem__(self, index):
        title = self.data.title[index]
        label = self.data.PT[index]
        X, _ = prepare_features(title)
        y = label_to_ix[self.data.PT[index]]
        return X, y

    def __len__(self):
        return self.len

print("FULL Dataset: {}".format(dataset.shape))
print("TRAIN Dataset: {}".format(train_data.shape))
print("TEST Dataset: {}".format(test_data.shape))

training_set = Intents(train_data)
testing_set = Intents(test_data)
training_set.__getitem__(0)[0].shape
model(training_set.__getitem__(0)[0])

# In[65]:
## Training params
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    #model = nn.DataParallel(model)

## Training params
device = torch.device("cuda:0,1" if torch.cuda.is_available() else "cpu")
model = model.cuda()
model = nn.DataParallel(model, device_ids=[0, 1], dim=1)
model.to(device)

# Parameters
params = {'batch_size': 1, 'shuffle': True, 'num_workers': 2}
training_loader = DataLoader(training_set, **params)
testing_loader = DataLoader(testing_set, **params)

# In[66]:
loss_function = nn.CrossEntropyLoss()
learning_rate = 1e-02
optimizer = optim.Adam(params=model.parameters(), lr=learning_rate)

## Test forward pass
inp = training_set.__getitem__(0)[0].cuda()
#print(inp)
output = model(inp)[0]
torch.max(output.data, 1)

# In[ ]:
import time
start_time = time.time()
max_epochs = 2
model = model.train()
for epoch in tqdm_notebook(range(max_epochs)):
    print("EPOCH -- {}".format(epoch))
    for i, (sent, label) in enumerate(training_loader):
        optimizer.zero_grad()
        sent = sent.squeeze(0)
        if torch.cuda.is_available():
            sent = sent.cuda()
            label = label.cuda()
        print("CUDA detail:")
        print(sent)
        print(label)
        output = model.forward(sent)[0]
        _, predicted = torch.max(output, 1)
        print(f" - {i}.) {predicted}")
        loss = loss_function(output, label)
        loss.backward()
        optimizer.step()
        if i % 100 == 0:
            correct = 0
            total = 0
            for sent, label in testing_loader:
                sent = sent.squeeze(0)
                if torch.cuda.is_available():
                    sent = sent.cuda()
                    label = label.cuda()
                output = model.forward(sent)[0]
                _, predicted = torch.max(output.data, 1)
                total += label.size(0)
                correct += (predicted.cpu() == label.cpu()).sum()
            accuracy = 100.00 * correct.numpy() / total
            print('Iteration: {}. Loss: {}. Accuracy: {}%'.format(i, loss.item(), accuracy))

timetaken = format(float((time.time() - start_time)), '.3f')
print(timetaken)
torch.save(model.state_dict(), 'roberta_state_dict_on_new_data_MAY2019.pth')
```

Here I am trying to run the code on multiple GPUs (2x P100), but I keep getting this warning:

```
A sequence with no special tokens has been passed to the RoBERTa model. This model requires special tokens in order to work. Please specify add_special_tokens=True in your encoding.
```

I am not sure what is causing this issue, but when I run it on a single GPU (i.e. after removing the DataParallel wrapper), it doesn't give this warning. Any help would be appreciated. Thanks.
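The warning and index errors reported here usually trace back to how `nn.DataParallel` scatters inputs: it splits every input tensor along the chosen dim across the GPUs, so wrapping the model with `dim=1` and feeding `(1, seq_len)` tensors (or squeezing away the batch dim) leaves each replica with a slice that no longer looks like `(batch, seq_len)`. Below is a minimal, hedged sketch of batch-first multi-GPU usage with the default `dim=0`; it assumes pytorch-transformers, at least two visible GPUs, and a made-up label count of 10, and is an illustration of the scatter behaviour rather than the poster's full training script.

```
import torch
import torch.nn as nn
from pytorch_transformers import (RobertaConfig, RobertaTokenizer,
                                  RobertaForSequenceClassification)

config = RobertaConfig.from_pretrained('roberta-base')
config.num_labels = 10  # hypothetical number of classes

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForSequenceClassification(config)

# DataParallel scatters every input tensor along dim 0 (the batch dim) by
# default, so keep inputs shaped (batch, seq_len) and do not squeeze them.
model = nn.DataParallel(model).cuda()

ids = tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)
batch = torch.tensor([ids, ids]).cuda()   # shape (2, seq_len), batch first

logits = model(batch)[0]                  # gathered back to (2, num_labels)
```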
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1318/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1317/comments
https://api.github.com/repos/huggingface/transformers/issues/1317/events
https://github.com/huggingface/transformers/issues/1317
496,950,002
MDU6SXNzdWU0OTY5NTAwMDI=
1,317
BertTokenizer provides wrong encode function for Japanese BERT
{ "login": "khaimaitien", "id": 7542979, "node_id": "MDQ6VXNlcjc1NDI5Nzk=", "avatar_url": "https://avatars.githubusercontent.com/u/7542979?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khaimaitien", "html_url": "https://github.com/khaimaitien", "followers_url": "https://api.github.com/users/khaimaitien/followers", "following_url": "https://api.github.com/users/khaimaitien/following{/other_user}", "gists_url": "https://api.github.com/users/khaimaitien/gists{/gist_id}", "starred_url": "https://api.github.com/users/khaimaitien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khaimaitien/subscriptions", "organizations_url": "https://api.github.com/users/khaimaitien/orgs", "repos_url": "https://api.github.com/users/khaimaitien/repos", "events_url": "https://api.github.com/users/khaimaitien/events{/privacy}", "received_events_url": "https://api.github.com/users/khaimaitien/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I had discovered that this phenomenon is due to function: _run_strip_accents(token) in class: BasicTokenizer. Perhaps, the authors should give an option to choose whether to remove accents or not because in some language such as Japanese, removing accents makes a new word", "Hi, I have trained this Japanese BERT model, and made it public.\r\nPlease set `do_lower_case` option to false so that the function `_run_strip_accents` is disabled." ]
1,569
1,569
1,569
NONE
null
## 🐛 Bug

Model I am using (Bert, XLNet....): BertTokenizer
Language I am using the model on (English, Chinese....): Japanese

I tried to load the tokenizer for BERT from the pretrained [Bert for Japanese](http://nlp.ist.i.kyoto-u.ac.jp/index.php?BERT%E6%97%A5%E6%9C%AC%E8%AA%9EPretrained%E3%83%A2%E3%83%87%E3%83%AB). The tokenizer encodes "が" to the same id as "か", although both appear in the vocab file. Here is example code:

```
from pytorch_transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('vocab.txt')

token_ids = tokenizer.encode('が')
print('token_ids: ', token_ids)
print(tokenizer.decode(token_ids))

token_ids = tokenizer.encode('か')
print('token_ids: ', token_ids)
print(tokenizer.decode(token_ids))
```

The result:

```
token_ids:  [90]
か
token_ids:  [90]
か
```

The vocab.txt file can be downloaded from [here](https://drive.google.com/open?id=1f3k9GcyqEIjjFo8EgqqaOQmiSXxT1hqF). I also found that BertTokenizer conflates 'て' and 'で', and 'ば' and 'は'.
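As the follow-up comments note, the folding of が into か comes from `BasicTokenizer._run_strip_accents`, which in this version only runs when lower-casing is enabled. A small, hedged sketch of the suggested workaround; the vocab path is a placeholder for the downloaded Japanese vocab.txt:

```
from pytorch_transformers import BertTokenizer

# do_lower_case=False keeps BasicTokenizer from lower-casing and stripping
# accents, so dakuten characters such as が are no longer folded into か.
tokenizer = BertTokenizer.from_pretrained('path/to/japanese/vocab.txt',
                                          do_lower_case=False)

print(tokenizer.encode('が'))
print(tokenizer.encode('か'))  # should now map to a different id than が
```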
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1317/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1316/comments
https://api.github.com/repos/huggingface/transformers/issues/1316/events
https://github.com/huggingface/transformers/issues/1316
496,870,631
MDU6SXNzdWU0OTY4NzA2MzE=
1,316
How to predict missing word [MASK] using Robert
{ "login": "Oxi84", "id": 25420033, "node_id": "MDQ6VXNlcjI1NDIwMDMz", "avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Oxi84", "html_url": "https://github.com/Oxi84", "followers_url": "https://api.github.com/users/Oxi84/followers", "following_url": "https://api.github.com/users/Oxi84/following{/other_user}", "gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}", "starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions", "organizations_url": "https://api.github.com/users/Oxi84/orgs", "repos_url": "https://api.github.com/users/Oxi84/repos", "events_url": "https://api.github.com/users/Oxi84/events{/privacy}", "received_events_url": "https://api.github.com/users/Oxi84/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Basically, the problem is that the model is called masked language model, but it does not mask anything.\r\n\r\nIf i want to get token distribution for word \"dog\", but the model sees word dog, because its not masked, so it use the word in prediction. Input should not be \"Hello, my dog is cute\", but something like that \"Hello, my [MASK] is cute\".\r\n\r\nHow do I do this?\r\n\r\nMaybe there is another way to specify that the word is masked by index or something?\r\n\r\n\r\nWhen I put:\r\n\r\n input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0)\r\n top_predicted_words: dog puppy pet pup Dog dogs guy husband kid housedog brother girl friend job cat post\r\n\r\n input_ids = torch.tensor(tokenizer.encode(\"Hello, my wife is cute\")).unsqueeze(0)\r\n top_predicted_words: wife spouse marriage Wife bride husband head house family wives culture throat\r\n\r\n\r\n", "Ok, i got it, I should use <mask> instead of [MASK] and <pad> instead of [PAD].\r\n\r\nI find robera-base to be around 7% faster than bert-base, but a bit less precise.", "> Ok, i got it, I should use instead of [MASK] and instead of [PAD].\r\n> \r\n> I find robera-base to be around 7% faster than bert-base, but a bit less precise.\r\n\r\n@Oxi84 Hi, do you mean should use [PAD] instead of [MASK]? I am having the same problem here. Could you plz share the whole code snippet? Thanks very much." ]
1,569
1,581
1,569
NONE
null
I am reading the docs and I still cannot figure out how to predict a missing word in a sentence using RoBERTa. With BERT this is described at https://huggingface.co/pytorch-transformers/quickstart.html:

    # Mask a token that we will try to predict back with `BertForMaskedLM`
    masked_index = 8
    tokenized_text[masked_index] = '[MASK]'
    assert tokenized_text == ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']

The RoBERTa example is something I do not understand. What is the output of this?

    import torch
    from pytorch_transformers import RobertaTokenizer, RobertaForMaskedLM

    tokenizer = RobertaTokenizer.from_pretrained('roberta-base', cache_dir="/var/software/Models/robert/")
    model = RobertaForMaskedLM.from_pretrained('roberta-base', cache_dir="/var/software/Models/robert/")

    input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
    outputs = model(input_ids, masked_lm_labels=input_ids)
    loss, prediction_scores = outputs[:2]
    print("prediction_scores", prediction_scores, len(prediction_scores))

Output:

    A sequence with no special tokens has been passed to the RoBERTa model. This model requires special tokens in order to work. Please specify add_special_tokens=True in your encoding.
    prediction_scores tensor([[[33.6519, -3.9080, 24.2591, ..., 2.8165, 4.9966, 12.8938],
             [ 5.8086, -4.2237, 16.1383, ..., -1.0431, -0.8348, 3.5343],
             [ 0.3336, -4.1881, 10.7825, ..., 0.7295, 0.9056, 3.7928],
             [ 0.2897, -4.4614, 8.1219, ..., -3.9978, 0.1261, -1.4313],
             [ 3.3684, -4.0727, 10.7862, ..., 1.7704, -2.2975, 3.9174],
             [ 2.0526, -4.9519, 18.1501, ..., -4.2190, -5.0759, 1.4990]]],
           grad_fn=<AddBackward0>) 1
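Putting the replies together: RoBERTa uses `<mask>` rather than `[MASK]`, and the special tokens have to be added when encoding. A hedged sketch of masked-word prediction with pytorch-transformers; the example sentence and the top-k size are arbitrary choices, and `eval()`/`no_grad()` are simply good practice for inference:

```
import torch
from pytorch_transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('roberta-base')
model.eval()

text = "Hello, my %s is cute" % tokenizer.mask_token   # '<mask>' for RoBERTa
input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=True)])

mask_id = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
mask_pos = (input_ids[0] == mask_id).nonzero()[0].item()

with torch.no_grad():
    prediction_scores = model(input_ids)[0]            # (1, seq_len, vocab_size)

_, top_ids = prediction_scores[0, mask_pos].topk(5)
print([tokenizer.decode([i.item()]).strip() for i in top_ids])
```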
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1316/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1315/comments
https://api.github.com/repos/huggingface/transformers/issues/1315/events
https://github.com/huggingface/transformers/pull/1315
496,853,826
MDExOlB1bGxSZXF1ZXN0MzIwMDk4MTM0
1,315
Remove unnecessary use of FusedLayerNorm
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315?src=pr&el=h1) Report\n> Merging [#1315](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/a2d4950f5c909f7bb4ea7c06afa6cdecde7e8750?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1315 +/- ##\n==========================================\n- Coverage 80.77% 80.76% -0.01% \n==========================================\n Files 57 57 \n Lines 8092 8091 -1 \n==========================================\n- Hits 6536 6535 -1 \n Misses 1556 1556\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `88.3% <100%> (-0.03%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315?src=pr&el=footer). Last update [a2d4950...98dd19b](https://codecov.io/gh/huggingface/pytorch-transformers/pull/1315?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Note this change makes the codebase to be compatible with apex amp O1.", "Ok great, thanks @bryant1410!" ]
1,569
1,570
1,569
CONTRIBUTOR
null
Fix #1172
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1315/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1315", "html_url": "https://github.com/huggingface/transformers/pull/1315", "diff_url": "https://github.com/huggingface/transformers/pull/1315.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1315.patch", "merged_at": 1569480603000 }
https://api.github.com/repos/huggingface/transformers/issues/1314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1314/comments
https://api.github.com/repos/huggingface/transformers/issues/1314/events
https://github.com/huggingface/transformers/issues/1314
496,817,334
MDU6SXNzdWU0OTY4MTczMzQ=
1,314
How to preprocess my own data to use RoBERTa of Multiple GPUs
{ "login": "gr8Adakron", "id": 16715364, "node_id": "MDQ6VXNlcjE2NzE1MzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16715364?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gr8Adakron", "html_url": "https://github.com/gr8Adakron", "followers_url": "https://api.github.com/users/gr8Adakron/followers", "following_url": "https://api.github.com/users/gr8Adakron/following{/other_user}", "gists_url": "https://api.github.com/users/gr8Adakron/gists{/gist_id}", "starred_url": "https://api.github.com/users/gr8Adakron/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gr8Adakron/subscriptions", "organizations_url": "https://api.github.com/users/gr8Adakron/orgs", "repos_url": "https://api.github.com/users/gr8Adakron/repos", "events_url": "https://api.github.com/users/gr8Adakron/events{/privacy}", "received_events_url": "https://api.github.com/users/gr8Adakron/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "@spolu @cynthia @myleott ", "Hi, you can follow the `run_glue` example which is better for text classification.\r\nBut you will have to modify it for your needs, it's not plug and play.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
Hey, I am fairly new to deep learning for text classification. My data **(.csv)** consists of basically two columns:

- Text
- Labels

As per the basic objective, the model should take unseen text and predict a label _(variable y)_ from the trained labels.

**I followed this tutorial to train the RoBERTa algorithm:**

- [https://colab.research.google.com/drive/1xg4UMQmXjDik3v9w-dAsk4kq7dXX_0Fm](https://colab.research.google.com/drive/1xg4UMQmXjDik3v9w-dAsk4kq7dXX_0Fm)

Here the input format is universal (Train.tsv and Test.tsv) with the two columns mentioned above. The only problem is that this code doesn't utilize multiple GPUs **(I even tried the DataParallel wrapper)**.

I then found the pytorch-transformers repository, which gives an example of how to utilize multiple GPUs and train the RoBERTa model, i.e.:

- [https://github.com/huggingface/pytorch-transformers/tree/master/examples](https://github.com/huggingface/pytorch-transformers/tree/master/examples)

This repository example takes the input data in the wiki.text format, and they provide the link [ Pretraining RoBERTa using your own data ](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.pretraining.md#pretraining-roberta-using-your-own-data), **yet I don't find it of much use, as it only describes the format they expect and says nothing about converting a standard format (i.e. .CSV) into it.**

Here is the sample:

```
= Valkyria Chronicles III =

Senjō no Valkyria 3 : <unk> Chronicles ( Japanese : 戦場のヴァルキュリア3 , lit . Valkyria of the Battlefield 3 ) , commonly referred to as Valkyria Chronicles III outside Japan , is a tactical role @-@ playing video game developed by Sega and Media.Vision for the PlayStation Portable . Released in January 2011 in Japan , it is the third game in the Valkyria series . <unk> the same fusion of tactical and real @-@ time gameplay as its predecessors , the story runs parallel to the first game and follows the " Nameless " , a penal military unit serving the nation of Gallia during the Second Europan War who perform secret black operations and are pitted against the Imperial unit " <unk> Raven " .
```

Why does this format have to be used? Why can't the standard format of a classification task be supported? And if not, why isn't there documentation on adjusting your own data to what the script expects?

Just wanted to know how to preprocess the data for the multiple-GPU RoBERTa example. Any help would be appreciated, thanks.
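The wiki.text sample above is only needed for pretraining (the fairseq recipe); for plain text classification, the `run_glue.py` example mentioned in the replies is the closer fit, and it reads TSV files through a GLUE-style processor. A hedged sketch of turning a two-column CSV into SST-2-style `train.tsv`/`dev.tsv`; the column names `title`/`PT` are borrowed from the related issue and are assumptions, and a custom `DataProcessor` would still be needed if the label set is not binary:

```
import pandas as pd

train = pd.read_csv("train.csv")   # assumed columns: "title" (text), "PT" (label)
test = pd.read_csv("test.csv")

labels = sorted(train["PT"].unique())
label_to_id = {label: i for i, label in enumerate(labels)}

for df, name in [(train, "train.tsv"), (test, "dev.tsv")]:
    out = pd.DataFrame({"sentence": df["title"],
                        "label": df["PT"].map(label_to_id)})
    out.to_csv(name, sep="\t", index=False)
```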
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1314/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1314/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1313/comments
https://api.github.com/repos/huggingface/transformers/issues/1313/events
https://github.com/huggingface/transformers/pull/1313
496,780,679
MDExOlB1bGxSZXF1ZXN0MzIwMDQ2NDU1
1,313
Add option to use a 'stop token'
{ "login": "enzoampil", "id": 39557688, "node_id": "MDQ6VXNlcjM5NTU3Njg4", "avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enzoampil", "html_url": "https://github.com/enzoampil", "followers_url": "https://api.github.com/users/enzoampil/followers", "following_url": "https://api.github.com/users/enzoampil/following{/other_user}", "gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}", "starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions", "organizations_url": "https://api.github.com/users/enzoampil/orgs", "repos_url": "https://api.github.com/users/enzoampil/repos", "events_url": "https://api.github.com/users/enzoampil/events{/privacy}", "received_events_url": "https://api.github.com/users/enzoampil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1313?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@ecc4f1b`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `n/a`.\n\n[![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/1313/graphs/tree.svg?width=650&token=9qOlN6Hb1c&height=150&src=pr)](https://codecov.io/gh/huggingface/transformers/pull/1313?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1313 +/- ##\n=========================================\n Coverage ? 84.72% \n=========================================\n Files ? 84 \n Lines ? 12591 \n Branches ? 0 \n=========================================\n Hits ? 10668 \n Misses ? 1923 \n Partials ? 0\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1313?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1313?src=pr&el=footer). Last update [ecc4f1b...d3f24df](https://codecov.io/gh/huggingface/transformers/pull/1313?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n", "Great, thanks!" ]
1,569
1,570
1,570
CONTRIBUTOR
null
This adds an option to truncate the generated text to everything up to (but not including) the 'stop token'. If the 'stop token' is not found, the whole text is returned, based on the specified 'length'.
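A minimal sketch of the behaviour described, not necessarily the exact code merged in this PR: generation is cut at the first occurrence of the stop token, otherwise the full decoded text of the requested length is kept.

```
def truncate_at_stop_token(text, stop_token=None):
    # Keep everything before the first stop token; if the stop token is not
    # present (or not given), return the text unchanged.
    if stop_token and stop_token in text:
        return text[: text.index(stop_token)]
    return text

print(truncate_at_stop_token("A short story. <|endoftext|> padding...", "<|endoftext|>"))
# -> "A short story. "
```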
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1313", "html_url": "https://github.com/huggingface/transformers/pull/1313", "diff_url": "https://github.com/huggingface/transformers/pull/1313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1313.patch", "merged_at": 1570142637000 }
https://api.github.com/repos/huggingface/transformers/issues/1312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1312/comments
https://api.github.com/repos/huggingface/transformers/issues/1312/events
https://github.com/huggingface/transformers/issues/1312
496,750,626
MDU6SXNzdWU0OTY3NTA2MjY=
1,312
In BertForSequenceClassification, why is loss initialised in every forward?
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Also it will be nice if the user gets to use the loss_func itself, Like currently i am using that class with slight modifications to match the pipeline with different losses rather than only CrossEntropy loss. (plus add class_weights etc as well to it)\r\n\r\nThough this is what i did actually to use a different loss function, just grab the logits from the model and apply your own..", "You can always subclass the class, to make it your own.\r\n\r\nSome extra information for this issue: in an issue over at pytorch, it came to light that loss functions are actually meant to be imported as functions (from nn.functional) rather than modules (from nn). ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "Unstale. ", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,586
1,586
COLLABORATOR
null
Looking at [the source](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py#L902-L910) I can see that the correct loss function is initialized in each call to forward. https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py#L902-L910 Can you explain why? Why isn't the loss function set up as part of `init()`? Is there any advantage of always re-initialising it on each forward? Edit: I see that you do this in other parts as well, e.g. the ReLU layer in distilbert: https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_distilbert.py#L598
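Related to the subclassing suggestion in the comments above: because the loss module is (re)built inside `forward`, the simplest way to use a different criterion is to ignore the built-in loss, take the logits, and apply your own function. A hedged sketch; the label count and class weights are made up for illustration:

```
import torch
import torch.nn.functional as F
from pytorch_transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=3)
model.train()

input_ids = torch.tensor([tokenizer.encode("a short example", add_special_tokens=True)])
labels = torch.tensor([2])
class_weights = torch.tensor([1.0, 2.0, 0.5])   # hypothetical per-class weights

logits = model(input_ids)[0]   # omit `labels` so the built-in CrossEntropyLoss is skipped
loss = F.cross_entropy(logits, labels, weight=class_weights)
loss.backward()
```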
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1312/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1312/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/1311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1311/comments
https://api.github.com/repos/huggingface/transformers/issues/1311/events
https://github.com/huggingface/transformers/issues/1311
496,743,901
MDU6SXNzdWU0OTY3NDM5MDE=
1,311
RoBERTa : add_special_tokens=True
{ "login": "HongyanJiao", "id": 44488820, "node_id": "MDQ6VXNlcjQ0NDg4ODIw", "avatar_url": "https://avatars.githubusercontent.com/u/44488820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HongyanJiao", "html_url": "https://github.com/HongyanJiao", "followers_url": "https://api.github.com/users/HongyanJiao/followers", "following_url": "https://api.github.com/users/HongyanJiao/following{/other_user}", "gists_url": "https://api.github.com/users/HongyanJiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/HongyanJiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HongyanJiao/subscriptions", "organizations_url": "https://api.github.com/users/HongyanJiao/orgs", "repos_url": "https://api.github.com/users/HongyanJiao/repos", "events_url": "https://api.github.com/users/HongyanJiao/events{/privacy}", "received_events_url": "https://api.github.com/users/HongyanJiao/received_events", "type": "User", "site_admin": false }
[ { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "I'm getting the same warning, also here: https://github.com/huggingface/pytorch-transformers/issues/1318", "I think you should add < s > without spaces before as well as after sentences.", "Please share a self contained script exhibiting the behavior and allthe information on the python/pytorch/pytorch-transformers versions you are using.", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n" ]
1,569
1,575
1,575
NONE
null
I set add_special_tokens=True but I still get: A sequence with no special tokens has been passed to the RoBERTa model. This model requires special tokens in order to work. Please specify add_special_tokens=True in your encoding.
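The usual fix for this warning (also discussed in issue 1318 above) is to pass the flag to the encoding call itself rather than to `from_pretrained`. A small sketch, assuming pytorch-transformers' RoBERTa tokenizer:

```
from pytorch_transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')

# add_special_tokens belongs on encode(), not on from_pretrained():
input_ids = tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)

cls_id = tokenizer.convert_tokens_to_ids(tokenizer.cls_token)   # <s> for RoBERTa
sep_id = tokenizer.convert_tokens_to_ids(tokenizer.sep_token)   # </s>
assert input_ids[0] == cls_id and input_ids[-1] == sep_id
```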
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1311/timeline
completed
null
null