url (stringlengths 62-66) | repository_url (stringclasses, 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64, 377M-2.15B) | node_id (stringlengths 18-32) | number (int64, 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses, 2 values) | locked (bool, 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64, 1.54k-1.71k) | updated_at (int64, 1.54k-1.71k) | closed_at (int64, 1.54k-1.71k, nullable ⌀) | author_association (stringclasses, 4 values) | active_lock_reason (stringclasses, 2 values) | body (stringlengths 0-234k, nullable ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses, 3 values) | draft (bool, 2 classes) | pull_request (dict)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/3612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3612/comments | https://api.github.com/repos/huggingface/transformers/issues/3612/events | https://github.com/huggingface/transformers/issues/3612 | 593,251,872 | MDU6SXNzdWU1OTMyNTE4NzI= | 3,612 | training GPT2 from scratch : implement causal attention mask? | {
"login": "CNelias",
"id": 34754896,
"node_id": "MDQ6VXNlcjM0NzU0ODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/34754896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CNelias",
"html_url": "https://github.com/CNelias",
"followers_url": "https://api.github.com/users/CNelias/followers",
"following_url": "https://api.github.com/users/CNelias/following{/other_user}",
"gists_url": "https://api.github.com/users/CNelias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CNelias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CNelias/subscriptions",
"organizations_url": "https://api.github.com/users/CNelias/orgs",
"repos_url": "https://api.github.com/users/CNelias/repos",
"events_url": "https://api.github.com/users/CNelias/events{/privacy}",
"received_events_url": "https://api.github.com/users/CNelias/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"You'd want to look at the `run_language_modeling.py` script which implements causal language modeling. (do not pass the `--mlm` flag)",
"I'm thinking some edit to run_language_modeling.py script maybe would make it work. I don't think just to not pass the --mlm flag you solve the problem @julien-c. Have you found any solution @johncwok? I'm searching the same thing.",
"@johncwok GPT2 always uses a causal mask. It's quite hidden in the code. This line https://github.com/huggingface/transformers/blob/0a4b1068e1d6c46525082b91a4ba00a09c9270ac/src/transformers/modeling_gpt2.py#L145 creates the causal mask that is then applied to the weights. The naming can definitely be improved here! So no matter what mask you insert it will only be applied in combination with the causal mask.\r\nAlso take a look at this line that creates the mask:\r\nhttps://github.com/huggingface/transformers/blob/0a4b1068e1d6c46525082b91a4ba00a09c9270ac/src/transformers/modeling_gpt2.py#L107\r\n",
"After https://github.com/huggingface/transformers/pull/2715/files is merged, I will do some renaming in the code - seems like a lot of people look for the causal mask in GPT2, CTRL and GPT"
] | 1,585 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
I am trying to train a ```GPT2``` model from scratch, but by looking into the code here https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_gpt2.py I noticed that there doesn't seem to be an implementation for a causal mask. Maybe it is in another repo and I missed it; I also couldn't find resources on this in the docs.
I could write an ugly for loop and feed each of my sequences one token at a time to the network, which would be super inefficient. I could also chop up each of my examples token by token, pad them and feed them like a batch, which is probably faster but doesn't feel very satisfying.
Do you know if there is a standard implementation of a causal mask that I missed, or another way to do what I am describing?
PS: I have already read Hugging Face's blog post on training from scratch, but unfortunately it doesn't say much about the implementation of said training :/. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3612/timeline | completed | null | null |
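As the comments above resolve, GPT-2's causal mask is built into its attention layers, so training from scratch needs neither a token-by-token loop nor a hand-made mask. A minimal sketch of what that looks like, assuming a recent `transformers` version; the small config sizes below are illustrative, not from the issue:

```python
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")     # reuse the vocabulary only
config = GPT2Config(n_layer=2, n_head=2, n_embd=128)  # tiny, illustrative sizes
model = GPT2LMHeadModel(config)                       # randomly initialized weights

enc = tokenizer("some training text", return_tensors="pt")
# Passing labels == input_ids makes the model shift them internally and
# compute the causal LM loss over every position in a single forward pass;
# the causal mask is applied inside the attention layers automatically.
out = model(input_ids=enc["input_ids"], labels=enc["input_ids"])
out.loss.backward()
```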
https://api.github.com/repos/huggingface/transformers/issues/3611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3611/comments | https://api.github.com/repos/huggingface/transformers/issues/3611/events | https://github.com/huggingface/transformers/pull/3611 | 593,222,163 | MDExOlB1bGxSZXF1ZXN0Mzk4MDQzNDE0 | 3,611 | corrected mistake in polish model cards | {
"login": "borhenryk",
"id": 35457598,
"node_id": "MDQ6VXNlcjM1NDU3NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35457598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borhenryk",
"html_url": "https://github.com/borhenryk",
"followers_url": "https://api.github.com/users/borhenryk/followers",
"following_url": "https://api.github.com/users/borhenryk/following{/other_user}",
"gists_url": "https://api.github.com/users/borhenryk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borhenryk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borhenryk/subscriptions",
"organizations_url": "https://api.github.com/users/borhenryk/orgs",
"repos_url": "https://api.github.com/users/borhenryk/repos",
"events_url": "https://api.github.com/users/borhenryk/events{/privacy}",
"received_events_url": "https://api.github.com/users/borhenryk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3611/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3611/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3611",
"html_url": "https://github.com/huggingface/transformers/pull/3611",
"diff_url": "https://github.com/huggingface/transformers/pull/3611.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3611.patch",
"merged_at": 1585919236000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3610/comments | https://api.github.com/repos/huggingface/transformers/issues/3610/events | https://github.com/huggingface/transformers/issues/3610 | 593,203,724 | MDU6SXNzdWU1OTMyMDM3MjQ= | 3,610 | How can I make sure that my transformer model uses only one GPU, though the server has multiple GPU cards? | {
"login": "tiru1930",
"id": 12211287,
"node_id": "MDQ6VXNlcjEyMjExMjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/12211287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tiru1930",
"html_url": "https://github.com/tiru1930",
"followers_url": "https://api.github.com/users/tiru1930/followers",
"following_url": "https://api.github.com/users/tiru1930/following{/other_user}",
"gists_url": "https://api.github.com/users/tiru1930/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tiru1930/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tiru1930/subscriptions",
"organizations_url": "https://api.github.com/users/tiru1930/orgs",
"repos_url": "https://api.github.com/users/tiru1930/repos",
"events_url": "https://api.github.com/users/tiru1930/events{/privacy}",
"received_events_url": "https://api.github.com/users/tiru1930/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should have used the template, because now we don't have enough information to help you: how are you running the script (torch launch utility? Which command?), which script are you using (your own (give details) or one of the example scripts)?\r\n\r\nBy default, PyTorch will only use one GPU unless you specify it to go DDP.",
"Use tags please. Read through this guide. https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks",
"@BramVanroy it was my bad, I am in bit hurry so that I was not able to provide my code path. I have just pushed my code to github. Please check below for path \r\n[https://github.com/tiru1930/bert_intent_classification](url)\r\n\r\nIn this PATH I use src/Train.py to train the model.\r\n\r\n",
"As you see, you are just wasting your own time and mine when you are _in a hurry_. In the future, take your time to write a good starting posts so that we _want_ to help you and _can_ help you quickly.\r\n\r\nIn your code, you are calling DataParallel on your mode, which will automatically run your model over multiple GPU's (but under a single process). Remove this line.\r\n\r\nhttps://github.com/tiru1930/bert_intent_classification/blob/master/src/train.py#L80"
] | 1,585 | 1,586 | 1,586 | NONE | null | I have a transformer BERT model and I am trying to train it on a Lambda server which has 8 GPU cards. How can I make sure that this model uses only one GPU out of 8? By default, it is using all GPUs, even after setting
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3610/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3610/timeline | completed | null | null |
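The resolution above is to drop the `DataParallel` wrapper. A minimal sketch of pinning a model to a single GPU, assuming a standard PyTorch setup (the checkpoint name is just an example):

```python
import os
# Hide every GPU except one; this must run before CUDA is initialized.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
from transformers import BertForSequenceClassification

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.to(device)  # no torch.nn.DataParallel(model) wrapper, so one GPU is used
```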
https://api.github.com/repos/huggingface/transformers/issues/3609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3609/comments | https://api.github.com/repos/huggingface/transformers/issues/3609/events | https://github.com/huggingface/transformers/issues/3609 | 593,198,219 | MDU6SXNzdWU1OTMxOTgyMTk= | 3,609 | Filling more than 1 masked token at a time | {
"login": "p-christ",
"id": 26346243,
"node_id": "MDQ6VXNlcjI2MzQ2MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/26346243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-christ",
"html_url": "https://github.com/p-christ",
"followers_url": "https://api.github.com/users/p-christ/followers",
"following_url": "https://api.github.com/users/p-christ/following{/other_user}",
"gists_url": "https://api.github.com/users/p-christ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-christ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-christ/subscriptions",
"organizations_url": "https://api.github.com/users/p-christ/orgs",
"repos_url": "https://api.github.com/users/p-christ/repos",
"events_url": "https://api.github.com/users/p-christ/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-christ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, this is not supported right now. We'd welcome a PR though :)",
"Before somebody starts on a PR, we need to consider what exactly this should do.\r\n\r\nFor `top_k = 1`, most users probably expect a single forward pass and picking the top prediction for each token. For greater `top_k`, however, picking the k-best prediction at each mask position has increasingly high risk of yielding an inconsistent sequence. A beam search over all possible sequences with some overall objective and returning the overall `top_k` best sequences will be more desirable, but also more work to implement.\r\n\r\nA naive objective could simply multiply the probabilities of each candidate replacement obtained from a single forward pass. However, these probabilities are not conditional on the specific choice for the other mask positions. What exactly these probabilities are when there is more than 1 mask token is not clear to me but I think a reasonable assumption is that the network produces some kind of weighted average of all the probability distributions one would get if one fixes the other mask tokens and makes a forward pass with just one mask token.\r\n\r\nTherefore, I think one must make multiple forward passes to get the probability of each decision step in the gap filling process. It is not clear though in what order to make decisions. Even in the simplest case of continuous mask positions we could proceed left-to-right, right-to-left, from both sides simultaneously, start in the middle or in some other way. The order could also be influenced by the probabilities, e.g. condensating the most confidently predicted token first.\r\n\r\nIt may also be desirable to have a [MASK*] that is expanded to multiple tokens as needed. Then, one may want to have a brevity penalty or normalise by length as otherwise the model will prefer short answers as their probability is higher. One may also want to have a callback to filter candidate substitutions, e.g. for a cloze test one may want to check that the sequence does not start with '##' and that it detokenises to a single word of the target language.",
"Please see the following issue https://github.com/huggingface/transformers/issues/10158 and PR https://github.com/huggingface/transformers/pull/10222 for an attempt to take a crack at this",
"@jowagner Has made some very valid points. In fact, these are the same concerns I have had previously with how multiple mask filling even works when done simultaneously. However, there are some issues with all of the approaches and I am not quite sure yet as to how it could be resolved.\r\n\r\nTake for example you have 3 mask positions and we follow the method that gives preference first to the most confidently predicted token. There is an intrinsic issue as to what the most confident token would even mean here in the first place given that the other 2 masks are still empty and not filled. My point being, the probability of which word needs to be filled in a particular slot is not necessarily indicative of whether that SHOULD be the first one to be filled. \r\n\r\nDo have a look at https://arxiv.org/abs/2002.03079 's work on Blank Language Model. Most of the valuable suggestions that you provided here start spilling into this paper's realm. \r\n\r\nI would be very happy to discuss further about this with you Joachim",
"Hi, I've implemented right to left, left to right, and random mask filling in PyTorch for top k ids that the model thinks are the most probable tokens in a sentence in one of my projects. In this implementation, each time we want to generate a mask, the model looks at the previously generated sentences and decides what is the most probable for the next masked position. So if we have 2 masks in a sentence, by setting top_k=5, we'll have 25 sentences (5 tokens for the first position, and for each of these 5 sentences with one mask we have another 5 tokens for the second mask). It'll output something like this:(I used Persian models for this. I hope you can see how the masks are being filled)\r\n\r\nThen in the next step, we implemented a beam search to choose the most probable sequence of all between all these sentences.\r\n\r\nI'd be glad to help HuggingFace on this issue, I can send my code or send a pull request.\r\n\r\n",
"The idea in https://github.com/huggingface/transformers/pull/10222/commits/80a113641a49c73f7680289219096ee5cf7ca620#r605659735 may point to how one can combine left and right direction or even average over all possible sequences of crystallisation.",
"Hi, This is the function for different orders of prediction. I hope it helps. \r\nAlso, In the beam search section, we constructed a dictionary of bi tri and four grams in a specific corpus related to our work and scored predictions based on those. I won't include this extensive part here but tell me if it can be useful.\r\n\r\n```\r\ndef predict_seqs_dict(sequence, model, tokenizer, top_k=5, order='right-to-left'):\r\n\r\n\r\n ids_main = tokenizer.encode(sequence,\r\n return_tensors=\"pt\",\r\n add_special_tokens=False)\r\n\r\n ids_ = ids_main.detach().clone()\r\n position = torch.where(ids_main == tokenizer.mask_token_id)\r\n\r\n positions_list = position[1].numpy().tolist()\r\n\r\n if order =='left-to-right':\r\n positions_list.reverse()\r\n\r\n elif order=='random':\r\n random.shuffle(positions_list)\r\n\r\n # print(positions_list)\r\n predictions_ids = {}\r\n predictions_detokenized_sents = {}\r\n\r\n for i in range(len(positions_list)):\r\n predictions_ids[i] = []\r\n predictions_detokenized_sents[i] = []\r\n\r\n \r\n # if it was the first prediction, \r\n # just go on and predict the first predictions\r\n \r\n\r\n if i==0:\r\n model_logits = model(ids_main)['logits'][0][positions_list[0]]\r\n top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist()\r\n \r\n for j in range(len(top_k_tokens)):\r\n #print(j)\r\n ids_t_ = ids_.detach().clone()\r\n ids_t_[0][positions_list[0]] = top_k_tokens[j]\r\n predictions_ids[i].append(ids_t_)\r\n \r\n pred = tokenizer.decode(ids_t_[0])\r\n predictions_detokenized_sents[i].append(pred)\r\n\r\n # append the sentences and ids of this masked token\r\n\r\n\r\n # if we already have some predictions, go on and fill the rest of the masks\r\n # by continuing the previous predictions\r\n if i!=0:\r\n for pred_ids in predictions_ids[i-1]:\r\n \r\n # get the logits\r\n model_logits = model(pred_ids)['logits'][0][positions_list[i]]\r\n # get the top 5 of this prediction and masked token\r\n top_k_tokens = torch.topk(model_logits, top_k, dim=0).indices.tolist()\r\n\r\n for top_id in top_k_tokens:\r\n \r\n ids_t_i = pred_ids.detach().clone()\r\n ids_t_i[0][positions_list[i]] = top_id\r\n\r\n pred = tokenizer.decode(ids_t_i[0])\r\n\r\n # append the sentences and ids of this masked token\r\n\r\n predictions_ids[i].append(ids_t_i)\r\n predictions_detokenized_sents[i].append(pred)\r\n \r\n return predictions_detokenized_sents\r\n \r\n```\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"While an external scoring model may produce higher quality results, such an approach would move quite far away from letting the BERT model make the predictions. For example, consider a users who is evaluating the quality of a BERT model using a cloze test. They don't want issues of the BERT model to be smoothed / repaired by the external scoring model.\r\n\r\nFor finding the most confidently predicted token, I don't see why the fact that 3 or more masks may include a mask that has only masked neighbours is a problem. What we need is a measure of confidence that can be derived from the class probability distribution of the MLM head (its softmax layer). BERT gives us a class probability distribution for each masked token. The most confident token is then simply the one for which the confidence measure gives the greatest value.\r\n\r\nI didn't yet find time to read https://arxiv.org/abs/2002.03079 ",
"@jowagner Just to reconfirm, your proposition was to fill the slots not in an arbitrary left to right or right to left fashion, but to fill the one that has the highest value in the softmax layer and then utilize that while regenerating clozes for the rest of the masks, correct?\r\n\r\n The high confidence for the position could be by virtue of there not being any other better suitable candidates for that position rather than being an indicator that the model is most confident about that prediction (for us to be filling that prediction first before using that as the seed to move on and fill the rest in a similar fashion). Right? "
] | 1,585 | 1,625 | 1,625 | NONE | null | I am able to use Hugging Face's mask-filling pipeline to predict 1 masked token in a sentence using the below:
```
!pip install -q transformers
from __future__ import print_function
import ipywidgets as widgets
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill("I am going to guess <mask> in this sentence")
```
But does anyone have an opinion on what is the best way to do this if I want to predict 2 masked tokens? e.g. if the sentence is instead `"I am going to <mask> <mask> in this sentence"`?
If I try to put this exact sentence into nlp_fill I get the error "ValueError: only one element tensors can be converted to Python scalars", so it doesn't work automatically.
Any help would be much appreciated!
Stack overflow question [link](https://stackoverflow.com/questions/60990897/best-way-of-using-hugging-faces-mask-filling-for-more-than-1-masked-token-at-a) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3609/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3609/timeline | completed | null | null |
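For reference, one of the orderings discussed above (left-to-right, greedy, one forward pass per mask) can be sketched in a few lines. This assumes a recent `transformers` version, and the greedy top-1 choice is a simplification standing in for the beam search the thread recommends:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

ids = tokenizer("I am going to <mask> <mask> in this sentence",
                return_tensors="pt").input_ids
# Fill the leftmost remaining mask, re-run the model, repeat until done.
with torch.no_grad():
    while (ids == tokenizer.mask_token_id).any():
        pos = (ids == tokenizer.mask_token_id).nonzero()[0, 1]
        logits = model(input_ids=ids).logits
        ids[0, pos] = logits[0, pos].argmax()  # greedy; a beam would score better
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```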
https://api.github.com/repos/huggingface/transformers/issues/3608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3608/comments | https://api.github.com/repos/huggingface/transformers/issues/3608/events | https://github.com/huggingface/transformers/issues/3608 | 593,190,875 | MDU6SXNzdWU1OTMxOTA4NzU= | 3,608 | RobertaTokenizer corner case with empty string | {
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"A PR is welcome!",
"created PR #3621 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"cc @mfuntowicz @n1t0 \r\n\r\nThe problem is that the tokenizer does not allow empty strings to be passed (will lead to index out of bounds error when `text[0].isspace()`). An empty string is possible, according to OP, when using the QQP task which has such a format. OP added a PR here that you can have a look at https://github.com/huggingface/transformers/pull/3621",
"This issue speaks about fixing https://github.com/huggingface/transformers/blob/81484b447b7d8504ff5e1cfff38ec35918383963/src/transformers/tokenization_roberta.py#L239 which seems totally reasonable to me, but #3621 does a lot more than that, half of which I don't even understand.\r\n\r\n@boy2000-007man could you update your PR to only fix the relevant line, and maybe add a test?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,598 | 1,598 | CONTRIBUTOR | null | https://github.com/huggingface/transformers/blob/81484b447b7d8504ff5e1cfff38ec35918383963/src/transformers/tokenization_roberta.py#L239
this will introduce an issue if `text == ""`, which will occur if anyone follows `run_glue.py` with the QQP task, as the `train.tsv` has two lines containing an empty column.
This can be corrected to `if add_prefix_space and (not text or not text[0].isspace()):` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3608/timeline | completed | null | null |
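The corner case and the proposed guard are easy to show in isolation. The helper below is a standalone stand-in for the patched condition in `tokenization_roberta.py`, not the library's actual method:

```python
def maybe_add_prefix_space(text: str, add_prefix_space: bool = True) -> str:
    # The original condition, `not text[0].isspace()`, raises IndexError
    # when text == ""; the proposed fix short-circuits on the empty string.
    if add_prefix_space and (not text or not text[0].isspace()):
        text = " " + text
    return text

assert maybe_add_prefix_space("") == " "           # no IndexError on empty input
assert maybe_add_prefix_space("hello") == " hello"
assert maybe_add_prefix_space(" hi") == " hi"      # already-spaced text untouched
```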
https://api.github.com/repos/huggingface/transformers/issues/3607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3607/comments | https://api.github.com/repos/huggingface/transformers/issues/3607/events | https://github.com/huggingface/transformers/pull/3607 | 593,181,329 | MDExOlB1bGxSZXF1ZXN0Mzk4MDEwMTgw | 3,607 | Allow the creation of "entity groups" for NerPipeline #3548 | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=h1) Report\n> Merging [#3607](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81484b447b7d8504ff5e1cfff38ec35918383963&el=desc) will **decrease** coverage by `1.04%`.\n> The diff coverage is `43.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3607 +/- ##\n==========================================\n- Coverage 78.06% 77.02% -1.05% \n==========================================\n Files 100 100 \n Lines 17134 17159 +25 \n==========================================\n- Hits 13375 13216 -159 \n- Misses 3759 3943 +184 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `72.76% <43.33%> (-2.19%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.61% <0.00%> (-2.64%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.48% <0.00%> (-2.30%)` | :arrow_down: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.20% <0.00%> (-1.35%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (-0.33%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=footer). Last update [81484b4...34623f3](https://codecov.io/gh/huggingface/transformers/pull/3607?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thank you @enzoampil!",
"FYI that I will apply the entity grouping functionality explained above in this same PR",
"**This pull request now applies the entity group transformation illustrated above by setting the parameter: `group`=True**.\r\n\r\nThis was done by reflecting the transformation inside `NerPipeline`. I've changed the name of the pull request to better reflect the feature being proposed.\r\n\r\ncc @julien-c @mfuntowicz @petulla \r\n\r\nSample code:\r\n```\r\n# Install branch\r\n# Make sure to restart runtime after installing if using Google Colab\r\n!pip install -e git+git://github.com/enzoampil/transformers.git@add_index_to_ner_pipeline#egg=transformers\r\n\r\n# Grouped NER\r\nfrom transformers import pipeline\r\nnlp = pipeline('ner', group=True)\r\nnlp(\"Enzo works at the Australian National University (AUN)\")\r\n\r\n# [{'entity_group': 'I-PER', 'score': 0.9968132972717285, 'word': 'Enzo'},\r\n# {'entity_group': 'I-ORG', 'score': 0.9970400333404541, 'word': 'Australian National University'},\r\n# {'entity_group': 'I-ORG', 'score': 0.9831967651844025, 'word': 'AUN'}]\r\n\r\n# Ungrouped NER\r\nnlp = pipeline('ner', group=False)\r\nnlp(\"Enzo works at the Australian National University (AUN)\")\r\n\r\n# [{'entity': 'I-PER', 'index': 1, 'score': 0.9983270168304443, 'word': 'En'},\r\n# {'entity': 'I-PER', 'index': 2, 'score': 0.9952995777130127, 'word': '##zo'},\r\n# {'entity': 'I-ORG', 'index': 6, 'score': 0.9984350204467773, 'word': 'Australian'},\r\n# {'entity': 'I-ORG','index': 7, 'score': 0.9967807531356812, 'word': 'National'},\r\n# {'entity': 'I-ORG', 'index': 8 'score': 0.9959043264389038, 'word': 'University'},\r\n# {'entity': 'I-ORG', 'index': 10, 'score': 0.9900023937225342, 'word': 'AU'},\r\n# {'entity': 'I-ORG', 'index': 11, 'score': 0.9763911366462708, 'word': '##N'}]\r\n```\r\n\r\nTutorial on how to do Entity Grouping w/ `NerPipeline` [here](https://colab.research.google.com/drive/1CVLP0n3Q5t5qiWpode7jyhUNZpmLg0mS)\r\n\r\nI'm very keen to get feedback for the above, so please let me know if I should change anything, or perform additional steps to bring its quality to an acceptable level.",
"I accidentally deleted the fork for this, so I've recreated this pull request [here](https://github.com/huggingface/transformers/pull/3957). Apologies for any inconvenience caused by this.\r\n\r\nI will close this PR so please refer to the one linked above."
] | 1,585 | 1,587 | 1,587 | CONTRIBUTOR | null | This pull request adds an `index` key to the dictionary returned by `NerPipeline`. The index will be necessary in order to identify **entity groups**, where an entity group is a contiguous series of tokens, having the same **entity type**.
Details of what I want to be able to do can be found in issue #3548.
If this PR gets merged, I would also like to ask if you guys would recommend that I implement the **entity group** transformation in `NerPipeline` itself.
Possibly, I can set the parameter `group` at initialization, where if `True`, the *grouped* version of the output will be returned.
E.g.
Instead of the following *ungrouped* output:
```
[{'entity': 'I-PER', 'score': 0.9983270168304443, 'word': 'En'},
{'entity': 'I-PER', 'score': 0.9952995777130127, 'word': '##zo'},
{'entity': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian'},
{'entity': 'I-ORG', 'score': 0.9967807531356812, 'word': 'National'},
{'entity': 'I-ORG', 'score': 0.9959043264389038, 'word': 'University'},
{'entity': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AU'},
{'entity': 'I-ORG', 'score': 0.9763911366462708, 'word': '##N'}]
```
We get something like the following *grouped* output:
```
[{'entity_group': 'I-PER', 'score': 0.9983270168304443, 'word': 'Enzo'},
{'entity_group': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian National University'},
{'entity_group': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AUN'}]
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3607/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3607/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3607",
"html_url": "https://github.com/huggingface/transformers/pull/3607",
"diff_url": "https://github.com/huggingface/transformers/pull/3607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3607.patch",
"merged_at": null
} |
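The grouping itself is plain post-processing over the pipeline output. A sketch, assuming each token dict carries the `index` key this PR adds; the function name and the score averaging are illustrative choices, not the merged API:

```python
def group_entities(tokens):
    """Merge contiguous same-type tokens; '##' marks a WordPiece continuation."""
    groups = []
    for tok in tokens:
        word = tok["word"]
        cont = word.startswith("##")
        last = groups[-1] if groups else None
        if last and last["entity_group"] == tok["entity"] and (
                cont or tok["index"] == last["last_index"] + 1):
            last["word"] += word[2:] if cont else " " + word
            last["scores"].append(tok["score"])
            last["last_index"] = tok["index"]
        else:
            groups.append({"entity_group": tok["entity"], "word": word,
                           "scores": [tok["score"]], "last_index": tok["index"]})
    return [{"entity_group": g["entity_group"],
             "score": sum(g["scores"]) / len(g["scores"]),  # averaged, by choice
             "word": g["word"]} for g in groups]
```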
https://api.github.com/repos/huggingface/transformers/issues/3606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3606/comments | https://api.github.com/repos/huggingface/transformers/issues/3606/events | https://github.com/huggingface/transformers/pull/3606 | 593,154,947 | MDExOlB1bGxSZXF1ZXN0Mzk3OTg3MzM0 | 3,606 | Fix typo in FeatureExtractionPipeline docstring | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,586 | 1,586 | CONTRIBUTOR | null | Fixed a typo in the docstring of `FeatureExtractionPipeline` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3606/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3606/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3606",
"html_url": "https://github.com/huggingface/transformers/pull/3606",
"diff_url": "https://github.com/huggingface/transformers/pull/3606.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3606.patch",
"merged_at": 1586351337000
} |
https://api.github.com/repos/huggingface/transformers/issues/3605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3605/comments | https://api.github.com/repos/huggingface/transformers/issues/3605/events | https://github.com/huggingface/transformers/issues/3605 | 593,154,374 | MDU6SXNzdWU1OTMxNTQzNzQ= | 3,605 | 🐛 Summarization pipeline : T5-base much slower than BART-large | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Colanim, thanks a lot for your speed comparison :-). \r\n\r\nIt might be possible that the pipelines used different default parameters for `T5` and `Bart` under the hood which strongly influence their running times.\r\nBesides `min_length` and `max_length` could you also insert those parameters into both `T5` and `Bart` to overwrite the default parameters: \r\n\r\n```\r\n \"early_stopping\": True\r\n \"length_penalty\": 2.0\r\n \"no_repeat_ngram_size\": 3\r\n \"num_beams\": 4\r\n```\r\n\r\nIf there is still a big difference in time, then I guess we have to take a closer look! \r\n\r\n",
"Thanks for your fast answer @patrickvonplaten \r\n\r\nHere is the link to the modified notebook, with the parameters you mentioned :\r\nhttps://colab.research.google.com/drive/1kCm5ew8qDQqguZjbsC6Ujs9KZBaSfafi\r\n\r\n---\r\n\r\nUnfortunately, there is still a **huge** difference...\r\n\r\n```\r\nBART = 66s\r\nT5 = 226s\r\n```",
"Ok, good to know! thanks for doing the comparison @Colanim. This might interest you as well @sshleifer :-) \r\n\r\nOh actually I just remember that Bart caches the decoder hidden key/value outputs when doing auto-regressive decoding (similar to GPT2 - check Visuals under \"GPT-2 Masked Self-Attention\" in this [post](http://jalammar.github.io/illustrated-gpt2/)) and I think T5 does not. \r\n\r\nBut T5 could cache the decoder key/value outputs to speed up decoding as well since it uses a causal mask for the decoder. This could definitely be a Feature Request. What do you think\r\n@sshleifer @craffel @thomwolf ?",
"Sounds worth it!"
] | 1,585 | 1,586 | 1,586 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model : `bart-large-cnn` and `t5-base`
Language : English
The problem arises when using: [this colab notebook](https://colab.research.google.com/drive/1iAIFX1QQiFm1F01vMmnAgFh4oH1H-K8W), using both BART and T5 with the summarization pipeline.
Dataset : CNN/DM
## To reproduce
Run the notebook and measure time for inference between the 2 models. On my run, I have :
```
BART = 73s
T5 = 369s
```
## Expected behavior
I expected T5 to be at least as fast as BART, since it has fewer parameters (for the base version at least). Instead it takes much longer with T5...
@patrickvonplaten Do you happen to know why T5 is so slow? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3605/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3605/timeline | completed | null | null |
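Since the two pipelines ship different generation defaults, a fair timing comparison should pin the parameters explicitly, as the comments above suggest. A sketch; the checkpoint names follow the current model hub, and the article text is a placeholder:

```python
import time
from transformers import pipeline

article = "..."  # placeholder: paste a CNN/DM article here

for checkpoint in ("facebook/bart-large-cnn", "t5-base"):
    summarizer = pipeline("summarization", model=checkpoint)
    start = time.time()
    summarizer(article, num_beams=4, early_stopping=True, length_penalty=2.0,
               no_repeat_ngram_size=3, min_length=56, max_length=142)
    print(f"{checkpoint}: {time.time() - start:.1f}s")
```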
https://api.github.com/repos/huggingface/transformers/issues/3604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3604/comments | https://api.github.com/repos/huggingface/transformers/issues/3604/events | https://github.com/huggingface/transformers/pull/3604 | 593,076,440 | MDExOlB1bGxSZXF1ZXN0Mzk3OTM2MTc2 | 3,604 | Update README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=h1) Report\n> Merging [#3604](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81484b447b7d8504ff5e1cfff38ec35918383963&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3604 +/- ##\n=======================================\n Coverage 78.06% 78.06% \n=======================================\n Files 100 100 \n Lines 17134 17134 \n=======================================\n Hits 13375 13375 \n Misses 3759 3759 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.45% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3604/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (+0.12%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=footer). Last update [81484b4...8845212](https://codecov.io/gh/huggingface/transformers/pull/3604?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Update AutoModel & AutoTokenizer loading. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3604/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3604",
"html_url": "https://github.com/huggingface/transformers/pull/3604",
"diff_url": "https://github.com/huggingface/transformers/pull/3604.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3604.patch",
"merged_at": 1585920505000
} |
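For reference, the Auto-class loading pattern these model-card updates point to looks like this; `"username/model-name"` is a placeholder, since the card's exact checkpoint is not shown here:

```python
from transformers import AutoModel, AutoTokenizer

# Both classes resolve the right architecture from the checkpoint's config.
tokenizer = AutoTokenizer.from_pretrained("username/model-name")
model = AutoModel.from_pretrained("username/model-name")
```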
https://api.github.com/repos/huggingface/transformers/issues/3603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3603/comments | https://api.github.com/repos/huggingface/transformers/issues/3603/events | https://github.com/huggingface/transformers/pull/3603 | 593,074,543 | MDExOlB1bGxSZXF1ZXN0Mzk3OTM0NTg0 | 3,603 | Update README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=h1) Report\n> Merging [#3603](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/81484b447b7d8504ff5e1cfff38ec35918383963&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3603 +/- ##\n=======================================\n Coverage 78.06% 78.06% \n=======================================\n Files 100 100 \n Lines 17134 17134 \n=======================================\n Hits 13375 13375 \n Misses 3759 3759 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=footer). Last update [81484b4...32340ca](https://codecov.io/gh/huggingface/transformers/pull/3603?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Update AutoModel & AutoTokenizer loading. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3603/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3603",
"html_url": "https://github.com/huggingface/transformers/pull/3603",
"diff_url": "https://github.com/huggingface/transformers/pull/3603.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3603.patch",
"merged_at": 1585920478000
} |
https://api.github.com/repos/huggingface/transformers/issues/3602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3602/comments | https://api.github.com/repos/huggingface/transformers/issues/3602/events | https://github.com/huggingface/transformers/pull/3602 | 592,991,279 | MDExOlB1bGxSZXF1ZXN0Mzk3ODY3ODU4 | 3,602 | Multilingual BART - | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=h1) Report\n> Merging [#3602](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a16d9d94a81e95463b166adfce4a8e02cdc47eb&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3602 +/- ##\n=======================================\n Coverage 78.06% 78.06% \n=======================================\n Files 100 100 \n Lines 17181 17181 \n=======================================\n Hits 13413 13413 \n Misses 3768 3768 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=footer). Last update [1a16d9d...1a16d9d](https://codecov.io/gh/huggingface/transformers/pull/3602?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, is there any reason that cc-25 was removed and only the fine-tuned one is kept? any way I can quickly enable that? thanks",
"When the authors released the CC25 checkpoint, it was shaped differently than `mbart-large-en-ro` and I am not clear on whether that is fixed yet.\r\nSee https://github.com/pytorch/fairseq/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+mbart"
] | 1,585 | 1,591 | 1,586 | CONTRIBUTOR | null | This adds the `mbart-en-ro` model, a BART variant finetuned on English-Romanian translation.
### TODO
- [x] (docs) pretrained_model.rst
- [ ] (docs) README.md
- [ ] (docs) bart.rst
### Differences with Bart
`config.normalize_before`: all the `LayerNorm` calls happen before attention calls
`config.add_final_layer_norm`: There is one extra layer_norm in the decoder
`config.scale_embedding`: embeddings are multiplied by 32 (`sqrt(d_model=1024)`)
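A minimal sketch of what these three flags look like on the released checkpoint (the exact model identifier here is an assumption):

```python
from transformers import BartConfig

config = BartConfig.from_pretrained("mbart-large-en-ro")  # identifier assumed
print(config.normalize_before)      # True -> LayerNorm before each attention call
print(config.add_final_layer_norm)  # True -> the extra LayerNorm in the decoder
print(config.scale_embedding)       # True -> embeddings scaled by sqrt(d_model) = 32
```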
### Future PRs
- The model returns the same variables as fairseq, but the tokenizer is not yet at parity with fairseq. This is the next PR in the pipeline.
- the `mbart-large-cc25` (no finetuning) model has a very different state dict. Also WIP.
### Misc
- the link_tester got angry about me not typing out URLs in this PR. Unclear why it didn't happen earlier.
Needs documentation but unclear where to put it.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3602/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3602",
"html_url": "https://github.com/huggingface/transformers/pull/3602",
"diff_url": "https://github.com/huggingface/transformers/pull/3602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3602.patch",
"merged_at": 1586532339000
} |
https://api.github.com/repos/huggingface/transformers/issues/3601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3601/comments | https://api.github.com/repos/huggingface/transformers/issues/3601/events | https://github.com/huggingface/transformers/pull/3601 | 592,908,693 | MDExOlB1bGxSZXF1ZXN0Mzk3ODAxMjQ1 | 3,601 | [Generate, Test] Split generate test function into beam search, no beam search | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=h1) Report\n> Merging [#3601](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f68d22850ced09bb194b30068ff94ca3409f0879&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3601 +/- ##\n==========================================\n- Coverage 78.06% 78.05% -0.01% \n==========================================\n Files 100 100 \n Lines 17134 17134 \n==========================================\n- Hits 13375 13374 -1 \n- Misses 3759 3760 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (-0.17%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3601/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.10% <0.00%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=footer). Last update [f68d228...857e77e](https://codecov.io/gh/huggingface/transformers/pull/3601?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,586 | 1,586 | MEMBER | null | - Clean the generate testing functions
- Also should fix flaky behaviour of bad_word_tokens test (see #3367 and https://circleci.com/gh/huggingface/transformers/27997?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3601/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3601/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3601",
"html_url": "https://github.com/huggingface/transformers/pull/3601",
"diff_url": "https://github.com/huggingface/transformers/pull/3601.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3601.patch",
"merged_at": 1586162225000
} |
https://api.github.com/repos/huggingface/transformers/issues/3600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3600/comments | https://api.github.com/repos/huggingface/transformers/issues/3600/events | https://github.com/huggingface/transformers/issues/3600 | 592,905,236 | MDU6SXNzdWU1OTI5MDUyMzY= | 3,600 | Why isn't there a SequenceClassificationModel for GPT-2 (and some other models)? | {
"login": "c-flaherty",
"id": 37087066,
"node_id": "MDQ6VXNlcjM3MDg3MDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/37087066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c-flaherty",
"html_url": "https://github.com/c-flaherty",
"followers_url": "https://api.github.com/users/c-flaherty/followers",
"following_url": "https://api.github.com/users/c-flaherty/following{/other_user}",
"gists_url": "https://api.github.com/users/c-flaherty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c-flaherty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c-flaherty/subscriptions",
"organizations_url": "https://api.github.com/users/c-flaherty/orgs",
"repos_url": "https://api.github.com/users/c-flaherty/repos",
"events_url": "https://api.github.com/users/c-flaherty/events{/privacy}",
"received_events_url": "https://api.github.com/users/c-flaherty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
Why isn't there a SequenceClassificationModel (like there is for BERT) for GPT-2? I was able to implement this pretty easily by adding a "[CLS]" token to the vocabulary (like in the GPT2DoubleHeadsModel), appending "[CLS]" to each sequence, and then adding a linear layer that maps from the embedding of "[CLS]" to a vector of logits corresponding to the classes. After training, this model worked comparably to BertSequenceClassificationModel for my use case. It would be nice to have this model in the transformers library and not have to code it up from scratch.
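A rough sketch of that approach (the class name is illustrative, not an existing transformers class):

```python
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

class GPT2SequenceClassifier(nn.Module):
    def __init__(self, num_labels):
        super().__init__()
        self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        self.tokenizer.add_special_tokens({"cls_token": "[CLS]"})
        self.gpt2 = GPT2Model.from_pretrained("gpt2")
        self.gpt2.resize_token_embeddings(len(self.tokenizer))
        self.classifier = nn.Linear(self.gpt2.config.n_embd, num_labels)

    def forward(self, text):
        # "[CLS]" goes last so that, with causal attention, it can attend to
        # every token in the sequence.
        input_ids = torch.tensor([self.tokenizer.encode(text + " [CLS]")])
        hidden_states = self.gpt2(input_ids)[0]  # (1, seq_len, n_embd)
        cls_state = hidden_states[:, -1, :]      # embedding of "[CLS]"
        return self.classifier(cls_state)        # (1, num_labels)
```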
If this sounds like a good idea, I can make a pull request with a GPT2SequenceClassificationModel added. If not, why is it not a good idea? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3600/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3599/comments | https://api.github.com/repos/huggingface/transformers/issues/3599/events | https://github.com/huggingface/transformers/issues/3599 | 592,905,220 | MDU6SXNzdWU1OTI5MDUyMjA= | 3,599 | Why is there not a SequenceClassification model for GPT-2? | {
"login": "c-flaherty",
"id": 37087066,
"node_id": "MDQ6VXNlcjM3MDg3MDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/37087066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/c-flaherty",
"html_url": "https://github.com/c-flaherty",
"followers_url": "https://api.github.com/users/c-flaherty/followers",
"following_url": "https://api.github.com/users/c-flaherty/following{/other_user}",
"gists_url": "https://api.github.com/users/c-flaherty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/c-flaherty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/c-flaherty/subscriptions",
"organizations_url": "https://api.github.com/users/c-flaherty/orgs",
"repos_url": "https://api.github.com/users/c-flaherty/repos",
"events_url": "https://api.github.com/users/c-flaherty/events{/privacy}",
"received_events_url": "https://api.github.com/users/c-flaherty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"My apologies, my computer glitched and posted twice. Please close this issue and refer to: https://github.com/huggingface/transformers/issues/3600"
] | 1,585 | 1,585 | 1,585 | NONE | null | # ❓ Questions & Help
## Details
Why isn't there a SequenceClassificationModel (like there is for BERT) for GPT-2? I was able to implement this pretty easily by adding a "[CLS]" token to the vocabulary (like in the GPT2DoubleHeadsModel), appending "[CLS]" to each sequence, and then adding a linear layer that maps from the embedding of "[CLS]" to a vector of logits corresponding to the classes. After training, this model worked comparably to BertSequenceClassificationModel for my use case. It would be nice to have this model in the transformers library and not have to code it up from scratch.
If this sounds like a good idea, I can make a pull request with a GPT2SequenceClassificationModel added. If not, why is it not a good idea?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3599/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3598/comments | https://api.github.com/repos/huggingface/transformers/issues/3598/events | https://github.com/huggingface/transformers/issues/3598 | 592,904,004 | MDU6SXNzdWU1OTI5MDQwMDQ= | 3,598 | After enable fp16, torch.save model has error | {
"login": "Charonnnnn",
"id": 37766299,
"node_id": "MDQ6VXNlcjM3NzY2Mjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/37766299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Charonnnnn",
"html_url": "https://github.com/Charonnnnn",
"followers_url": "https://api.github.com/users/Charonnnnn/followers",
"following_url": "https://api.github.com/users/Charonnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Charonnnnn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Charonnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Charonnnnn/subscriptions",
"organizations_url": "https://api.github.com/users/Charonnnnn/orgs",
"repos_url": "https://api.github.com/users/Charonnnnn/repos",
"events_url": "https://api.github.com/users/Charonnnnn/events{/privacy}",
"received_events_url": "https://api.github.com/users/Charonnnnn/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please use the template in the future. It is there for a reason. As mentioned in the template, don't post a screenshot. Use code blocks, post your code or the example script that you used, and the error trace. Also provide your version of PyTorch and Python.\r\n\r\nhttps://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
After complete training, the model cannot be saved.
## Information


| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3598/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3597/comments | https://api.github.com/repos/huggingface/transformers/issues/3597/events | https://github.com/huggingface/transformers/issues/3597 | 592,898,580 | MDU6SXNzdWU1OTI4OTg1ODA= | 3,597 | CTRL generates French text when I want English texts | {
"login": "AdaUchendu",
"id": 32556160,
"node_id": "MDQ6VXNlcjMyNTU2MTYw",
"avatar_url": "https://avatars.githubusercontent.com/u/32556160?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdaUchendu",
"html_url": "https://github.com/AdaUchendu",
"followers_url": "https://api.github.com/users/AdaUchendu/followers",
"following_url": "https://api.github.com/users/AdaUchendu/following{/other_user}",
"gists_url": "https://api.github.com/users/AdaUchendu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdaUchendu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdaUchendu/subscriptions",
"organizations_url": "https://api.github.com/users/AdaUchendu/orgs",
"repos_url": "https://api.github.com/users/AdaUchendu/repos",
"events_url": "https://api.github.com/users/AdaUchendu/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdaUchendu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
}
] | closed | false | null | [] | [
"CTRL uses control codes, as is mentioned in our documentation, with examples on the [original repository](https://github.com/salesforce/ctrl#generations). Have you tried using these control codes?",
"How do I specify which control code I want to use? Do I have to do that in the command line and if yes, how? This is the Control code I want to use 16360 (i.e. politics). \r\n\r\nThank you ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,593 | 1,593 | NONE | null | # ❓ Questions & Help
## Details
CTRL generates French texts when I want English texts.
I run this command: **python examples/run_generation.py --model_type ctrl --model_name_or_path ctrl --prompt "Looking well today" --length 500 --temperature 0.8 --repetition 1.2**
What do I need to add or change to generate English texts only?
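For reference, a sketch of applying a control code (assuming, per the original Salesforce repository and the comments above, that codes such as "Politics" are simply prepended to the prompt):

```python
from transformers import CTRLLMHeadModel, CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("ctrl")
model = CTRLLMHeadModel.from_pretrained("ctrl")

# "Politics" is assumed here to be one of the control codes listed in the
# Salesforce CTRL repository; the code is prepended to the prompt text.
input_ids = tokenizer.encode("Politics Looking well today", return_tensors="pt")
output = model.generate(input_ids, max_length=100, do_sample=True,
                        temperature=0.8, repetition_penalty=1.2)
print(tokenizer.decode(output[0]))
```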
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3597/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3596/comments | https://api.github.com/repos/huggingface/transformers/issues/3596/events | https://github.com/huggingface/transformers/issues/3596 | 592,884,772 | MDU6SXNzdWU1OTI4ODQ3NzI= | 3,596 | batch_encode_plus with pad_to_max_length but no max_length is not padding the output | {
"login": "muggin",
"id": 4559861,
"node_id": "MDQ6VXNlcjQ1NTk4NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4559861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muggin",
"html_url": "https://github.com/muggin",
"followers_url": "https://api.github.com/users/muggin/followers",
"following_url": "https://api.github.com/users/muggin/following{/other_user}",
"gists_url": "https://api.github.com/users/muggin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muggin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muggin/subscriptions",
"organizations_url": "https://api.github.com/users/muggin/orgs",
"repos_url": "https://api.github.com/users/muggin/repos",
"events_url": "https://api.github.com/users/muggin/events{/privacy}",
"received_events_url": "https://api.github.com/users/muggin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Hi,\r\n\r\nIs anyone working on this or is it open for someone to take?\r\nI was able to reproduce the issue.\r\nIf not being worked on by anyone, I would like to take it up.\r\n\r\nThanks",
"I'm facing similar issues with batch_encode_plus:\r\n```\r\ntokenizer = transformers.BertTokenizer.from_pretrained('bert-base-cased')\r\na = ['short sentence', 'Larger sentence than short sentence']\r\ninput_ids = torch.tensor(tokenizer.batch_encode_plus(a, pad_to_max_length=True)['input_ids'])\r\n```\r\nIt doesn't work for me, it return this error:\r\n`ValueError: expected sequence of length 2 at dim 1 (got 6)`\r\n\r\nThanks",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This should now be fixed on master with the updated tokenizer API"
] | 1,585 | 1,592 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
import torch
import numpy as np
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
seq1 = "This is a short sequence"
seq2 = "This will be a much longer sequence, so the short one requires padding"
input = [seq1, seq2]
# Explicitly specified padding length
max_len = 20
tck_temp = tokenizer.batch_encode_plus(input, max_length=max_len, pad_to_max_length=True)
inp_ids = tck_temp['input_ids']
assert len(inp_ids[0]) == len(inp_ids[1]) == max_len, "Both inputs should have length equal to 20"
# Implicit padding length set to the model's max length
model_max_len = tokenizer.max_len
tck_temp = tokenizer.batch_encode_plus(input, pad_to_max_length=True)
inp_ids = tck_temp['input_ids']
assert len(inp_ids[0]) == len(inp_ids[1]) == model_max_len, "Both inputs should have length equal to %d" % model_max_len
```
## Expected behavior
According to the documentation, `batch_encode_plus` with `pad_to_max_length=True` should pad sequences to the model's maximum length if `max_length` is not explicitly specified.
The attached script should run without raising an exception.
From the documentation:
"If no max length is specified, the padding is done up to the model’s max length."
## Environment info
- `transformers` version: 2.7.0
- Platform: Linux-4.15.0-74-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3596/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3595/comments | https://api.github.com/repos/huggingface/transformers/issues/3595/events | https://github.com/huggingface/transformers/pull/3595 | 592,878,153 | MDExOlB1bGxSZXF1ZXN0Mzk3Nzc3MjU1 | 3,595 | [Generation] delete print statement | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | Somehow forgot to delete it from PR #3550. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3595/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3595",
"html_url": "https://github.com/huggingface/transformers/pull/3595",
"diff_url": "https://github.com/huggingface/transformers/pull/3595.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3595.patch",
"merged_at": 1585856975000
} |
https://api.github.com/repos/huggingface/transformers/issues/3594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3594/comments | https://api.github.com/repos/huggingface/transformers/issues/3594/events | https://github.com/huggingface/transformers/issues/3594 | 592,836,851 | MDU6SXNzdWU1OTI4MzY4NTE= | 3,594 | Wrong tokenization for distilbert-base-multilingual-cased | {
"login": "ricardorei",
"id": 17256847,
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ricardorei",
"html_url": "https://github.com/ricardorei",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions",
"organizations_url": "https://api.github.com/users/ricardorei/orgs",
"repos_url": "https://api.github.com/users/ricardorei/repos",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"received_events_url": "https://api.github.com/users/ricardorei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Can you show how you initialize `tokenizer`? Which vocab are you using?",
"> Can you show how you initialize `tokenizer`? Which vocab are you using?\r\n\r\nsorry I forgot that... I updated the code in the issue already.\r\n\r\nI was using `distilbert-base-multilingual-cased`\r\n\r\n`tokenizer = DistilBertTokenizer.from_pretrained(\"distilbert-base-multilingual-cased\")`\r\n\r\n",
"This behaviour seems to have been solved in v2.7.0 as running your code yields the correct result in my environment.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (DistilBERT):
The problem arises when using:
* my own modified scripts: (give details below)
The tasks I am working on is:
* my own task or dataset: (give details below)
## To reproduce
with transformers 2.3.0:
```python
import torch
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
result = torch.tensor(tokenizer.encode("Hello, my dog is cute"))
print (result)
itos = tokenizer.ids_to_tokens
print (itos[61694])
print (itos[10133])
# The original token for 'Hello' exists but for some reason it's not used?
print (itos[31178])
```
Output:
```bash
[101, 61694, 10133, 117, 15127, 17835, 10124, 21610, 10112, 102]
'hell'
'##o'
'Hello'
```
## Expected behavior
```python
import torch
from transformers import DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-multilingual-cased")
result = torch.tensor(tokenizer.encode("Hello, my dog is cute"))
print (result)
```
Output:
```bash
[101, 31178, 117, 15127, 17835, 10124, 21610, 10112, 102]
```
## Environment info
- `transformers` version: 2.3.0
- Python version: >3.6
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3594/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3593/comments | https://api.github.com/repos/huggingface/transformers/issues/3593/events | https://github.com/huggingface/transformers/issues/3593 | 592,759,313 | MDU6SXNzdWU1OTI3NTkzMTM= | 3,593 | Transformers and BERT: dealing with possessives and apostrophes when encode | {
"login": "al-yakubovich",
"id": 12928778,
"node_id": "MDQ6VXNlcjEyOTI4Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/12928778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/al-yakubovich",
"html_url": "https://github.com/al-yakubovich",
"followers_url": "https://api.github.com/users/al-yakubovich/followers",
"following_url": "https://api.github.com/users/al-yakubovich/following{/other_user}",
"gists_url": "https://api.github.com/users/al-yakubovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/al-yakubovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/al-yakubovich/subscriptions",
"organizations_url": "https://api.github.com/users/al-yakubovich/orgs",
"repos_url": "https://api.github.com/users/al-yakubovich/repos",
"events_url": "https://api.github.com/users/al-yakubovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/al-yakubovich/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any idea about how to solve this apostrophe seperating word token with different word id in bert rokenizer ?"
] | 1,585 | 1,681 | 1,591 | NONE | null | Let's consider two sentences:
"why isn't Alex's text tokenizing? The house on the left is the Smiths' house"
Now let's tokenize and decode:
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
tokenizer.decode(tokenizer.convert_tokens_to_ids(tokenizer.tokenize("why isn't Alex's text tokenizing? The house on the left is the Smiths' house")))
We get:
"why isn't alex's text tokenizing? the house on the left is the smiths'house"
**My question is: how to deal with the missing space in some possessives like *smiths'house*?**
For me, it seems that the process of tokenization in Transformers is not done right. Let's consider the output of
tokenizer.tokenize("why isn't Alex's text tokenizing? The house on the left is the Smiths' house")
we get:
['why', 'isn', "'", 't', 'alex', "'", 's', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', "'", 'house']
So in this step, we have already lost important information about the last apostrophe. It would be much better if tokenization were done in another way:
['why', 'isn', "##'", '##t', 'alex', "##'", '##s', 'text', 'token', '##izing', '?', 'the', 'house', 'on', 'the', 'left', 'is', 'the', 'smith', '##s', "##'", 'house']
In this way, tokenization keeps all information about apostrophes, and we will not have problems with possessives.
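As a side note, part of the glued "smiths'house" comes from the decode-time clean-up rather than from tokenization itself; a sketch of disabling it via the `clean_up_tokenization_spaces` parameter of `decode`:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("the Smiths' house"))
# Without the clean-up, the apostrophe stays separated by spaces:
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False))
# -> "the smiths ' house" (instead of "the smiths'house")
```
| {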
"url": "https://api.github.com/repos/huggingface/transformers/issues/3593/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3593/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3592/comments | https://api.github.com/repos/huggingface/transformers/issues/3592/events | https://github.com/huggingface/transformers/issues/3592 | 592,716,038 | MDU6SXNzdWU1OTI3MTYwMzg= | 3,592 | Issues with using SciBERT for Summarizer | {
"login": "WeiyangSun",
"id": 34964824,
"node_id": "MDQ6VXNlcjM0OTY0ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/34964824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WeiyangSun",
"html_url": "https://github.com/WeiyangSun",
"followers_url": "https://api.github.com/users/WeiyangSun/followers",
"following_url": "https://api.github.com/users/WeiyangSun/following{/other_user}",
"gists_url": "https://api.github.com/users/WeiyangSun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WeiyangSun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WeiyangSun/subscriptions",
"organizations_url": "https://api.github.com/users/WeiyangSun/orgs",
"repos_url": "https://api.github.com/users/WeiyangSun/repos",
"events_url": "https://api.github.com/users/WeiyangSun/events{/privacy}",
"received_events_url": "https://api.github.com/users/WeiyangSun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe only BART and T5 can do summarization for now. See the [documentation regarding the checkpoints that may summarize](https://huggingface.co/transformers/main_classes/pipelines.html#summarizationpipeline).",
"Here is [a notebook](https://github.com/Nikoschenk/bert-extractive-summarizer/blob/master/colab/scibert-summaries.ipynb) using the scibert model based on the [great code](https://github.com/dmmiller612/bert-extractive-summarizer) that Derek provided.\r\n\r\n"
] | 1,585 | 1,587 | 1,585 | NONE | null | Hi all, I am not sure if I am doing this right, but I want my summarizer to use the SciBERT SciVocab instead of the traditional BERT vocab. I appreciate any help and advice! This is what I am currently using:
```python
from transformers import BertTokenizer, BertModel, pipeline

model_version = 'scibert_scivocab_uncased'
do_lower_case = True
model = BertModel.from_pretrained(model_version)
tokenizer = BertTokenizer.from_pretrained(model_version, do_lower_case=do_lower_case)
summarizer = pipeline(task="summarization", model=model, tokenizer=tokenizer)
summary = summarizer(readin_df['Text'][0])
```
I am facing this error:

```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-39-83180e8b1c13> in <module>
1 summarizer = pipeline(task="summarization", model = model, tokenizer = tokenizer)
2
----> 3 summary = summarizer(readin_df['Text'][0])
/opt/conda/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, return_tensors, return_text, clean_up_tokenization_spaces, *documents, **generate_kwargs)
1251
1252 summaries = self.model.generate(
-> 1253 inputs["input_ids"], attention_mask=inputs["attention_mask"], **generate_kwargs,
1254 )
1255
/opt/conda/lib/python3.7/site-packages/torch/autograd/grad_mode.py in decorate_no_grad(*args, **kwargs)
47 def decorate_no_grad(*args, **kwargs):
48 with self:
---> 49 return func(*args, **kwargs)
50 return decorate_no_grad
51
/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id)
788 if self.get_output_embeddings() is None:
789 raise AttributeError(
--> 790 "You tried to generate sequences with a model that does not have a LM Head."
791 "Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )"
792 )
AttributeError: You tried to generate sequences with a model that does not have a LM Head.Please use another model class (e.g. `OpenAIGPTLMHeadModel`, `XLNetLMHeadModel`, `GPT2LMHeadModel`, `CTRLLMHeadModel`, `T5WithLMHeadModel`, `TransfoXLLMHeadModel`, `XLMWithLMHeadModel`, `BartForConditionalGeneration` )
```
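As the error and the comments above indicate, a plain BERT encoder has no LM head, so the summarization pipeline cannot use it; a sketch of what works instead (per the comments, a seq2seq checkpoint such as BART or T5; `readin_df` is reused from the snippet above):

```python
from transformers import pipeline

# The default summarization pipeline loads a seq2seq (BART) checkpoint with an LM head.
summarizer = pipeline("summarization")
# Or explicitly, e.g. with T5:
# summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base")
summary = summarizer(readin_df['Text'][0])[0]['summary_text']
```
| {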
"url": "https://api.github.com/repos/huggingface/transformers/issues/3592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3592/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3591/comments | https://api.github.com/repos/huggingface/transformers/issues/3591/events | https://github.com/huggingface/transformers/issues/3591 | 592,693,446 | MDU6SXNzdWU1OTI2OTM0NDY= | 3,591 | Cannot load model in tranformers | {
"login": "gdet",
"id": 49757110,
"node_id": "MDQ6VXNlcjQ5NzU3MTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/49757110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gdet",
"html_url": "https://github.com/gdet",
"followers_url": "https://api.github.com/users/gdet/followers",
"following_url": "https://api.github.com/users/gdet/following{/other_user}",
"gists_url": "https://api.github.com/users/gdet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gdet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gdet/subscriptions",
"organizations_url": "https://api.github.com/users/gdet/orgs",
"repos_url": "https://api.github.com/users/gdet/repos",
"events_url": "https://api.github.com/users/gdet/events{/privacy}",
"received_events_url": "https://api.github.com/users/gdet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1843377584,
"node_id": "MDU6TGFiZWwxODQzMzc3NTg0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Version%20mismatch",
"name": "Version mismatch",
"color": "ddea7c",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"It seems you have a very old version of this repository (when it was still named `pytorch_transformers`). Please update to the latest version or you won't have all the features (like access to the model hub).",
"I fixed the version and the error was fixed. But now I get \r\n\r\n AttributeError: 'BertTokenizer' object has no attribute 'encoder'\r\nDo I have another problem with version? I also used \r\n\r\n pip3 install --upgrade pytorch-pretrained-bert\r\nThank you",
"Please post the code and full trace that gave you this error.",
" tokenizer = AutoTokenizer.from_pretrained(\"nlpaueb/bert-base-greek-uncased-v1\")\r\n model = AutoModelWithLMHead.from_pretrained(\"nlpaueb/bert-base-greek-uncased-v1\")\r\n model.to(args.device)\r\n # Add special tokens if they are not already added\r\n add_special_tokens_(model, tokenizer)\r\n\r\n def add_special_tokens_(model, tokenizer):\r\n \"\"\" Add special tokens to the tokenizer and the model if they have not already been added. \"\"\"\r\n orig_num_tokens = len(tokenizer.encoder)\r\n num_added_tokens = tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN) # doesn't add if they are already there\r\n if num_added_tokens > 0:\r\n model.resize_token_embeddings(new_num_tokens=orig_num_tokens + num_added_tokens)\r\n\r\n\r\n\r\n INFO:transformers.modeling_utils:loading weights file https://s3.amazonaws.com /models.huggingface.co/bert/nlpaueb/bert-base-greek-uncased-v1/pytorch_model.bin from cache at /home/hatzimin/.cache/torch/transformers/3a685f5fa6f50a35a4efc31e9cdc74cfe8e2956002ee5c2df350e5e6c54deaf2.2aad66b9b70b2aa069cb5a695a371c8289c0fc672a34efff6188126824ef3b60\r\n INFO:transformers.modeling_utils:Weights from pretrained model not used in BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']\r\n Traceback (most recent call last):\r\n File \"./traingreek.py\", line 268, in <module>\r\n train()\r\n File \"./traingreek.py\", line 161, in train\r\n add_special_tokens_(model, tokenizer)\r\n File \"./traingreek.py\", line 51, in add_special_tokens_\r\n orig_num_tokens = len(tokenizer.encoder)\r\n AttributeError: 'BertTokenizer' object has no attribute 'encoder'\r\n",
"That's because the tokenizer does not have an `encoder` attribute. If you're looking to get the size of the tokenizer, you can do it with `len(tokenizer)`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | Hello, I tried to load your model and I get this error:
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
model = AutoModelWithLMHead.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
ERROR:pytorch_transformers.modeling_utils:Model name 'nlpaueb/bert-base-greek-uncased-v1' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc). We assumed 'nlpaueb/bert-base-greek-uncased-v1' was a path or url but couldn't find any file associated to this path or url.
If I download the model and load it from a folder on the system, I get this error:
tokenizer = AutoTokenizer.from_pretrained("/home/transformers/huggingface/greekaueb")
model = AutoModelWithLMHead.from_pretrained("/home/transformers/huggingface/greekaueb")
ValueError: Unrecognized model identifier in /home/hatzimin/transformers/huggingface/greek_transfer_learning/greekaueb.
Should contains one of 'bert', 'openai-gpt', 'gpt2', 'transfo-xl', 'xlnet', 'xlm', 'roberta'
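As the comments point out, these errors come from a very old `pytorch_transformers` install; a sketch of the fix after upgrading (including the follow-up `encoder` error from the comments, where `len(tokenizer)` replaces the non-existent `tokenizer.encoder`):

```python
# pip install --upgrade transformers
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
model = AutoModelWithLMHead.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")

# BertTokenizer has no `encoder` attribute; use len(tokenizer) for the vocab size.
orig_num_tokens = len(tokenizer)
```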
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3591/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3590/comments | https://api.github.com/repos/huggingface/transformers/issues/3590/events | https://github.com/huggingface/transformers/issues/3590 | 592,691,560 | MDU6SXNzdWU1OTI2OTE1NjA= | 3,590 | min_length parameter in default pipeline summarization produces output smaller than min_length | {
"login": "Weilin37",
"id": 5770543,
"node_id": "MDQ6VXNlcjU3NzA1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5770543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Weilin37",
"html_url": "https://github.com/Weilin37",
"followers_url": "https://api.github.com/users/Weilin37/followers",
"following_url": "https://api.github.com/users/Weilin37/following{/other_user}",
"gists_url": "https://api.github.com/users/Weilin37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Weilin37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Weilin37/subscriptions",
"organizations_url": "https://api.github.com/users/Weilin37/orgs",
"repos_url": "https://api.github.com/users/Weilin37/repos",
"events_url": "https://api.github.com/users/Weilin37/events{/privacy}",
"received_events_url": "https://api.github.com/users/Weilin37/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @Weilin37,\r\n\r\nHow did you count the length of your output? The `min_length` corresponds to the minimum number of tokens in the output which is `<=` number of words.\r\nTo count the number of tokens you have in your output text, you could use `tokenizer.tokenize(OUTPUT_TEXT)`",
"> Hi @Weilin37,\r\n> \r\n> How did you count the length of your output? The `min_length` corresponds to the minimum number of tokens in the output which is `<=` number of words.\r\n> To count the number of tokens you have in your output text, you could use `tokenizer.tokenize(OUTPUT_TEXT)`\r\n\r\nHi. Ah ok thanks for clarifying. I had mistakenly thought it was the # of words.",
"Hi @patrickvonplaten,\r\n\r\nIs it possible to set the minimum number of words instead of tokens?"
] | 1,585 | 1,619 | 1,585 | NONE | null | Hi, I am using pipeline summarization. My code is below:
```
from transformers import pipeline, AutoTokenizer, AutoModel
summarizer = pipeline("summarization")
abstract_dictionary = {'Introduction':'','Methods':'','Results':'','Discussion':''}
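# NOTE: article_dictionary is assumed to be defined elsewhere, mapping each section name to its full text.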
for section in article_dictionary:
if section == 'Introduction':
min_length = 100
elif section == 'Methods':
min_length = 200
elif section == 'Results':
min_length = 250
elif section == 'Discussion':
min_length = 100
summary = summarizer(article_dictionary[section], min_length=min_length)[0]['summary_text']
abstract_dictionary[section] = abstract_dictionary[section]+' '+summary
for section in abstract_dictionary:
print(section)
print(abstract_dictionary[section])
print(" ")
```
and I get the following summary. You will notice that each section is shorter than the minimum length specified.
Introduction
Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches and resistance to cytotoxic chemotherapy. Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first-line therapy for advanced disease. We conducted the KEYNOTE-426 trial to determine whether pembrolizumab plus axit inib would result in better outcomes than sunit in patients with previously untreated advanced renal- cell carcinoma.
Methods
Pembrolizumab (Keytruda, Merck Sharp & Dohme) plus axitinib (Inlyta, Pfizer) or sunitinIB (Sutent, Pfizers) was used in an open-label, phase 3 trial. Eligible patients were 18 years of age or older; had newly diagnosed or recurrent stage IV clear-cell renal-cell carcinoma; had received no previous systemic therapy for advanced disease; and had a Karnofsky performance-status score of 70 or more. Patients were excluded if they had symptomatic central nervous system metastases, active autoimmune disease, or poorly controlled hypertension. Data on adverse events were collected regularly,
Results
A total of 1062 patients at 129 sites in 16 countries were screened for eligibility. Of these, 861 patients at 124 sites underwent randomization from October 24, 2016, to January 24, 2018. A total of 432 patients were assigned to the pembrolizumab–axitinib group, and 429 patients to the sunit inib group. The median duration of any treatment was 10.4 months in both groups. The estimated percentage of patients who were alive at 12 months was 89.9% (95% CI, 86.4 to 92.4) in the pembrozumab group and 78.3% (75.8 to 82.,
Discussion
Treatment with pembrolizumab plus axitinib resulted in a 47% lower risk of death. The objective response rate was 23.6 percentage points higher in the pembrozumab–axit inib group than in the sunitin ib group. The benefit of pembrology plus ax itinib was observed across all subgroups tested. No deaths related to hepatic adverse events were reported in this trial. However, the overall frequency of toxic effects was similar in the two groups.
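Per the explanation in the comments above, `min_length` counts tokens rather than words; a quick sketch (reusing the `summarizer` and `abstract_dictionary` from the snippet) to check the generated lengths in tokens:

```python
for section in abstract_dictionary:
    text = abstract_dictionary[section]
    num_tokens = len(summarizer.tokenizer.tokenize(text))
    num_words = len(text.split())
    print(section, num_tokens, num_words)  # num_tokens meets min_length; num_words may be fewer
```
| {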
"url": "https://api.github.com/repos/huggingface/transformers/issues/3590/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3590/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3589/comments | https://api.github.com/repos/huggingface/transformers/issues/3589/events | https://github.com/huggingface/transformers/issues/3589 | 592,676,070 | MDU6SXNzdWU1OTI2NzYwNzA= | 3,589 | Evaluation - Output False Positive and False Negative Sentences | {
"login": "sunyangfu",
"id": 4896069,
"node_id": "MDQ6VXNlcjQ4OTYwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4896069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunyangfu",
"html_url": "https://github.com/sunyangfu",
"followers_url": "https://api.github.com/users/sunyangfu/followers",
"following_url": "https://api.github.com/users/sunyangfu/following{/other_user}",
"gists_url": "https://api.github.com/users/sunyangfu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunyangfu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunyangfu/subscriptions",
"organizations_url": "https://api.github.com/users/sunyangfu/orgs",
"repos_url": "https://api.github.com/users/sunyangfu/repos",
"events_url": "https://api.github.com/users/sunyangfu/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunyangfu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
Hi, regarding the sequence classification task: after the evaluation on test data, how could I output the actual false-positive and false-negative sentences? Basically, convert the BERT embeddings back to the actual sentences for error analysis?
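A minimal sketch of one common approach, assuming you kept the tokenized `input_ids` (which, unlike embeddings, can be decoded back to text) alongside predictions and gold labels during the evaluation loop; all variable names below are placeholders:

```python
# Hedged sketch: eval_input_ids, preds, and labels are assumed to be collected
# during your evaluation loop; tokenizer is the matching BertTokenizer instance.
for ids, pred, label in zip(eval_input_ids, preds, labels):
    if pred != label:
        sentence = tokenizer.decode(ids, skip_special_tokens=True)
        kind = "false positive" if pred == 1 else "false negative"
        print(f"{kind}: {sentence}")
```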
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3589/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3588/comments | https://api.github.com/repos/huggingface/transformers/issues/3588/events | https://github.com/huggingface/transformers/pull/3588 | 592,637,578 | MDExOlB1bGxSZXF1ZXN0Mzk3NTgzNTAw | 3,588 | added model_cards for polish squad models | {
"login": "borhenryk",
"id": 35457598,
"node_id": "MDQ6VXNlcjM1NDU3NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35457598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borhenryk",
"html_url": "https://github.com/borhenryk",
"followers_url": "https://api.github.com/users/borhenryk/followers",
"following_url": "https://api.github.com/users/borhenryk/following{/other_user}",
"gists_url": "https://api.github.com/users/borhenryk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borhenryk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borhenryk/subscriptions",
"organizations_url": "https://api.github.com/users/borhenryk/orgs",
"repos_url": "https://api.github.com/users/borhenryk/repos",
"events_url": "https://api.github.com/users/borhenryk/events{/privacy}",
"received_events_url": "https://api.github.com/users/borhenryk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"model pages: https://huggingface.co/models?filter=polish,question-answering\r\n\r\nThank you!"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3588/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3588",
"html_url": "https://github.com/huggingface/transformers/pull/3588",
"diff_url": "https://github.com/huggingface/transformers/pull/3588.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3588.patch",
"merged_at": 1585878017000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3587/comments | https://api.github.com/repos/huggingface/transformers/issues/3587/events | https://github.com/huggingface/transformers/issues/3587 | 592,502,353 | MDU6SXNzdWU1OTI1MDIzNTM= | 3,587 | How to fine tune T5 like for translation tasks? | {
"login": "prabalbansal",
"id": 30004110,
"node_id": "MDQ6VXNlcjMwMDA0MTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/30004110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prabalbansal",
"html_url": "https://github.com/prabalbansal",
"followers_url": "https://api.github.com/users/prabalbansal/followers",
"following_url": "https://api.github.com/users/prabalbansal/following{/other_user}",
"gists_url": "https://api.github.com/users/prabalbansal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prabalbansal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prabalbansal/subscriptions",
"organizations_url": "https://api.github.com/users/prabalbansal/orgs",
"repos_url": "https://api.github.com/users/prabalbansal/repos",
"events_url": "https://api.github.com/users/prabalbansal/events{/privacy}",
"received_events_url": "https://api.github.com/users/prabalbansal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"see https://github.com/huggingface/transformers/issues/3576"
] | 1,585 | 1,586 | 1,586 | NONE | null | # ❓ Questions & Help
## Details
<!-- Description of your issue -->
I want to pass hundreds of thousands (100,000s) of training instances through this syntax, but it says its limit is only 512. I am passing a list of strings.
input_ids = tokenizer.encode('translate English to German: The house is wonderful. </s>', return_tensors='pt')
lm_labels = tokenizer.encode('Das Haus ist wunderbar. </s>', return_tensors='pt')
model(input_ids=input_ids, lm_labels=lm_labels)
And with 512 instances, it does not actually train: it finishes in seconds without loading weights or training.
Could you explain how to fine-tune it in the correct way and use the fine-tuned model for the generation?
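For what it's worth, here is a minimal fine-tuning sketch under the API shown above. Note that the 512 limit refers to tokens per example (T5's maximum sequence length), not to the number of training instances, so you iterate over examples (or mini-batches) rather than encoding the whole dataset at once. `pairs` is a placeholder for your (source, target) string tuples:

```python
# Hedged sketch, assuming a transformers version where T5 takes lm_labels
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
optimizer = torch.optim.Adam(model.parameters(), lr=3e-5)

model.train()
for src, tgt in pairs:  # loop over all 100k+ examples, or batch them
    input_ids = tokenizer.encode(src + ' </s>', return_tensors='pt', max_length=512)
    lm_labels = tokenizer.encode(tgt + ' </s>', return_tensors='pt', max_length=512)
    loss = model(input_ids=input_ids, lm_labels=lm_labels)[0]
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# generation with the fine-tuned model
model.eval()
generated = model.generate(tokenizer.encode('translate English to German: The house is wonderful. </s>', return_tensors='pt'))
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```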
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3587/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3587/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3586/comments | https://api.github.com/repos/huggingface/transformers/issues/3586/events | https://github.com/huggingface/transformers/issues/3586 | 592,477,200 | MDU6SXNzdWU1OTI0NzcyMDA= | 3,586 | when I run transformers in Docker container, it appeared this error | {
"login": "xiongma",
"id": 30991932,
"node_id": "MDQ6VXNlcjMwOTkxOTMy",
"avatar_url": "https://avatars.githubusercontent.com/u/30991932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiongma",
"html_url": "https://github.com/xiongma",
"followers_url": "https://api.github.com/users/xiongma/followers",
"following_url": "https://api.github.com/users/xiongma/following{/other_user}",
"gists_url": "https://api.github.com/users/xiongma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiongma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiongma/subscriptions",
"organizations_url": "https://api.github.com/users/xiongma/orgs",
"repos_url": "https://api.github.com/users/xiongma/repos",
"events_url": "https://api.github.com/users/xiongma/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiongma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We would need more information in order to help you debug, namely transformers version, python version, code example, etc.\r\n\r\nHave you seen the issue template? Respecting it will ensure you get help :slightly_smiling_face: "
] | 1,585 | 1,587 | 1,587 | NONE | null | ```
unknown exception: Model name 'bert-base-uncased' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed 'bert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
OS/platform: Debian, Python 3.7.6, transformers 2.4
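This error often appears when the container cannot reach the model hub to download the files. A hedged workaround sketch (the `/models/...` path is illustrative): download and save the model while online, then load it from the local directory inside the container.

```python
# Run once with network access (e.g., at image build time):
from transformers import BertModel, BertTokenizer

BertTokenizer.from_pretrained("bert-base-uncased").save_pretrained("/models/bert-base-uncased")
BertModel.from_pretrained("bert-base-uncased").save_pretrained("/models/bert-base-uncased")

# Inside the container, load from the saved path instead of the model name:
tokenizer = BertTokenizer.from_pretrained("/models/bert-base-uncased")
model = BertModel.from_pretrained("/models/bert-base-uncased")
```
 | {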
"url": "https://api.github.com/repos/huggingface/transformers/issues/3586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3586/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3585/comments | https://api.github.com/repos/huggingface/transformers/issues/3585/events | https://github.com/huggingface/transformers/issues/3585 | 592,469,508 | MDU6SXNzdWU1OTI0Njk1MDg= | 3,585 | Reason behind the layers taken for distilbert-multilingual | {
"login": "divyag11",
"id": 39218807,
"node_id": "MDQ6VXNlcjM5MjE4ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/39218807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyag11",
"html_url": "https://github.com/divyag11",
"followers_url": "https://api.github.com/users/divyag11/followers",
"following_url": "https://api.github.com/users/divyag11/following{/other_user}",
"gists_url": "https://api.github.com/users/divyag11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyag11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyag11/subscriptions",
"organizations_url": "https://api.github.com/users/divyag11/orgs",
"repos_url": "https://api.github.com/users/divyag11/repos",
"events_url": "https://api.github.com/users/divyag11/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyag11/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"With distillation, the initialization is really important in order to obtain good results. You can read the [paper](https://arxiv.org/pdf/1910.01108.pdf) to have more information, look for section 3 and \"Student Initialization\"!\r\n\r\nThe important part is the following:\r\n\r\n_Student initialization In addition to the previously described optimization and architectural choices,\r\nan important element in our training procedure is to find the right initialization for the sub-network to\r\nconverge. Taking advantage of the common dimensionality between teacher and student networks,\r\nwe initialize the student from the teacher by taking one layer out of two._",
"Why not [0, 2, 4, 6, 8, 10]?"
] | 1,585 | 1,602 | 1,586 | NONE | null | Hi,
I went through the training code for DistilBERT. I can see that a distillation process is used to train the DistilBERT model from the BERT model.
What is the reason that only the **layers [0, 2, 4, 7, 9, 11]** were taken to train the DistilBERT model? Is there any rationale behind choosing these layers?
And what do the **last two layers of the DistilBERT model** correspond to? Are they equivalent to layers **9 and 11** of the original BERT model?
This was the training code I referred to (a schematic sketch of the layer-copy step follows the link below):
https://github.com/huggingface/transformers/blob/master/examples/distillation/scripts/extract_distilbert.py
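As a schematic sketch of the selection that script performs (the `copy_block` helper below is hypothetical; the real script remaps teacher state-dict keys into the 6-layer student checkpoint):

```python
# Schematic only: copy_block is a hypothetical stand-in for the script's
# state-dict key remapping between teacher and student checkpoints.
teacher_layers = [0, 2, 4, 7, 9, 11]  # the layers kept from the 12-layer teacher
for student_idx, teacher_idx in enumerate(teacher_layers):
    copy_block(teacher_block=teacher_idx, student_block=student_idx)
# note: student layers 4 and 5 are therefore initialized from teacher layers 9 and 11
```
 | {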
"url": "https://api.github.com/repos/huggingface/transformers/issues/3585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3585/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3584/comments | https://api.github.com/repos/huggingface/transformers/issues/3584/events | https://github.com/huggingface/transformers/issues/3584 | 592,435,854 | MDU6SXNzdWU1OTI0MzU4NTQ= | 3,584 | cased -> uncased in BERT GLUE example | {
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"additionally, the `xlnet-large-cased` should not combine with `--do_lower_case` as `xlnet` model only has a cased version."
] | 1,585 | 1,586 | 1,586 | CONTRIBUTOR | null | similar to https://github.com/huggingface/transformers/issues/3183, the GLUE readme also has this issue: the MRPC example uses `bert-base-cased` while at the same time passing `--do_lower_case`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3584/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3583/comments | https://api.github.com/repos/huggingface/transformers/issues/3583/events | https://github.com/huggingface/transformers/issues/3583 | 592,367,116 | MDU6SXNzdWU1OTIzNjcxMTY= | 3,583 | Dict in the first positional arguments | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"I would assume that you need to unpack the dictionary as you pass it to the model:\r\n\r\n```python\r\nmodel(**features)\r\n```\r\n\r\nEDIT: I was wrong, since the TF version should be used differently than the PT version.",
"Thank you @BramVanroy, but another error arose:\r\n\r\n```python\r\nmodel(**features)\r\n```\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-20-6248eef0b628> in <module>()\r\n----> 1 model(**features)\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)\r\n 798 else:\r\n 799 raise ValueError(\r\n--> 800 'The first argument to `Layer.call` must always be passed.')\r\n 801 \r\n 802 call_context = base_layer_utils.call_context()\r\n\r\nValueError: The first argument to `Layer.call` must always be passed.\r\n\r\n---------------------------------------------------------------------------\r\nNOTE: Current TensorFlow version is 2.2.0-rc2. To use TF 1.x instead,\r\nrestart your runtime (Ctrl+M .) and run \"%tensorflow_version 1.x\" before\r\nyou run \"import tensorflow\".\r\n---------------------------------------------------------------------------\r\n```",
"I believe the error comes from the fact that you''re lacking a dimension in your features. All inputs to the model should have a shape of `[batch_size, sequence_length]`, whereas from your output:\r\n\r\n```py\r\n{\r\n 'attention_mask': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>,\r\n 'input_ids': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([ 101, 13366, 2131, 1035, 6819, 2094, 1035, 2013, 1035, 24471, 2140, 1006, 24471, 2140, 1007, 102], dtype=int32)>\r\n}\r\n```\r\n\r\nYour tensors are of shape `[sequence_length]`. You can unsqueeze those to add a batch dimension (or simply batch them, since you're already making use of the attention mask), and it should work.",
"@LysandreJik,\r\nYes, it was really missing a dimension in `features`. \r\nUsing `features = tokenized_dataset.batch(2)` produces the desired input to transformer:\r\n\r\n```python\r\n({'attention_mask': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=\r\n array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],\r\n [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>,\r\n 'input_ids': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=\r\n array([[ 101, 13366, 2131, 1035, 6819, 2094, 1035, 2013, 1035,\r\n 24471, 2140, 1006, 24471, 2140, 1007, 102],\r\n [ 101, 13366, 8254, 2050, 1035, 20950, 1035, 2000, 1035,\r\n 24471, 2140, 1035, 2862, 1006, 20950, 102]], dtype=int32)>},\r\n {'attention_mask': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=\r\n array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],\r\n [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>,\r\n 'input_ids': <tf.Tensor: shape=(2, 16), dtype=int32, numpy=\r\n array([[ 101, 27059, 2678, 8909, 2013, 24471, 2140, 1012, 102,\r\n 0, 0, 0, 0, 0, 0, 0],\r\n [ 101, 2358, 2099, 1011, 1028, 2862, 10463, 20950, 2000,\r\n 24471, 2140, 2862, 1012, 2013, 12170, 102]], dtype=int32)>})\r\n```\r\nThank you."
] | 1,585 | 1,585 | 1,585 | NONE | null | Could someone help me figure out what is wrong in the TFBertModel below?
```python
features = next(iter(dataset))
features
```
which prints:
```python
{'attention_mask': <tf.Tensor: shape=(16,), dtype=int32, numpy=array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], dtype=int32)>,
'input_ids': <tf.Tensor: shape=(16,), dtype=int32, numpy=
array([ 101, 13366, 2131, 1035, 6819, 2094, 1035, 2013, 1035,
24471, 2140, 1006, 24471, 2140, 1007, 102], dtype=int32)>}
```
In turn, I loaded the `TFBertModel` and following the[ documentation page](https://huggingface.co/transformers/model_doc/bert.html#transformers.TFBertForPreTraining) I tried to use `features` as input:
```python
model = TFBertModel.from_pretrained('bert-base-uncased')
model(features)
```
But I'm getting the following error:
```python
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-19-8bcadb504daf> in <module>()
----> 1 model(features)
8 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_bert.py in call(self, inputs, **kwargs)
706 last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
707 """
--> 708 outputs = self.bert(inputs, **kwargs)
709 return outputs
710
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
966 with base_layer_utils.autocast_context_manager(
967 self._compute_dtype):
--> 968 outputs = self.call(cast_inputs, *args, **kwargs)
969 self._handle_activity_regularization(inputs, outputs)
970 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_bert.py in call(self, inputs, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, training)
545 # this attention mask is more simple than the triangular masking of causal attention
546 # used in OpenAI GPT, we just need to prepare the broadcast dimension here.
--> 547 extended_attention_mask = attention_mask[:, tf.newaxis, tf.newaxis, :]
548
549 # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py in _slice_helper(tensor, slice_spec, var)
982 ellipsis_mask=ellipsis_mask,
983 var=var,
--> 984 name=name)
985
986
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py in strided_slice(input_, begin, end, strides, begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask, var, name)
1148 ellipsis_mask=ellipsis_mask,
1149 new_axis_mask=new_axis_mask,
-> 1150 shrink_axis_mask=shrink_axis_mask)
1151
1152 parent_name = name
/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gen_array_ops.py in strided_slice(input, begin, end, strides, begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask, name)
10155 pass # Add nodes to the TensorFlow graph.
10156 except _core._NotOkStatusException as e:
> 10157 _ops.raise_from_not_ok_status(e, name)
10158 # Add nodes to the TensorFlow graph.
10159 if begin_mask is None:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6651 message = e.message + (" name: " + name if name is not None else "")
6652 # pylint: disable=protected-access
-> 6653 six.raise_from(core._status_to_exception(e.code, message), None)
6654 # pylint: enable=protected-access
6655
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Index out of range using input dim 1; input has only 1 dims [Op:StridedSlice] name: tf_bert_model/bert/strided_slice/
```
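As the resolution in the comments above suggests, the inputs are missing a batch dimension; a minimal sketch of the fix, assuming `dataset` is the same `tf.data.Dataset` as at the top:

```python
# Batch the dataset so each tensor becomes [batch_size, sequence_length]
batched_features = next(iter(dataset.batch(1)))
outputs = model(batched_features)
```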
note: edited to correct typo. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3583/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3582/comments | https://api.github.com/repos/huggingface/transformers/issues/3582/events | https://github.com/huggingface/transformers/issues/3582 | 592,316,295 | MDU6SXNzdWU1OTIzMTYyOTU= | 3,582 | Does the BART model support Chinese? Having the pre-trained Chinese model? | {
"login": "mtfelix",
"id": 1635065,
"node_id": "MDQ6VXNlcjE2MzUwNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1635065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mtfelix",
"html_url": "https://github.com/mtfelix",
"followers_url": "https://api.github.com/users/mtfelix/followers",
"following_url": "https://api.github.com/users/mtfelix/following{/other_user}",
"gists_url": "https://api.github.com/users/mtfelix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mtfelix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mtfelix/subscriptions",
"organizations_url": "https://api.github.com/users/mtfelix/orgs",
"repos_url": "https://api.github.com/users/mtfelix/repos",
"events_url": "https://api.github.com/users/mtfelix/events{/privacy}",
"received_events_url": "https://api.github.com/users/mtfelix/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have the same problem and do you have any url to download the bart-large-cnn or and other pretrained model",
"No chinese support, yet. \r\nDownload:\r\n```bash\r\nwget https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/pytorch_model.bin\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3582/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3581/comments | https://api.github.com/repos/huggingface/transformers/issues/3581/events | https://github.com/huggingface/transformers/issues/3581 | 592,312,163 | MDU6SXNzdWU1OTIzMTIxNjM= | 3,581 | Different outputs in using convert_roberta_original_pytorch_checkpoint_to_pytorch.py | {
"login": "lhaausing",
"id": 55363337,
"node_id": "MDQ6VXNlcjU1MzYzMzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/55363337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhaausing",
"html_url": "https://github.com/lhaausing",
"followers_url": "https://api.github.com/users/lhaausing/followers",
"following_url": "https://api.github.com/users/lhaausing/following{/other_user}",
"gists_url": "https://api.github.com/users/lhaausing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhaausing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhaausing/subscriptions",
"organizations_url": "https://api.github.com/users/lhaausing/orgs",
"repos_url": "https://api.github.com/users/lhaausing/repos",
"events_url": "https://api.github.com/users/lhaausing/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhaausing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I had a similar problem. There is a bug in conversion script, this pull request fixes the issue: https://github.com/huggingface/transformers/pull/3642",
"@sdadas Thanks a lot. I've pulled the latest change and it works now.",
"Great to hear, closing!"
] | 1,585 | 1,586 | 1,586 | CONTRIBUTOR | null | # 🐛 Bug
We have an issue converting a RoBERTa model from the fairseq format to the huggingface format. The conversion function provided in the transformers library gives different outputs when you pass sample data through the fairseq and huggingface-transformers versions separately.
⬇️Problem with our pretrained model
There's a difference between two output tensors. The actual number is⬇️
huggingface transformers output:
> tensor([[[ 2.1787e+01, -4.7770e+00, 6.1631e+00, ..., -4.6316e+00,
> -4.7297e+00, -3.9510e-01],
> [ 2.0051e+00, -2.7158e+00, 5.2598e+00, ..., -2.3681e+00,
> -2.0179e+00, -1.5263e-02],
> [-2.7891e+00, -4.7558e+00, 5.3717e+00, ..., -4.5290e+00,
> -3.8888e+00, -5.7892e-02],
> [ 1.3125e+00, -3.9378e+00, 6.7551e+00, ..., -3.6842e+00,
> -3.4968e+00, 5.4736e-01],
> [-3.4706e+00, -7.7992e+00, 1.6678e+01, ..., -6.1806e+00,
> -7.4419e+00, -8.5062e-02]]], grad_fn=<AddBackward0>)
fairseq output:
> tensor([[[21.2672, -4.8905, 6.2439, ..., -4.8653, -4.9650, -1.6207],
> [ 1.4856, -2.8294, 5.3406, ..., -2.6018, -2.2533, -1.2408],
> [-3.3087, -4.8693, 5.4525, ..., -4.7626, -4.1241, -1.2835],
> [ 0.7930, -4.0513, 6.8359, ..., -3.9179, -3.7322, -0.6782],
> [-3.9902, -7.9127, 16.7589, ..., -6.4142, -7.6773, -1.3106]]],
> grad_fn=<AddBackward0>)
abs difference:
> tensor([[[0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256],
> [0.5195, 0.1135, 0.0808, ..., 0.2336, 0.2354, 1.2256]]],
> grad_fn=<AbsBackward>)
The same issue happens when we try to convert the default roberta-base model from fairseq format into transformers format.
We have change some source code from fairseq to register our model name and architecture(Just changes in some hyperparameters).
Our initial guess is that there are some parameters isn't or is wrongly loaded.
The error looks like this:
> max_absolute_diff = 1.2255859375
> Do both models output the same tensors? 💩
> Traceback (most recent call last):
> File "convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 181, in <module>
> args.roberta_checkpoint_path, args.pytorch_dump_folder_path, args.classification_head
> File "convert_roberta_original_pytorch_checkpoint_to_pytorch.py", line 160, in convert_roberta_checkpoint_to_pytorch
> raise Exception("Something went wRoNg")
> Exception: Something went wRoNg
Thanks a lot for helping.
## Information
Model I am using (Bert, XLNet ...): RoBERTa
Language I am using the model on (English, Chinese ...): English
## Expected behavior
Explanation about the function (or further contact in refining the function?)
## Environment info
- `transformers` version: the default version
- Platform: NYU Prince Cluster
- Python version: python 3.7
- PyTorch version (GPU?): No GPU
- Tensorflow version (GPU?): No GPU
- Using GPU in script?: No GPU
- Using distributed or parallel set-up in script?: No GPU
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3581/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3580/comments | https://api.github.com/repos/huggingface/transformers/issues/3580/events | https://github.com/huggingface/transformers/issues/3580 | 592,244,004 | MDU6SXNzdWU1OTIyNDQwMDQ= | 3,580 | wrong parameters order in TFTransfoXLMainLayer _update_mems call | {
"login": "dmytyar",
"id": 3531780,
"node_id": "MDQ6VXNlcjM1MzE3ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3531780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dmytyar",
"html_url": "https://github.com/dmytyar",
"followers_url": "https://api.github.com/users/dmytyar/followers",
"following_url": "https://api.github.com/users/dmytyar/following{/other_user}",
"gists_url": "https://api.github.com/users/dmytyar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dmytyar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dmytyar/subscriptions",
"organizations_url": "https://api.github.com/users/dmytyar/orgs",
"repos_url": "https://api.github.com/users/dmytyar/repos",
"events_url": "https://api.github.com/users/dmytyar/events{/privacy}",
"received_events_url": "https://api.github.com/users/dmytyar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Good catch @dmytyar :-) "
] | 1,585 | 1,586 | 1,586 | NONE | null | # 🐛 Bug
## Information
In the file src/transformers/modeling_tf_transfo_xl.py there is a small typo in the order of parameters in a method call.
Line 491 defines the method:
def _update_mems(self, hids, mems, qlen, mlen)
And line 610 calls it as:
new_mems = self._update_mems(hids, mems, mlen, qlen)
As you can see, qlen and mlen are passed in the wrong order. As a result, the memory size can grow beyond the specified value.
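For reference, a call consistent with the signature above would presumably be:

```python
new_mems = self._update_mems(hids, mems, qlen, mlen)
```
 | {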
"url": "https://api.github.com/repos/huggingface/transformers/issues/3580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3580/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3579/comments | https://api.github.com/repos/huggingface/transformers/issues/3579/events | https://github.com/huggingface/transformers/issues/3579 | 592,224,844 | MDU6SXNzdWU1OTIyMjQ4NDQ= | 3,579 | Summarization pipeline max_length parameter seems to just cut the summary rather than generating a complete sentence within the max length | {
"login": "Weilin37",
"id": 5770543,
"node_id": "MDQ6VXNlcjU3NzA1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5770543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Weilin37",
"html_url": "https://github.com/Weilin37",
"followers_url": "https://api.github.com/users/Weilin37/followers",
"following_url": "https://api.github.com/users/Weilin37/following{/other_user}",
"gists_url": "https://api.github.com/users/Weilin37/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Weilin37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Weilin37/subscriptions",
"organizations_url": "https://api.github.com/users/Weilin37/orgs",
"repos_url": "https://api.github.com/users/Weilin37/repos",
"events_url": "https://api.github.com/users/Weilin37/events{/privacy}",
"received_events_url": "https://api.github.com/users/Weilin37/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"**Try using the T5 summarizer instead like below:**\r\n```python\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\ntokenizer = T5Tokenizer.from_pretrained('t5-small')\r\nmodel = T5ForConditionalGeneration.from_pretrained('t5-small')\r\ninputs = tokenizer.batch_encode_plus([\"summarize: \" + example_text], max_length=1024, return_tensors=\"pt\", pad_to_max_length=True) # Batch size 1\r\noutputs = model.generate(inputs['input_ids'], num_beams=4, max_length=50, early_stopping=True)\r\n\r\nprint([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in outputs])\r\n```\r\n\r\n**The above excerpt gave me a summary of:**\r\n\r\n*'the survival rate among patients with metastatic renal-cell carcinoma has plateaued . agents such as sunitinib that target the vascular endothelial growth factor pathway are standard first-line therapy for advanced disease'*\r\n\r\n**If you still want to use Bart:**\r\n\r\nMy assumption is that this is not a bug. I may be wrong, but it seems the Bart summarizer just has a bias towards pointing to the first couple sentences of the original text. It's still abstractive, as can be seen by subtle differences in the summary you're getting. If you specify `min_length` as a higher value, like 100, you start to see that there are pointers to sentences that are not just in the first couple sentences.\r\n\r\n**Trying a `min_length` of a 100 using `bart-large-cnn` gave me the below summary:**\r\n\r\n*'Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches and resistance to cytotoxic chemotherapy. Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first-line therapy for advanced disease. **We conducted the KEYNOTE-426 trial to determine whether pembrolizumab plus axit inib would result in better outcomes than sunit in patients with previously untreated advanced renal- cell carcinoma.**'`*\r\n\r\nYou can see that the last sentence is not a part of the initial text excerpt",
"As @aychang95 suggested you have to play around with the `generate` method arguments to see what works best for your example. Especially take a look at `num_beams`, `max_length`, `min_length`, `early_stopping` and `length_penalty`. \r\n\r\nI just noticed that I forget to add a good default setting to the Bart summarization pipeline. Just uploaded it - see here: https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/config.json\r\n\r\nThe summarization pipeline should work better now :-) ",
"> As @aychang95 suggested you have to play around with the `generate` method arguments to see what works best for your example. Especially take a look at `num_beams`, `max_length`, `min_length`, `early_stopping` and `length_penalty`.\r\n> \r\n> I just noticed that I forget to add a good default setting to the Bart summarization pipeline. Just uploaded it - see here: https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/config.json\r\n> \r\n> The summarization pipeline should work better now :-)\r\n\r\nThank you! How do I go about updating the model? My code is below but I receive an error:\r\n\r\n```\r\nfrom transformers import pipeline, AutoTokenizer, AutoModel\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/bart-large-cnn\")\r\nmodel = AutoModel.from_pretrained(\"facebook/bart-large-cnn\")\r\nsummarizer = pipeline(\"summarization\", model = model, tokenizer = tokenizer)\r\n```\r\n\r\n\r\n> OSError: Model name 'facebook/bart-large-cnn' was not found in tokenizers model name list (bart-large, bart-large-mnli, bart-large-cnn, bart-large-xsum). We assumed 'facebook/bart-large-cnn' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url.\r\n",
"```\r\nfrom transformers import pipeline, AutoTokenizer, AutoModelWithLMHead\r\ntokenizer = AutoTokenizer.from_pretrained(\"bart-large-cnn\")\r\nmodel = AutoModelWithLMHead.from_pretrained(\"bart-large-cnn\")\r\nsummarizer = pipeline(\"summarization\", model = model, tokenizer = tokenizer)\r\n```\r\nworks :-).\r\n\r\nNote that \"bart-large-cnn\" is the default model for the summarization pipeline. The code above is equivalent to: \r\n\r\n```\r\nfrom transformers import pipeline\r\nsummarizer = pipeline(\"summarization\")\r\n```",
"I was also able to discover another reason of why the summarization cut off. I believe setting the max_length conflicted with whatever the default min_length was. It looks like max_length takes priority and so the summary was cut off. I think it would be useful if this was managed automatically somehow, or at least display a warning.",
"Hi @patrickvonplaten I just found that summarization takes 1024 words into consideration for generating a summary on its default parameters. I would like to know if I can increase the input size in order to consider more words while generating a summary in any case.\r\nI got the following message.\r\n\r\n`Your max_length is set to 1300, but you input_length is only 1024. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)\r\n`\r\n\r\n",
"As far as I know for `Bart` the `max_length` is 1024 and for `T5` it's 512. So depending on your model, I don't think you can increase the `max_length` higher than its `max_length` value.",
"@patrickvonplaten I got your point. I have another question, what is the maximum token ( or words ) we can provide to Bart for a summary generation. Also, what should I do in order to generate a summary from a large text which contains approximately 100k words in it?",
"A text that contains 100k words is probably more of a novel than a \"text\" :D. \r\nSo for these kinds of text using Bart you would need to chunk the text. Your memory would explode anyways at such sizes. In a couple of days we will add Reformer which can handle super long input text. We will soon also have an encoder-decoder model for Reformer which you could then use for summarization."
] | 1,585 | 1,588 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): default model from pipeline("summarization")
Language I am using the model on (English, Chinese ...): English
I am using the summarization pipeline in the most up-to-date version of Transformers. I am inputting a long piece of text and calling the summarizer as: summarizer(PIECE_OF_TEXT, max_length = 50).
I was expecting the summarizer to generate a complete summary within 50 tokens, but it seems to just cut the summary off (it ends with a comma and does not end in a grammatically sensible way). See the example below, after the parameter sketch.
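For reference, a minimal sketch of the call, passing `min_length` and `max_length` together so they do not conflict (a `max_length` that is too small relative to `min_length` or the defaults can truncate mid-sentence):

```python
# Hedged sketch: set both bounds explicitly when shortening summaries
from transformers import pipeline

summarizer = pipeline("summarization")
result = summarizer(PIECE_OF_TEXT, min_length=10, max_length=50)
print(result[0]["summary_text"])
```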
**The piece of text to be summarized:**
Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches and resistance to cytotoxic chemotherapy.1 Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first-line therapy for advanced disease.2-7 Despite the approval of several targeted therapies by entities such as the Food and Drug Administration, the European Medicines Agency, and the Pharmaceuticals and Medical Devices Agency, the survival rate among patients with metastatic renal-cell carcinoma has plateaued.
Both the VEGF receptor tyrosine kinase inhibitor axitinib and the anti–programmed death 1 (PD-1) monoclonal antibody pembrolizumab have shown antitumor activity in patients with previously untreated advanced clear-cell renal-cell carcinoma.6,10 In a phase 1b trial involving patients with previously untreated metastatic renal-cell carcinoma, 73% (95% confidence interval [CI], 59 to 84) of the patients who received pembrolizumab plus axitinib had a response; 65% of patients had at least one treatment-related adverse event.11 We conducted the KEYNOTE-426 trial to determine whether pembrolizumab plus axitinib would result in better outcomes than sunitinib in patients with previously untreated advanced renal-cell carcinoma.
**And the summary:**
Renal-cell carcinoma is characterized by susceptibility to both immunotherapeutic and antiangiogenic treatment approaches. Agents such as sunitinib that target the vascular endothelial growth factor (VEGF) pathway are standard first, axitinib and the anti–programmed death 1 (PD-1) monoclonal antibody pembrolizumab have shown antitumor activity in patients with previously untreated advanced clear-cell renal-cell carcin, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3579/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3578/comments | https://api.github.com/repos/huggingface/transformers/issues/3578/events | https://github.com/huggingface/transformers/pull/3578 | 592,214,932 | MDExOlB1bGxSZXF1ZXN0Mzk3MjQxNDc1 | 3,578 | [WIP] Adding model parallelism for T5 (should work for other models as well) | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107554019,
"node_id": "MDU6TGFiZWwyMTA3NTU0MDE5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Distributed%20Training%20/%20Models",
"name": "Distributed Training / Models",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=h1) Report\n> Merging [#3578](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4ee4da18ad659b196582bbdf40785033ee1d26b?el=desc) will **decrease** coverage by `0.10%`.\n> The diff coverage is `12.90%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3578 +/- ##\n==========================================\n- Coverage 78.05% 77.94% -0.11% \n==========================================\n Files 100 100 \n Lines 17135 17166 +31 \n==========================================\n+ Hits 13374 13380 +6 \n- Misses 3761 3786 +25 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.18% <12.90%> (-4.34%)` | :arrow_down: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3578/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.62% <0.00%> (+0.32%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=footer). Last update [a4ee4da...3bfeebe](https://codecov.io/gh/huggingface/transformers/pull/3578?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hello! Do you have plans to merge this feature to master branch? \r\nI tried to make it locally in clonned repo but I got an error while tried to use it:\r\n<ipython\r\n\r\n> -input-22-5591bd8e45c0> in main()\r\n> 143 cache_dir=model_args.cache_dir,\r\n> 144 )\r\n> --> 145 model = model.spread_on_devices(['cpu', 'cpu'])\r\n> 146 \r\n> 147 # Get datasets\r\n> \r\n> /usr/local/lib/python3.6/dist-packages/transformers/modeling_t5.py in spread_on_devices(self, devices)\r\n> 936 return\r\n> 937 \r\n> --> 938 modules_to_move = set(self.modules)\r\n> 939 \r\n> 940 # Evenly spread the blocks on devices\r\n> \r\n> TypeError: 'method' object is not iterable",
"Hey @exelents, \r\n\r\nAt the moment I don't think anybody is working on it and I'm not sure what the importance of this PR is at the moment. Feel free to take over the PR and try to make it work. I would be more than happy to help you if you open a PR :-) ",
"This is very much related: https://github.com/huggingface/transformers/issues/7526",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.",
"This feature was awesome! I think this would be a major improvement to the transformers package!"
] | 1,585 | 1,652 | 1,619 | MEMBER | null | This PR adds:
- a `get_block_list()` utility method which returns a list of the blocks in a Transformers model (currently only added for T5). A block can be a Module or a list/tuple of Modules (if a single transformer block is spread across several ModuleLists, as in XLM).
- a `spread_on_devices(devices: Optional[List] = None)` method to spread a model on several devices by distributing the transformer blocks (roughly) evenly over the provided device list, or over all visible CUDA devices if no device list is given. The first device additionally hosts the remaining non-block modules (usually the embeddings).
Currently, the code is in the T5 model but should be generic enough to be applied to other models if needed.
To use:
``` python
model = T5ForConditionalGeneration.from_pretrained('...')
model.spread_on_devices() # Will spread on all visible CUDA devices by default
input = torch.tensor([...]).to('cuda:0') # Inputs and outputs are on the first device
model(input) # you should probably use only positional arguments for the forward pass (see spread_on_devices's docstring)
```
TODO:
- [ ] try it
- [ ] add tests if possible (on a dummy device list like ['cpu', 'cpu']?)
cc @patrickvonplaten @craffel | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3578/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3578",
"html_url": "https://github.com/huggingface/transformers/pull/3578",
"diff_url": "https://github.com/huggingface/transformers/pull/3578.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3578.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3577/comments | https://api.github.com/repos/huggingface/transformers/issues/3577/events | https://github.com/huggingface/transformers/issues/3577 | 592,208,430 | MDU6SXNzdWU1OTIyMDg0MzA= | 3,577 | DistilBert not giving hidden states | {
"login": "ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ierezell",
"html_url": "https://github.com/ierezell",
"followers_url": "https://api.github.com/users/ierezell/followers",
"following_url": "https://api.github.com/users/ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/ierezell/orgs",
"repos_url": "https://api.github.com/users/ierezell/repos",
"events_url": "https://api.github.com/users/ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/ierezell/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Ierezell I am facing same issue. How did you fix this issue? "
] | 1,585 | 1,590 | 1,585 | CONTRIBUTOR | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): DistilBert
Language I am using the model on (English, Chinese ...): Multilingual
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) just running inference
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load modules
2. Create the model with output_hidden_states=True
3. Run inference on one sentence
```python
from transformers import BertModel, BertTokenizerFast, BertConfig
from transformers import DistilBertModel, DistilBertTokenizerFast, DistilBertConfig
import torch

bert_config = BertConfig(output_hidden_states=True)
bert = BertModel(bert_config)
bert = bert.from_pretrained("bert-base-multilingual-cased")
bert_tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")

distil_bert_config = DistilBertConfig(output_hidden_states=True)
distil_bert = DistilBertModel(distil_bert_config)
distil_bert = distil_bert.from_pretrained("distilbert-base-multilingual-cased")
distil_bert_tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-multilingual-cased")

sentence = "One stupid dummy sentence to test !"
input_bert = torch.tensor(bert_tokenizer.encode(sentence)).unsqueeze(0)
input_distil_bert = torch.tensor(distil_bert_tokenizer.encode(sentence)).unsqueeze(0)

output_bert = bert(input_bert)
output_distil_bert = distil_bert(input_distil_bert)
```
## Expected behavior
Return a tuple with 2 elements (like BERT).
Example of BERT's output and the desired behavior for DistilBERT:
```python
print(len(output_bert))       # => 2
print(output_bert[0].size())  # => 1, 18, 768
print(output_bert[1].size())  # => 1, 768
```
## Real behavior
Returns a tuple of only 1 element:
```python
print(len(output_distil_bert))       # => 1
print(output_distil_bert[0].size())  # => 1, 18, 768
```
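
A hedged note on a likely cause: `from_pretrained` above re-loads the default configuration, which discards the `output_hidden_states=True` set in the manually built config, and DistilBERT also has no pooler, so it never returns the BERT-style pooled second element. A minimal sketch of a fix (transformers 2.x-era API; `input_distil_bert` is the tensor built above):

```python
from transformers import DistilBertModel

# Passing the flag here keeps it in the loaded config.
distil_bert = DistilBertModel.from_pretrained(
    "distilbert-base-multilingual-cased", output_hidden_states=True
)
outputs = distil_bert(input_distil_bert)  # (last_hidden_state, hidden_states)
```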
## Environment info
- `transformers` version: 2.7.0
- Platform: Linux 5.5.8-arch1-1
- Python version: Python 3.8.1
- PyTorch version (GPU?): 1.4.0 (GPU)
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3577/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3576/comments | https://api.github.com/repos/huggingface/transformers/issues/3576/events | https://github.com/huggingface/transformers/issues/3576 | 592,194,682 | MDU6SXNzdWU1OTIxOTQ2ODI= | 3,576 | T5 fine tune for seq2seq generation | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten ",
"Not yet :-) For translation you could try to create a `run_train.py` script using the following resources:\r\n\r\n- How to run `T5` for translation: https://github.com/huggingface/transformers/tree/master/examples/translation/t5\r\n- How to train `Bart` for summarization (should be very similar to \"How to train\" T5 for translation): https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/run_train.sh and https://github.com/huggingface/transformers/blob/master/examples/summarization/bart/run_bart_sum.py",
"For dialog generation - I would not recommend using T5, but rather a \"decoder-only\" model like gpt2. You could take a look at this script with implements a SOTA dialogue bot using DialoGPT from Microsoft: https://huggingface.co/microsoft/DialoGPT-medium#how-to-use",
"Thanks for answering @patrickvonplaten \r\nOne more query: how to create the **data files** and **vocab file** for T5. \r\n\r\nIf I am not wrong, it requires 4 data files and 1 vocab file. And in train.source and val.source files, each instance should have a prefix like \"translate English to German: \". I prepare the data in the same format.\r\nIt gives this error.\r\nTypeError: 'NoneType' object is not iterable\r\n",
"@sshleifer ",
"in examples/transformer_base.py change line 105 to\r\n```python\r\navg_loss = getattr(self.trainer, \"avg_loss\", 0.0)\r\n```",
"@sshleifer Thanks for the reply. It didn't work. and have the same error. Below is the screenshot of the changed line.\r\n\r\n<img width=\"937\" alt=\"Screenshot 2020-04-12 at 9 17 24 PM\" src=\"https://user-images.githubusercontent.com/30004110/79077852-8a308c00-7d04-11ea-92a2-30d9c6fb0c84.png\">",
"@prabalbansal I think @sshleifer means line 107, so I added this patch in PR #3768",
"@hugoabonizio Thanks for the patch. It works.",
"@sshleifer @hugoabonizio When I use the model to predict for test set using the following command:\r\n\r\npython '/content/transformers-master/examples/summarization/bart/run_bart_sum.py' --data_dir='/content/drive/My Drive/two_keywords/' --model_type=t5 --output_dir=/content/t5 --do_predict --model_name_or_path=t5-small\r\n\r\nError generated:\r\n<img width=\"1010\" alt=\"Screenshot 2020-04-13 at 6 18 47 PM\" src=\"https://user-images.githubusercontent.com/30004110/79137728-8a3b9500-7db3-11ea-90e4-218cbc3e1e74.png\">\r\n",
"> Thanks for answering @patrickvonplaten\r\n> One more query: how to create the **data files** and **vocab file** for T5.\r\n> \r\n> If I am not wrong, it requires 4 data files and 1 vocab file. And in train.source and val.source files, each instance should have a prefix like \"translate English to German: \". I prepare the data in the same format.\r\n> It gives this error.\r\n> TypeError: 'NoneType' object is not iterable\r\n> \r\n\r\nHi, I didn't understand why you have to prepare the vocab file. I think the pertrained T5 and its default tokenizer will take care of the tokenization? Thanks for your response.",
"@MichaelZhouwang yes we didn't need vocab file here.",
"> For dialog generation - I would not recommend using T5, but rather a \"decoder-only\" model like gpt2. You could take a look at this script with implements a SOTA dialogue bot using DialoGPT from Microsoft: https://huggingface.co/microsoft/DialoGPT-medium#how-to-use\r\n\r\n\r\nHi @patrickvonplaten,\r\n\r\nI'm trying out T5 and BART for dialogue generation. \r\n\r\nI'm wondering why you say that it's better to just have a decoder. Both FacebookAI's Blender and Google's Meena had encoders in their architectures.\r\n\r\nWhat's the reason for decoder-only systems being better?",
"1) @Valdegg I think you are correct that it makes sense to use a seq2seq model.\r\n2) We are also currently working on porting blenderbot from parlai, which was trained on dialogue. 3) 3) We have new forums at https://discuss.huggingface.co/ for discussing higher-level things like which model to use. ",
"The same question is there any example about training T5 to translate multiple sentences?"
] | 1,585 | 1,620 | 1,587 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi
Is a script available for fine-tuning T5 base or large to do seq2seq generative tasks like translation or dialog generation?
https://github.com/huggingface/transformers/blob/master/examples/run_generation.py
It doesn't seem to support T5.
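
In the meantime, here is a minimal hedged sketch of one seq2seq fine-tuning step for T5, assuming a release where the class is named `T5ForConditionalGeneration` and the loss argument is called `lm_labels` (it was renamed `labels` in later versions):

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Task prefix in the source text, as the T5 paper and translation example do.
src = "translate English to German: The house is wonderful."
tgt = "Das Haus ist wunderbar."
input_ids = tokenizer.encode(src, return_tensors="pt")
labels = tokenizer.encode(tgt, return_tensors="pt")

loss = model(input_ids=input_ids, lm_labels=labels)[0]  # loss is the first output
loss.backward()
optimizer.step()
```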
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3576/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3576/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3575/comments | https://api.github.com/repos/huggingface/transformers/issues/3575/events | https://github.com/huggingface/transformers/pull/3575 | 592,171,570 | MDExOlB1bGxSZXF1ZXN0Mzk3MjA1NDMx | 3,575 | Create README.md | {
"login": "ahotrod",
"id": 44321615,
"node_id": "MDQ6VXNlcjQ0MzIxNjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/44321615?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahotrod",
"html_url": "https://github.com/ahotrod",
"followers_url": "https://api.github.com/users/ahotrod/followers",
"following_url": "https://api.github.com/users/ahotrod/following{/other_user}",
"gists_url": "https://api.github.com/users/ahotrod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahotrod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahotrod/subscriptions",
"organizations_url": "https://api.github.com/users/ahotrod/orgs",
"repos_url": "https://api.github.com/users/ahotrod/repos",
"events_url": "https://api.github.com/users/ahotrod/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahotrod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3575/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3575",
"html_url": "https://github.com/huggingface/transformers/pull/3575",
"diff_url": "https://github.com/huggingface/transformers/pull/3575.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3575.patch",
"merged_at": 1585878193000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3574/comments | https://api.github.com/repos/huggingface/transformers/issues/3574/events | https://github.com/huggingface/transformers/issues/3574 | 592,043,623 | MDU6SXNzdWU1OTIwNDM2MjM= | 3,574 | [Benchmark] QUAERO French Medical Corpus for Named Entity Recognition | {
"login": "PieterDujardin",
"id": 48496355,
"node_id": "MDQ6VXNlcjQ4NDk2MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/48496355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PieterDujardin",
"html_url": "https://github.com/PieterDujardin",
"followers_url": "https://api.github.com/users/PieterDujardin/followers",
"following_url": "https://api.github.com/users/PieterDujardin/following{/other_user}",
"gists_url": "https://api.github.com/users/PieterDujardin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PieterDujardin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PieterDujardin/subscriptions",
"organizations_url": "https://api.github.com/users/PieterDujardin/orgs",
"repos_url": "https://api.github.com/users/PieterDujardin/repos",
"events_url": "https://api.github.com/users/PieterDujardin/events{/privacy}",
"received_events_url": "https://api.github.com/users/PieterDujardin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🖥 Benchmarking `transformers`
## Benchmark
I am trying the Transformers library out on [QUAERO French Medical Corpus](https://quaerofrenchmed.limsi.fr/) NER-dataset (consisting of Medline titles and EMEA documents). Using Camembert-base 'out of the box' with default hyperparameters, I get an F1 measure of 85% on the EMEA dataset and 64% on Medline, while [one of the few papers](http://ceur-ws.org/Vol-1391/158-CR.pdf) I found that did an experiment on the same dataset reported F1 of 70% and 52%, respectively, using a classic CRF.
Since extensive grid search for hyperparameter optimization is computationally expensive even with a GPU, and given that I am relatively new to the field, I was wondering how to actually go about further optimizing the current model that I have. Is it even worth doing a lot of hyperparameter optimization or is it common that out of the box transformer models already do a decent job?
In particular, I'm not so sure which of the following things are worthwhile trying out;
- Making the input for the BERT model longer than 1 sentence
e.g. 3 input sentences [CLS] sentence1 sentence2 sentence3 [SEP]
- Extending the vocab of the tokenizer by adding tokens related to the medical domain? Not sure
if this even makes sense doing..
- Which hyperparameters do I focus on the most?
I assume epochs and batch size are the most important, but which others are worth trying out as well?
- Other suggestions to improve this model?
If you need more information regarding the setup I will be happy to provide it, thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3574/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3573/comments | https://api.github.com/repos/huggingface/transformers/issues/3573/events | https://github.com/huggingface/transformers/issues/3573 | 592,014,338 | MDU6SXNzdWU1OTIwMTQzMzg= | 3,573 | How can I use masked_lm_labels correctly? | {
"login": "smelly-dog",
"id": 32981640,
"node_id": "MDQ6VXNlcjMyOTgxNjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/32981640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smelly-dog",
"html_url": "https://github.com/smelly-dog",
"followers_url": "https://api.github.com/users/smelly-dog/followers",
"following_url": "https://api.github.com/users/smelly-dog/following{/other_user}",
"gists_url": "https://api.github.com/users/smelly-dog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smelly-dog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smelly-dog/subscriptions",
"organizations_url": "https://api.github.com/users/smelly-dog/orgs",
"repos_url": "https://api.github.com/users/smelly-dog/repos",
"events_url": "https://api.github.com/users/smelly-dog/events{/privacy}",
"received_events_url": "https://api.github.com/users/smelly-dog/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can any one help me\r\nQAQ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi, I have a dataset with examples like:
From Monday to Friday most people are busy working or studying, but in the evenings and weekends they are free and _ themselves.
There are four candidates for the blank:
["love", "work", "enjoy", "play"], where "enjoy" is the correct answer. It is a cloze-style task, much like the masked LM objective in BERT.
I want to train the model so that it performs better on this task. I notice there is a parameter called masked_lm_labels that can be used to compute the masked language modeling loss. What should I do to train the BertForMaskedLM model with it?
Do you have any example? Or can you show me how to do that?
Thanks!
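
A minimal hedged sketch of training on one such cloze example with `masked_lm_labels` (transformers 2.x-era API, where non-masked positions are set to -100 so the loss ignores them; the argument was later renamed `labels`):

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "in the evenings and weekends they are free and [MASK] themselves."
input_ids = tokenizer.encode(text, return_tensors="pt")

labels = input_ids.clone()
mask_positions = input_ids == tokenizer.mask_token_id
labels[~mask_positions] = -100  # only the [MASK] slot contributes to the loss
labels[mask_positions] = tokenizer.convert_tokens_to_ids("enjoy")

loss = model(input_ids, masked_lm_labels=labels)[0]
loss.backward()
```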
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3572/comments | https://api.github.com/repos/huggingface/transformers/issues/3572/events | https://github.com/huggingface/transformers/issues/3572 | 591,982,426 | MDU6SXNzdWU1OTE5ODI0MjY= | 3,572 | BART run run_train.sh RuntimeError: expected device cuda:0 but got device cpu | {
"login": "chenbingxiayu",
"id": 23647595,
"node_id": "MDQ6VXNlcjIzNjQ3NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/23647595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenbingxiayu",
"html_url": "https://github.com/chenbingxiayu",
"followers_url": "https://api.github.com/users/chenbingxiayu/followers",
"following_url": "https://api.github.com/users/chenbingxiayu/following{/other_user}",
"gists_url": "https://api.github.com/users/chenbingxiayu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenbingxiayu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenbingxiayu/subscriptions",
"organizations_url": "https://api.github.com/users/chenbingxiayu/orgs",
"repos_url": "https://api.github.com/users/chenbingxiayu/repos",
"events_url": "https://api.github.com/users/chenbingxiayu/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenbingxiayu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I meet the same bug when i use bart as an embedding layer.\r\nHave u solve the problem?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> I meet the same bug when i use bart as an embedding layer.\r\n> Have u solve the problem?\r\n\r\nany luck on this? seeing the same \"RuntimeError: expected device cuda:0 but got device cpu\""
] | 1,585 | 1,611 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using: BART
Language I am using the model on (English, Chinese ...): English (CNN/DailyMail)
The problem arises when using:
* [x] the official example scripts: (give details below)
text summarization with BART: running the script run_train.sh, following the guidelines
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
the text summarization task, using the CNN/DailyMail dataset.
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the script examples/summarization/bart/run_train.sh
2. The following error is raised: RuntimeError: expected device cuda:0 but got device cpu
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: I pip install transformers two days ago, I am not sure the version.
- Platform: ubuntu 1604
- Python version: 3.6.7
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Traceback:
```
Traceback (most recent call last):
File "run_bart_sum.py", line 166, in <module>
trainer = generic_train(model, args)
File "******/transformer_base.py", line 304, in generic_train
trainer.fit(model)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 676, in fit
mp.spawn(self.ddp_train, nprocs=self.num_gpus, args=(model,))
File "******/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "******/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "******/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_data_parallel.py", line 341, in ddp_train
self.run_pretrain_routine(model)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 924, in run_pretrain_routine
False)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 263, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 418, in evaluation_forward
output = model(*args)
File "******/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "******/anaconda3/lib/python3.6/site-packages/pytorch_lightning/overrides/data_parallel.py", line 96, in forward
output = self.module.validation_step(*inputs[0], **kwargs[0])
File "******/run_bart_sum.py", line 58, in validation_step
loss = self._step(batch)
File "******/run_bart_sum.py", line 44, in _step
lm_labels=lm_labels.cuda(),
File "******/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "******/run_bart_sum.py", line 32, in forward
lm_labels=lm_labels,
File "******/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 925, in forward
decoder_cached_states=decoder_cached_states,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 844, in forward
decoder_cached_states=decoder_cached_states,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 499, in forward
need_attn_weights=self.output_attentions,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 372, in forward
attn_mask=attention_mask,
File "******/anaconda3/lib/python3.6/site-packages/transformers/modeling_bart.py", line 629, in forward
attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attn_mask
RuntimeError: expected device cuda:0 but got device cpu
```
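
A hedged sketch of the usual fix for this class of error: move every input tensor onto the same device as the model before the forward pass. The model identifier and GPU availability here are assumptions:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tokenizer = BartTokenizer.from_pretrained("bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("bart-large-cnn").to(device)

input_ids = tokenizer.encode("a short article ...", return_tensors="pt").to(device)
outputs = model(input_ids=input_ids)  # inputs and model now share cuda:0
```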
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3572/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3571/comments | https://api.github.com/repos/huggingface/transformers/issues/3571/events | https://github.com/huggingface/transformers/issues/3571 | 591,845,403 | MDU6SXNzdWU1OTE4NDU0MDM= | 3,571 | transformers pipeline in GCP cloud functions | {
"login": "vijender412",
"id": 31817007,
"node_id": "MDQ6VXNlcjMxODE3MDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/31817007?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vijender412",
"html_url": "https://github.com/vijender412",
"followers_url": "https://api.github.com/users/vijender412/followers",
"following_url": "https://api.github.com/users/vijender412/following{/other_user}",
"gists_url": "https://api.github.com/users/vijender412/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vijender412/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vijender412/subscriptions",
"organizations_url": "https://api.github.com/users/vijender412/orgs",
"repos_url": "https://api.github.com/users/vijender412/repos",
"events_url": "https://api.github.com/users/vijender412/events{/privacy}",
"received_events_url": "https://api.github.com/users/vijender412/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
}
] | closed | false | null | [] | [
"You could cache the model once in your environment and then load it from there. Just point `from_pretrained` to the directory containing the model and configuration (or tokenizer file if loading the tokenizer) instead of the S3 link.",
"@LysandreJik Completely understood and right but this is different case here. I am trying to make use of pipeline and load using pipeline (). And now how this can be achieved in GCP. ",
"The pipeline also accepts directories as models and tokenizers. See the [pipeline documentation](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.pipeline):\r\n\r\n- model (str or PreTrainedModel or TFPreTrainedModel, optional, defaults to None) –\r\n\r\n The model that will be used by the pipeline to make predictions. This can be None, **a string checkpoint identifier** or an actual pre-trained model inheriting from PreTrainedModel for PyTorch and TFPreTrainedModel for TensorFlow.\r\n\r\n If None, the default of the pipeline will be loaded.",
"Thanks @LysandreJik. The documentation link did helped to get better clarity. Will try and get back. ",
"Tried this way ```\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-cased-distilled-squad\")\r\n\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(\"https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-pytorch_model.bin\")\r\n\r\nnlp_qa = pipeline('question-answering', model=model, tokenizer=tokenizer)\r\n```\r\nGetting \r\n`UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte`",
"You can't put a URL like that. It has to be a local file path, like it is shown in the documentation. You can either fetch them and save them to a directory:\r\n\r\n```\r\nmkdir local_path\r\ncd local_path\r\n\r\nwget https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-pytorch_model.bin\r\nwget https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-config.json\r\nwget https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt\r\n```\r\n\r\nor in Python:\r\n\r\n```py\r\nmodel = DistilBertModel.from_pretrained(\"distilbert-base-cased-distilled-squad\")\r\nmodel.save_pretrained(\"local_path\")\r\n\r\ntokenizer = DistilBertTokenizer.from_pretrained(\"distilbert-base-cased-distilled-squad\")\r\ntokenizer.save_pretrained(\"local_path\")\r\n```\r\n\r\nYou can then access this model/tokenizer:\r\n\r\n```py\r\nnlp = pipeline(\"question-answering\", model=\"local_path\", tokenizer=\"local_path\")\r\n```",
"Thanks @LysandreJik ",
"@vijender412 may I ask how you got a newer version of pytorch to work on cloud function? I'm unable to get mine to build with anything later than torch vesrion 1.0.1 which is stopping me from using pipeline :/",
"I tried to run transformer on Cloud Functions v1 but as expected I could not run it due to the lack of it's resources.\r\n\r\n@vijender412 \r\nDid you make it?"
] | 1,585 | 1,661 | 1,589 | NONE | null | # ❓ Transformers pipeline in GCP
I am trying to use the transformers pipeline in GCP Cloud Functions. Every time the function is called, the model is downloaded again. How can we resolve this?
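
A hedged sketch of the caching approach: download and save the model once (for example at build time or onto persistent storage), then point the pipeline at that directory. `local_path` and the checkpoint name are assumed examples:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

local_path = "local_path"  # a directory bundled with / persisted for the function
AutoModelForQuestionAnswering.from_pretrained(
    "distilbert-base-cased-distilled-squad"
).save_pretrained(local_path)
AutoTokenizer.from_pretrained(
    "distilbert-base-cased-distilled-squad"
).save_pretrained(local_path)

# Inside the cloud function: loads from disk, no download.
nlp = pipeline("question-answering", model=local_path, tokenizer=local_path)
```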
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3571/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3570/comments | https://api.github.com/repos/huggingface/transformers/issues/3570/events | https://github.com/huggingface/transformers/issues/3570 | 591,803,928 | MDU6SXNzdWU1OTE4MDM5Mjg= | 3,570 | tokenizer cannot load form model on disk | {
"login": "makaveli10",
"id": 39617050,
"node_id": "MDQ6VXNlcjM5NjE3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/39617050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makaveli10",
"html_url": "https://github.com/makaveli10",
"followers_url": "https://api.github.com/users/makaveli10/followers",
"following_url": "https://api.github.com/users/makaveli10/following{/other_user}",
"gists_url": "https://api.github.com/users/makaveli10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makaveli10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makaveli10/subscriptions",
"organizations_url": "https://api.github.com/users/makaveli10/orgs",
"repos_url": "https://api.github.com/users/makaveli10/repos",
"events_url": "https://api.github.com/users/makaveli10/events{/privacy}",
"received_events_url": "https://api.github.com/users/makaveli10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@makaveli10 how to solve it ?"
] | 1,585 | 1,596 | 1,585 | NONE | null | I wanted to load a model saved on disk, but it keeps throwing this error:

```
File "train.py", line 1, in <module>
    import config
File "/media/saurabh/D/code/bert_imdb_sentiment/src/config.py", line 14, in <module>
    do_lower_case=True
File "/home/saurabh/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 393, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
File "/home/saurabh/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 496, in _from_pretrained
    list(cls.vocab_files_names.values()),
OSError: Model name '.../input/bert_based_uncased/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed '..input//bert_based_uncased/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
```
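
For reference, a minimal sketch of the save/load round trip the error message implies is failing: `save_pretrained` writes `vocab.txt` into the target directory, and `from_pretrained` must point at that same directory (the path here is an assumed example):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("input/bert_based_uncased")  # writes vocab.txt here

tokenizer = BertTokenizer.from_pretrained("input/bert_based_uncased", do_lower_case=True)
```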
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3570/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3569/comments | https://api.github.com/repos/huggingface/transformers/issues/3569/events | https://github.com/huggingface/transformers/issues/3569 | 591,795,687 | MDU6SXNzdWU1OTE3OTU2ODc= | 3,569 | Regarding distilbert-multilingual-uncased model | {
"login": "divyag11",
"id": 39218807,
"node_id": "MDQ6VXNlcjM5MjE4ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/39218807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/divyag11",
"html_url": "https://github.com/divyag11",
"followers_url": "https://api.github.com/users/divyag11/followers",
"following_url": "https://api.github.com/users/divyag11/following{/other_user}",
"gists_url": "https://api.github.com/users/divyag11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/divyag11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyag11/subscriptions",
"organizations_url": "https://api.github.com/users/divyag11/orgs",
"repos_url": "https://api.github.com/users/divyag11/repos",
"events_url": "https://api.github.com/users/divyag11/events{/privacy}",
"received_events_url": "https://api.github.com/users/divyag11/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | I am using the pretrained distilbert-multilingual-uncased model to get sentence embeddings. Which layer would be best for extracting a semantic embedding of a sentence? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3569/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3568/comments | https://api.github.com/repos/huggingface/transformers/issues/3568/events | https://github.com/huggingface/transformers/pull/3568 | 591,768,866 | MDExOlB1bGxSZXF1ZXN0Mzk2ODczNzgy | 3,568 | Create README.md | {
"login": "redewiedergabe",
"id": 47349182,
"node_id": "MDQ6VXNlcjQ3MzQ5MTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/47349182?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/redewiedergabe",
"html_url": "https://github.com/redewiedergabe",
"followers_url": "https://api.github.com/users/redewiedergabe/followers",
"following_url": "https://api.github.com/users/redewiedergabe/following{/other_user}",
"gists_url": "https://api.github.com/users/redewiedergabe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/redewiedergabe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/redewiedergabe/subscriptions",
"organizations_url": "https://api.github.com/users/redewiedergabe/orgs",
"repos_url": "https://api.github.com/users/redewiedergabe/repos",
"events_url": "https://api.github.com/users/redewiedergabe/events{/privacy}",
"received_events_url": "https://api.github.com/users/redewiedergabe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=h1) Report\n> Merging [#3568](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0&el=desc) will **increase** coverage by `0.96%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3568 +/- ##\n==========================================\n+ Coverage 76.90% 77.87% +0.96% \n==========================================\n Files 100 100 \n Lines 17127 17127 \n==========================================\n+ Hits 13172 13338 +166 \n+ Misses 3955 3789 -166 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3568/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=footer). Last update [b38d552...2298b3b](https://codecov.io/gh/huggingface/transformers/pull/3568?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Can you please add a \r\n\r\n```\r\n---\r\nlanguage: german\r\n---\r\n```\r\nmetadata block at the top of the file? \r\n\r\nAlso, cc'ing @severinsimmler who might be interested in this (if you guys don't know each other already)",
"Hey! I added the metablock.\r\n\r\nOne question: We uploaded our models as described in the huggingface documentation and everything looks okay, but when we try to test them with the suggested code, we get a message that the model is not found (OS Error: Model name was not found in tokenizers model name list). Could you please check if we made some error?\r\n\r\nthis is the test code we used:\r\n\r\n> tokenizer = AutoTokenizer.from_pretrained(\"redewiedergabe/bert-base-historical-german-rw-cased\")\r\n ",
"Hi @redewiedergabe,\r\n\r\nI can't reproduce your error on `transformers` 2.7.0, Python 3.7.6 and macOS 10.15.4. Does loading the model with `AutoModel` work?",
"works for me too",
"Thanks! [Model page](https://huggingface.co/redewiedergabe/bert-base-historical-german-rw-cased)"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | added documentation for our fine-tuned BERT model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3568/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3568",
"html_url": "https://github.com/huggingface/transformers/pull/3568",
"diff_url": "https://github.com/huggingface/transformers/pull/3568.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3568.patch",
"merged_at": 1585878512000
} |
https://api.github.com/repos/huggingface/transformers/issues/3567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3567/comments | https://api.github.com/repos/huggingface/transformers/issues/3567/events | https://github.com/huggingface/transformers/pull/3567 | 591,692,291 | MDExOlB1bGxSZXF1ZXN0Mzk2ODEyMTM4 | 3,567 | Add tiny-bert-bahasa-cased model card | {
"login": "huseinzol05",
"id": 19810909,
"node_id": "MDQ6VXNlcjE5ODEwOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/19810909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/huseinzol05",
"html_url": "https://github.com/huseinzol05",
"followers_url": "https://api.github.com/users/huseinzol05/followers",
"following_url": "https://api.github.com/users/huseinzol05/following{/other_user}",
"gists_url": "https://api.github.com/users/huseinzol05/gists{/gist_id}",
"starred_url": "https://api.github.com/users/huseinzol05/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/huseinzol05/subscriptions",
"organizations_url": "https://api.github.com/users/huseinzol05/orgs",
"repos_url": "https://api.github.com/users/huseinzol05/repos",
"events_url": "https://api.github.com/users/huseinzol05/events{/privacy}",
"received_events_url": "https://api.github.com/users/huseinzol05/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3567/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3567",
"html_url": "https://github.com/huggingface/transformers/pull/3567",
"diff_url": "https://github.com/huggingface/transformers/pull/3567.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3567.patch",
"merged_at": 1585739700000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/3566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3566/comments | https://api.github.com/repos/huggingface/transformers/issues/3566/events | https://github.com/huggingface/transformers/pull/3566 | 591,674,925 | MDExOlB1bGxSZXF1ZXN0Mzk2Nzk3ODk4 | 3,566 | BertJapaneseTokenizer accept options for mecab | {
"login": "tamuhey",
"id": 24998666,
"node_id": "MDQ6VXNlcjI0OTk4NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/24998666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tamuhey",
"html_url": "https://github.com/tamuhey",
"followers_url": "https://api.github.com/users/tamuhey/followers",
"following_url": "https://api.github.com/users/tamuhey/following{/other_user}",
"gists_url": "https://api.github.com/users/tamuhey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tamuhey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tamuhey/subscriptions",
"organizations_url": "https://api.github.com/users/tamuhey/orgs",
"repos_url": "https://api.github.com/users/tamuhey/repos",
"events_url": "https://api.github.com/users/tamuhey/events{/privacy}",
"received_events_url": "https://api.github.com/users/tamuhey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@singletongue Please give your opinion.",
"Great! It looks good to me.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=h1) Report\n> Merging [#3566](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0&el=desc) will **increase** coverage by `0.96%`.\n> The diff coverage is `33.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3566 +/- ##\n==========================================\n+ Coverage 76.90% 77.87% +0.96% \n==========================================\n Files 100 100 \n Lines 17127 17127 \n==========================================\n+ Hits 13172 13338 +166 \n+ Misses 3955 3789 -166 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `67.46% <33.33%> (ø)` | |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3566/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=footer). Last update [b38d552...15522ac](https://codecov.io/gh/huggingface/transformers/pull/3566?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good to me. @LysandreJik?"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Now we can pass `mecab_kwargs` to `BertJapaneseTokenizer.__init__` and configure the tokenizer more precisely.
Changes:
1. `BertJapaneseTokenizer.__init__` accepts a `mecab_kwargs` keyword argument, which is passed directly to `MeCabTokenizer.__init__`.
2. `normalize_text` in `MeCabTokenizer` can now be disabled through `mecab_kwargs`.
3. Arguments can also be passed to `MeCab.Tagger.__init__` through `mecab_kwargs["mecab_option"]`, which is useful for customizing MeCab's dictionary. A hedged usage sketch follows.
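
A hedged usage sketch of the new option (the model identifier and dictionary path are assumed examples):

```python
from transformers import BertJapaneseTokenizer

tokenizer = BertJapaneseTokenizer.from_pretrained(
    "bert-base-japanese",
    mecab_kwargs={
        "normalize_text": False,                   # change 2
        "mecab_option": "-d /path/to/custom/dic",  # change 3: custom dictionary
    },
)
```
| {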
"url": "https://api.github.com/repos/huggingface/transformers/issues/3566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3566/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3566",
"html_url": "https://github.com/huggingface/transformers/pull/3566",
"diff_url": "https://github.com/huggingface/transformers/pull/3566.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3566.patch",
"merged_at": 1585926739000
} |
https://api.github.com/repos/huggingface/transformers/issues/3565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3565/comments | https://api.github.com/repos/huggingface/transformers/issues/3565/events | https://github.com/huggingface/transformers/issues/3565 | 591,672,207 | MDU6SXNzdWU1OTE2NzIyMDc= | 3,565 | Language model fine tuning using scibert as the base model | {
"login": "graviraja",
"id": 7556119,
"node_id": "MDQ6VXNlcjc1NTYxMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7556119?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/graviraja",
"html_url": "https://github.com/graviraja",
"followers_url": "https://api.github.com/users/graviraja/followers",
"following_url": "https://api.github.com/users/graviraja/following{/other_user}",
"gists_url": "https://api.github.com/users/graviraja/gists{/gist_id}",
"starred_url": "https://api.github.com/users/graviraja/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/graviraja/subscriptions",
"organizations_url": "https://api.github.com/users/graviraja/orgs",
"repos_url": "https://api.github.com/users/graviraja/repos",
"events_url": "https://api.github.com/users/graviraja/events{/privacy}",
"received_events_url": "https://api.github.com/users/graviraja/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"SciBert is a BERT model.\r\n\r\nPlease be more descriptive. Saying that features are not calculated correctly is not very helpful. Please describe the problem in full.",
"Hi @BramVanroy , following is the error stack\r\n```code\r\n04/01/2020 09:10:02 - INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, block_size=1000000000000, cache_dir=None, config_name=None, device=device(type='cuda'), do_eval=True, do_train=True, eval_all_checkpoints=False, eval_data_file='/media/data1/ravi/covid-challenge/lm_data/dev.txt', evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=5e-05, line_by_line=False, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_steps=-1, mlm=True, mlm_probability=0.15, model_name_or_path='allenai/scibert_scivocab_uncased', model_type='bert', n_gpu=1, no_cuda=False, num_train_epochs=1.0, output_dir='./lm_finetune', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, save_steps=500, save_total_limit=None, seed=42, server_ip='', server_port='', should_continue=False, tokenizer_name='allenai/scibert_scivocab_uncased', train_data_file='/media/data1/ravi/covid-challenge/lm_data/train.txt', warmup_steps=0, weight_decay=0.0)\r\n04/01/2020 09:10:02 - INFO - __main__ - Creating features from dataset file at /media/data1/ravi/covid-challenge/lm_data\r\n04/01/2020 09:28:10 - INFO - __main__ - Saving features into cached file /media/data1/ravi/covid-challenge/lm_data/bert_cached_lm_999999999998_train.txt\r\nTraceback (most recent call last):\r\n File \"examples/run_language_modeling.py\", line 781, in <module>\r\n main()\r\n File \"examples/run_language_modeling.py\", line 731, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"examples/run_language_modeling.py\", line 224, in train\r\n train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset)\r\n File \"/media/data2/anaconda/envs/covid/lib/python3.6/site-packages/torch/utils/data/sampler.py\", line 94, in __init__\r\n \"value, but got num_samples={}\".format(self.num_samples))\r\nValueError: num_samples should be a positive integer value, but got num_samples=0\r\n```\r\n\r\nIn the `bert_cached_lm_999999999998_train.txt` file there is only one line. `train.txt` file is of size **345MB**.\r\n\r\nThank you for your help!!",
"This seems to indicate a problem with the dataset. Can you post the contents of bert_cached_lm_999999999998_train.txt? Maybe you chose a block size that is larger than your data size.",
"This is probably the same error as https://github.com/huggingface/transformers/issues/3443#issuecomment-607422291",
"@graviraja Had the same issue which is most likely related to \r\n`model = torch.nn.DataParallel(model)` in [trainer.py](https://github.com/huggingface/transformers/blob/8e093e5981e573a0b591dc57e8d52cc3efe82230/src/transformers/trainer.py#L250)\r\n\r\nUncommenting this line or using only one GPU \r\n`export CUDA_VISIBLE_DEVICES=1`\r\nworks in my case:\r\n",
"> CUDA_VISIBLE_DEVICES\r\n\r\nNote that `CUDA_VISIBLE_DEVICES=1` does not mean to use \"just one GPU\", but it means to specifically use GPU with ID#1. However, device are zero-indexed, so the first GPU on your system will typically be #0: `CUDA_VISIBLE_DEVICES=0`",
"Thanks, @BramVanroy , makes perfect sense.\r\n\r\n@graviraja I've set up a [notebook](https://github.com/Nikoschenk/language_model_finetuning/blob/master/scibert_fine_tuner.ipynb) with the required functionality. Previous comments regarding `block_size` were in fact crucial.\r\n\r\n",
"Thank you @Nikoschenk @BramVanroy for the support."
] | 1,585 | 1,588 | 1,588 | NONE | null | # 🐛 Bug
## Information
I am trying to fine-tune the SciBERT model on a COVID dataset. Features for the training data are not being calculated properly.
Model I am using (Bert, XLNet ...): allenai/scibert_scivocab_uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on are:
* [x] an official GLUE/SQuAD task: Language Modelling
* [ ] my own task or dataset: my own dataset in text files
## To reproduce
Steps to reproduce the behavior:
1. python examples/run_language_modeling.py --output_dir=./lm_finetune --model_name_or_path=allenai/scibert_scivocab_uncased --do_train --train_data_file=lm_data/train.txt --do_eval --eval_data_file=lm_data/dev.txt --mlm --tokenizer_name allenai/scibert_scivocab_uncased --model_type bert
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Finetuned model and perplexity score on evaluation data
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.6
- Platform: "CentOS Linux 7"
- Python version: 3.6.9
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): No
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
What should I provide the `model_type` as?
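Based on the discussion in the comments (the cached dataset ends up with a single example because no `--block_size` is passed, so it falls back to the tokenizer's huge default maximum length), a likely fix is to set an explicit block size. A sketch of the same command with an assumed value of 512:

```bash
python examples/run_language_modeling.py \
    --output_dir=./lm_finetune \
    --model_name_or_path=allenai/scibert_scivocab_uncased \
    --model_type bert \
    --tokenizer_name allenai/scibert_scivocab_uncased \
    --train_data_file=lm_data/train.txt \
    --eval_data_file=lm_data/dev.txt \
    --do_train --do_eval --mlm \
    --block_size 512
```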
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3565/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3564/comments | https://api.github.com/repos/huggingface/transformers/issues/3564/events | https://github.com/huggingface/transformers/issues/3564 | 591,636,486 | MDU6SXNzdWU1OTE2MzY0ODY= | 3,564 | Tokenizers: setting bos_token_id = 0 and adding language_pair_codes | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"You can't set the ids, they are set automatically from the sentence piece model.\r\nBut (1) why are you using the T5Tokenizer for a Bart checkpoint and (2) why do you want to tweak the id?",
"(1) I used the `T5Tokenizer` in order to make a runnable example that did not require checking out my `mbart` branch.\r\n\r\n(2) Fairseq's MBART logic is split into two stages:\r\n- use `spm_encode --model sentence.bpe.model` to preprocess. (this is like encode_as_pieces in python).\r\n- use a `vocab.json` style lookup to convert each token to an ID.\r\n\r\nI'm trying to do that in one step, using `sp_model.encode_as_ids`, but my ids are off by 1, because the special tokens (sp_model.bos_token, etc) are different than fairseq's dictionary object:\r\n\r\n\r\n\r\n\r\nSo I need to either manipulate the sp_model, retrain it with correct control codes, or try a different approach.\r\n\r\n",
"Yes you can check how we do these token index offset stuff (it’s specific to fairseq + sentencepiece) in Camembert and XLMRoberta tokenizers.",
"Extremely helpful! Mbart also adds a language code like en_XX and ro_RO to the end of the source and target sentences. So the sentences are like `[tokens]+[<eos>, <language_id>]`\r\n\r\nDo we have any tokenizers that do that? ",
"can't find an easy way to generate examples like\r\n```python\r\ninput_ids = [src_tokens]+[<eos>, <src_language_id>]\r\ndecoder_input_ids = [tgt_tokens]+[<eos>, <tgt_language_id>]\r\n```\r\nwhere the special tokens depend on the language.\r\n\r\nMy best idea is to add a method \r\n```python\r\ndef prepare_language_pair_batch(self, source_sentences, source_lang, target_sentences=None, target_lang=None):\r\n\t# encode source sentence\r\n\t# if target_sentence is None ignore it else process it\r\n return {input_ids=encoded_source_ids, attention_mask=attention_mask, decoder_input_ids=processed_target}\r\n```\r\n(Could also overwrite `prepare_inputs_for_model` and add arguments.)\r\n\r\n\r\nTwo other ideas that don't quite work:\r\n- Try to stuff the language codes into the string as text in `prepare_text_for_tokenization`. The problem is this would go before EOS.\r\n- Try to do the magic in `build_inputs_with_special_tokens`. the problem is that you still can't use `prepare_for_model` because it doesn't pass kwargs to `build_inputs_with_special_tokens`.\r\n\r\nWe could also instantiate two tokenizers with different special tokens, but that feels wasteful.\r\n\r\n@LysandreJik @patrickvonplaten ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Yes you can check how we do these token index offset stuff (it’s specific to fairseq + sentencepiece) in Camembert and XLMRoberta tokenizers.\r\n\r\nFor posterity, I think Thomas means this:\r\n```\r\nhttps://huggingface.co/transformers/v4.6.0/_modules/transformers/models/camembert/tokenization_camembert.html\r\nhttps://huggingface.co/transformers/v3.5.1/_modules/transformers/tokenization_xlm_roberta.html\r\n```\r\n"
] | 1,585 | 1,651 | 1,591 | CONTRIBUTOR | null | I am unable to set bos_token_id=0 for a new SentencePiece tokenizer (MBART).
Here is what I'm doing:
```bash
wget https://s3.amazonaws.com/models.huggingface.co/bert/facebook/mbart-large-en-ro/sentence.bpe.model
```
```python
from transformers import T5Tokenizer
vocab_file = 'sentence.bpe.model'
t2 = T5Tokenizer(vocab_file, bos_token='<s>', bos_token_id=0)
t2.bos_token_id # => 1
```
The following also returns 1
```python
t2 = T5Tokenizer(vocab_file, bos_token='<s>', bos_token_id=0,
additional_special_tokens=['<s>'])
t2.bos_token_id
```
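For reference, the comments below point to the offset trick used in `CamembertTokenizer` and `XLMRobertaTokenizer` for fairseq-trained sentencepiece vocabularies. A minimal sketch of that idea (the special-token ids and the offset of 1 are assumptions for illustration, not MBART's verified mapping):

```python
import sentencepiece as spm

sp_model = spm.SentencePieceProcessor()
sp_model.Load("sentence.bpe.model")

# fairseq reserves the first ids for its own special tokens, so every
# sentencepiece id is shifted by a fixed offset (values are illustrative)
fairseq_tokens_to_ids = {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3}
fairseq_offset = 1

def encode_as_fairseq_ids(text):
    pieces = sp_model.EncodeAsPieces(text)
    return [
        fairseq_tokens_to_ids.get(piece, sp_model.PieceToId(piece) + fairseq_offset)
        for piece in pieces
    ]
```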
Help much appreciated! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3564/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3563/comments | https://api.github.com/repos/huggingface/transformers/issues/3563/events | https://github.com/huggingface/transformers/pull/3563 | 591,597,278 | MDExOlB1bGxSZXF1ZXN0Mzk2NzQxNDgz | 3,563 | update run_language_modeling.py for high efficiency in Multi GPUs | {
"login": "guoday",
"id": 40300434,
"node_id": "MDQ6VXNlcjQwMzAwNDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/40300434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guoday",
"html_url": "https://github.com/guoday",
"followers_url": "https://api.github.com/users/guoday/followers",
"following_url": "https://api.github.com/users/guoday/following{/other_user}",
"gists_url": "https://api.github.com/users/guoday/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guoday/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guoday/subscriptions",
"organizations_url": "https://api.github.com/users/guoday/orgs",
"repos_url": "https://api.github.com/users/guoday/repos",
"events_url": "https://api.github.com/users/guoday/events{/privacy}",
"received_events_url": "https://api.github.com/users/guoday/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | NONE | null | The code "model(inputs, masked_lm_labels=labels)" returns all model outputs, which causes out-of-memory errors on the GPU in the train() function. After modifying the code, the batch size per GPU increases from 4 to 32 on multiple GPUs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3563/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3563",
"html_url": "https://github.com/huggingface/transformers/pull/3563",
"diff_url": "https://github.com/huggingface/transformers/pull/3563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3563.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3562/comments | https://api.github.com/repos/huggingface/transformers/issues/3562/events | https://github.com/huggingface/transformers/issues/3562 | 591,595,629 | MDU6SXNzdWU1OTE1OTU2Mjk= | 3,562 | can not init tokenizers from third party model , on albert model | {
"login": "aohan237",
"id": 3992281,
"node_id": "MDQ6VXNlcjM5OTIyODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3992281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aohan237",
"html_url": "https://github.com/aohan237",
"followers_url": "https://api.github.com/users/aohan237/followers",
"following_url": "https://api.github.com/users/aohan237/following{/other_user}",
"gists_url": "https://api.github.com/users/aohan237/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aohan237/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aohan237/subscriptions",
"organizations_url": "https://api.github.com/users/aohan237/orgs",
"repos_url": "https://api.github.com/users/aohan237/repos",
"events_url": "https://api.github.com/users/aohan237/events{/privacy}",
"received_events_url": "https://api.github.com/users/aohan237/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, could you specify which version of `transformers` you're running?",
"I encountered the same problem when using Albert. @voidful \r\n```\r\nAutoTokenizer.from_pretrained('voidful/albert_chinese_xxlarge')\r\n```\r\nwill raise\r\n```\r\n04/04/2020 14:21:28 - INFO - Model name 'voidful/albert_chinese_xxlarge' not found in model shortcut name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). Assuming 'voidful/albert_chinese_xxlarge' is a path, a model identifier, or url to a directory containing tokenizer files.\r\nTraceback (most recent call last):\r\n File \"preprocess.py\", line 353, in <module>\r\n main()\r\n File \"preprocess.py\", line 303, in main\r\n tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path, do_lower_case=not args.cased, cache_dir=args.cache_dir)\r\n File \"/data0/username/anaconda3/lib/python3.7/site-packages/transformers/tokenization_auto.py\", line 192, in from_pretrained\r\n return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/data0/username/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 393, in from_pretrained\r\n return cls._from_pretrained(*inputs, **kwargs)\r\n File \"/data0/username/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 496, in _from_pretrained\r\n list(cls.vocab_files_names.values()),\r\nOSError: Model name 'voidful/albert_chinese_xxlarge' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_xxlarge' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.\r\n```\r\n```\r\n>>> torch.__version__\r\n'1.3.1'\r\n>>> transformers.__version__\r\n'2.7.0'\r\n```\r\n",
"@LysandreJik @WiseDoge \r\nthe problem is that the model type is different with the tokenizer type.\r\neg. the model use albert model type and tokenizer use bert tokenizer, so the autoken class won know about it\r\n\r\nyou should let others can specify the tokenizer class or tokenizer model type if nessuary",
"waiting for confirm or feature requests",
"Thank you. I use BERT tokenizer instead, and it works.",
"Since sentencepiece is not used in albert_chinese model\r\nyou have to call BertTokenizer instead of AlbertTokenizer !!! we can eval it using an example on MaskedLM\r\n\r\n[colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj) \r\n```python\r\nfrom transformers import *\r\nimport torch\r\nfrom torch.nn.functional import softmax\r\n\r\npretrained = 'voidful/albert_chinese_large'\r\ntokenizer = BertTokenizer.from_pretrained(pretrained)\r\nmodel = AlbertForMaskedLM.from_pretrained(pretrained)\r\n\r\ninputtext = \"今天[MASK]情很好\"\r\n\r\nmaskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids, masked_lm_labels=input_ids)\r\nloss, prediction_scores = outputs[:2]\r\nlogit_prob = softmax(prediction_scores[0, maskpos]).data.tolist()\r\npredicted_index = torch.argmax(prediction_scores[0, maskpos]).item()\r\npredicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\nprint(predicted_token,logit_prob[predicted_index])\r\n```\r\nResult: `心 0.9422469735145569`",
"close for now",
"> Since sentencepiece is not used in albert_chinese model\r\n> you have to call BertTokenizer instead of AlbertTokenizer !!! we can eval it using an example on MaskedLM\r\n> \r\n> [colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)\r\n> \r\n> ```python\r\n> from transformers import *\r\n> import torch\r\n> from torch.nn.functional import softmax\r\n> \r\n> pretrained = 'voidful/albert_chinese_large'\r\n> tokenizer = BertTokenizer.from_pretrained(pretrained)\r\n> model = AlbertForMaskedLM.from_pretrained(pretrained)\r\n> \r\n> inputtext = \"今天[MASK]情很好\"\r\n> \r\n> maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)\r\n> \r\n> input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\n> outputs = model(input_ids, masked_lm_labels=input_ids)\r\n> loss, prediction_scores = outputs[:2]\r\n> logit_prob = softmax(prediction_scores[0, maskpos]).data.tolist()\r\n> predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()\r\n> predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n> print(predicted_token,logit_prob[predicted_index])\r\n> ```\r\n> \r\n> Result: `心 0.9422469735145569`\r\n\r\nI have tried this code\r\nfrom transformers import TFAutoModel, BertTokenizer\r\npretrained = 'voidful/albert_chinese_xlarge'\r\ntokenizer = BertTokenizer.from_pretrained(pretrained)\r\nmodel = TFAutoModel.from_pretrained(pretrained)\r\n\r\ninputs = tokenizer(\"我喜欢你!\", return_tensors=\"tf\")\r\noutputs = model(**inputs)\r\n\r\nprint(outputs)\r\n\r\nit encounters \r\n\r\nOSError: Can't load weights for 'voidful/albert_chinese_xlarge'. Make sure that:\r\n- 'voidful/albert_chinese_xlarge' is a correct model identifier listed on 'https://huggingface.co/models'\r\n- or 'voidful/albert_chinese_xlarge' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.",
"> > Since sentencepiece is not used in albert_chinese model\r\n> > you have to call BertTokenizer instead of AlbertTokenizer !!! we can eval it using an example on MaskedLM\r\n> > [colab trial](https://colab.research.google.com/drive/1Wjz48Uws6-VuSHv_-DcWLilv77-AaYgj)\r\n> > ```python\r\n> > from transformers import *\r\n> > import torch\r\n> > from torch.nn.functional import softmax\r\n> > \r\n> > pretrained = 'voidful/albert_chinese_large'\r\n> > tokenizer = BertTokenizer.from_pretrained(pretrained)\r\n> > model = AlbertForMaskedLM.from_pretrained(pretrained)\r\n> > \r\n> > inputtext = \"今天[MASK]情很好\"\r\n> > \r\n> > maskpos = tokenizer.encode(inputtext, add_special_tokens=True).index(103)\r\n> > \r\n> > input_ids = torch.tensor(tokenizer.encode(inputtext, add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\n> > outputs = model(input_ids, masked_lm_labels=input_ids)\r\n> > loss, prediction_scores = outputs[:2]\r\n> > logit_prob = softmax(prediction_scores[0, maskpos]).data.tolist()\r\n> > predicted_index = torch.argmax(prediction_scores[0, maskpos]).item()\r\n> > predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n> > print(predicted_token,logit_prob[predicted_index])\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > Result: `心 0.9422469735145569`\r\n> \r\n> I have tried this code\r\n> from transformers import TFAutoModel, BertTokenizer\r\n> pretrained = 'voidful/albert_chinese_xlarge'\r\n> tokenizer = BertTokenizer.from_pretrained(pretrained)\r\n> model = TFAutoModel.from_pretrained(pretrained)\r\n> \r\n> inputs = tokenizer(\"我喜欢你!\", return_tensors=\"tf\")\r\n> outputs = model(**inputs)\r\n> \r\n> print(outputs)\r\n> \r\n> it encounters\r\n> \r\n> OSError: Can't load weights for 'voidful/albert_chinese_xlarge'. Make sure that:\r\n> \r\n> * 'voidful/albert_chinese_xlarge' is a correct model identifier listed on 'https://huggingface.co/models'\r\n> * or 'voidful/albert_chinese_xlarge' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.\r\n\r\nYou need to add `from_pt=True` in order to load a pytorch checkpoint.\r\n```python\r\nfrom transformers import TFAutoModel, BertTokenizer\r\npretrained = './albert_chinese_tiny'\r\ntokenizer = BertTokenizer.from_pretrained(pretrained)\r\nmodel = TFAutoModel.from_pretrained(pretrained, from_pt=True)\r\n\r\ninputs = tokenizer(\"我喜欢你!\", return_tensors=\"tf\")\r\noutputs = model(**inputs)\r\n```"
] | 1,585 | 1,627 | 1,586 | NONE | null | # 🐛 Bug
## Information
Model I am using: ALBERT (voidful/albert_chinese_tiny)
Language I am using the model on (English, Chinese ...): Chinese
The problem arises when using:
* [x] the official example scripts: (give details below)
Follow the instructions on:
`https://huggingface.co/models`
For example, using the "voidful/albert_chinese_tiny" model,
`AutoTokenizer.from_pretrained('voidful/albert_chinese_tiny')`
will raise
` Model name 'voidful/albert_chinese_tiny' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'voidful/albert_chinese_tiny' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3562/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3561/comments | https://api.github.com/repos/huggingface/transformers/issues/3561/events | https://github.com/huggingface/transformers/issues/3561 | 591,580,049 | MDU6SXNzdWU1OTE1ODAwNDk= | 3,561 | Evaluation of labelled test set? | {
"login": "Mahmedturk",
"id": 48975334,
"node_id": "MDQ6VXNlcjQ4OTc1MzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/48975334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mahmedturk",
"html_url": "https://github.com/Mahmedturk",
"followers_url": "https://api.github.com/users/Mahmedturk/followers",
"following_url": "https://api.github.com/users/Mahmedturk/following{/other_user}",
"gists_url": "https://api.github.com/users/Mahmedturk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mahmedturk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mahmedturk/subscriptions",
"organizations_url": "https://api.github.com/users/Mahmedturk/orgs",
"repos_url": "https://api.github.com/users/Mahmedturk/repos",
"events_url": "https://api.github.com/users/Mahmedturk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mahmedturk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you could use model.eval() or load pre-tuned model and run against test set\r\n\t# Load a trained model and vocabulary that you have fine-tuned\r\n\tmodel = model_class.from_pretrained(output_dir)\r\n\ttokenizer = tokenizer_class.from_pretrained(output_dir)\r\nSee this post: https://mccormickml.com/2019/07/22/BERT-fine-tuning/#a1-saving--loading-fine-tuned-model\r\n",
"+1\r\n\r\nIs there something similar for testing to the `--do_train` or `--do_eval` flag in the glue examples?",
"@Mahmedturk I tried it like this now:\r\n\r\n```python\r\nfrom transformers import BertTokenizer, BertForSequenceClassification\r\nimport torch\r\nimport pandas as pd\r\n\r\ntokenizer = BertTokenizer.from_pretrained('./my-model')\r\nmodel = BertForSequenceClassification.from_pretrained('./my-model')\r\nlabels = [ ... ]\r\n\r\ndf_test = pd.read_csv('./my-data/test.tsv', sep='\\t', names=['label', 'sentence'])\r\ndf_test['prediction'] = None\r\n\r\nfor row in df_test.itertuples():\r\n inputs = tokenizer.encode_plus(row.sentence, add_special_tokens=True, return_tensors='pt')\r\n pred = model(inputs['input_ids'], token_type_ids=inputs['token_type_ids'])[0].argmax().item()\r\n df_test.loc[row.Index, 'prediction'] = labels[pred]\r\n```\r\n\r\nThen you can filter the pandas dataframe, i.e. `df_test[df_test['label'] == df_test['prediction']]` to see the true positives.",
"@olastor is there a way to print confusion matrix?\r\n",
"Also for the evaluation set, i get the following three metrics. What is \"acc_and_f1\" in the below?\r\n\r\n\r\nacc = 0.9455445544554455\r\nacc_and_f1 = 0.8709204253758709\r\nf1 = 0.7962962962962963\r\n",
"@Mahmedturk Not with a built in function in my example, but manually. Let's say you have the labels `True` and `False`, then the correct way to calculate the absolute values of the confusion matrix would be like this I think:\r\n\r\n- number of true positves: `len(df[(df['label'] == True]) & (df['prediction'] == True)])`\r\n- number of false positves: `len(df[(df['label'] == False]) & (df['prediction'] == True)])`\r\n- number of false negatives: `len(df[(df['label'] == True]) & (df['prediction'] == False)])`\r\n- number of true negatives: `len(df[(df['label'] == False]) & (df['prediction'] == False)])`",
"@Mahmedturk From [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/metrics/__init__.py#L41):\r\n\r\n```python\r\ndef acc_and_f1(preds, labels):\r\n acc = simple_accuracy(preds, labels)\r\n f1 = f1_score(y_true=labels, y_pred=preds)\r\n return {\r\n \"acc\": acc,\r\n \"f1\": f1,\r\n \"acc_and_f1\": (acc + f1) / 2, # <---\r\n }\r\n```",
"@sunyangfu \r\nAfter loading the saved model and vocabulary how do i run against the test set?\r\nSorry if this sounds silly. I am very new to PyTorch and deep learning.\r\nThe given link shows how to test on CoLa dataset which has only one sentence. Whereas in QQP dataset there are two sentences. What changes do i need to make in the code to test it with QQP dataset?",
"\r\n@Mahmedturk Here you can run against the test set:\r\nAfter loading the model, you can code whatever you want to fit the model.\r\n```python\r\noutput_dir = './saved_model_dir/'\r\n\r\nMODEL_CLASSES = {\r\n 'bert': (BertConfig, BertForSequenceClassification, BertTokenizer),\r\n 'xlnet': (XLNetConfig, XLNetForSequenceClassification, XLNetTokenizer),\r\n 'xlm': (XLMConfig, XLMForSequenceClassification, XLMTokenizer),\r\n 'roberta': (RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer),\r\n 'distilbert': (DistilBertConfig, DistilBertForSequenceClassification, DistilBertTokenizer),\r\n 'albert': (AlbertConfig, AlbertForSequenceClassification, AlbertTokenizer)\r\n}\t\r\n\r\n# Config class and load a trained model and vocabulary \r\nconfig_class, model_class, tokenizer_class = MODEL_CLASSES['bert']\t\r\nmodel = model_class.from_pretrained(output_dir)\r\ntokenizer = tokenizer_class.from_pretrained(output_dir)\r\n\r\n# Copy the model to the GPU.\r\nmodel.to(device)\r\n\r\n# Put model in evaluation mode\r\nmodel.eval()\r\n\r\n#Then you do test data pre-processing and apply the model on test data\r\nwith torch.no_grad():\r\n outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask)\r\n\r\n\r\n```",
"Hi @olastor,\r\n\r\nI have tried your method, it returns the below error\r\n\r\n df_test.loc[row.Index, 'prediction'] = labels[pred]\r\nIndexError: list index out of range. ",
"@Mahmedturk Did you update the list of labels in my example for your task?",
"labels = df_test.label.values",
"> labels = df_test.label.values\r\n\r\nIt needs to be the same set of labels used for training.",
"@olastor thanks.\r\n"
] | 1,585 | 1,586 | 1,586 | NONE | null | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarily intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiasts can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi,
I have a labelled test set of the QQP dataset. What arguments do I need to pass if I want to report accuracy and F1 on the test set, not just the development set?
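One pragmatic workaround (an assumption based on how the GLUE processors read `dev.tsv` during `--do_eval`, not an official flag) is to stage the labelled test split as the dev file. The paths and the fine-tuned model directory below are hypothetical:

```bash
# let the QQP processor read the labelled test split as its "dev" file
mkdir -p QQP_test_as_dev
cp glue_data/QQP/train.tsv QQP_test_as_dev/train.tsv
cp glue_data/QQP/test.tsv  QQP_test_as_dev/dev.tsv

python run_glue.py \
    --model_type bert \
    --model_name_or_path ./fine_tuned_qqp \
    --task_name QQP \
    --do_eval \
    --data_dir ./QQP_test_as_dev \
    --max_seq_length 128 \
    --output_dir ./test_eval
```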
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3561/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3560/comments | https://api.github.com/repos/huggingface/transformers/issues/3560/events | https://github.com/huggingface/transformers/issues/3560 | 591,530,669 | MDU6SXNzdWU1OTE1MzA2Njk= | 3,560 | Mean reduce over last hidden state | {
"login": "celsofranssa",
"id": 11181748,
"node_id": "MDQ6VXNlcjExMTgxNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11181748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/celsofranssa",
"html_url": "https://github.com/celsofranssa",
"followers_url": "https://api.github.com/users/celsofranssa/followers",
"following_url": "https://api.github.com/users/celsofranssa/following{/other_user}",
"gists_url": "https://api.github.com/users/celsofranssa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/celsofranssa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/celsofranssa/subscriptions",
"organizations_url": "https://api.github.com/users/celsofranssa/orgs",
"repos_url": "https://api.github.com/users/celsofranssa/repos",
"events_url": "https://api.github.com/users/celsofranssa/events{/privacy}",
"received_events_url": "https://api.github.com/users/celsofranssa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | One of the outputs of [TFBert](https://huggingface.co/transformers/model_doc/bert.html#transformers.TFBertModel.call) is the `last_hidden_state`, which is a tensor of shape `(batch_size, sequence_length, hidden_size)`.
How could someone proceed to compute the `mean pooling` of the valid embeddings? I mean, since the `attention_mask` avoids performing attention on padding token indices, it can be used as a weight to average only over the real input embeddings, ignoring the pad embeddings.
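A minimal TensorFlow sketch of such a masked mean pooling (the helper name is mine; it assumes `attention_mask` is 1 for real tokens and 0 for padding):

```python
import tensorflow as tf

def masked_mean_pool(last_hidden_state, attention_mask):
    # last_hidden_state: (batch_size, seq_len, hidden_size)
    # attention_mask:    (batch_size, seq_len), 1 for real tokens, 0 for pads
    mask = tf.cast(attention_mask, last_hidden_state.dtype)[:, :, tf.newaxis]
    summed = tf.reduce_sum(last_hidden_state * mask, axis=1)  # pads zeroed out
    counts = tf.maximum(tf.reduce_sum(mask, axis=1), 1e-9)    # real-token counts
    return summed / counts                                    # (batch_size, hidden_size)
```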
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3560/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3559/comments | https://api.github.com/repos/huggingface/transformers/issues/3559/events | https://github.com/huggingface/transformers/issues/3559 | 591,525,697 | MDU6SXNzdWU1OTE1MjU2OTc= | 3,559 | How to trace the BertForQuestionAnswering | {
"login": "stu1130",
"id": 6792331,
"node_id": "MDQ6VXNlcjY3OTIzMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6792331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stu1130",
"html_url": "https://github.com/stu1130",
"followers_url": "https://api.github.com/users/stu1130/followers",
"following_url": "https://api.github.com/users/stu1130/following{/other_user}",
"gists_url": "https://api.github.com/users/stu1130/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stu1130/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stu1130/subscriptions",
"organizations_url": "https://api.github.com/users/stu1130/orgs",
"repos_url": "https://api.github.com/users/stu1130/repos",
"events_url": "https://api.github.com/users/stu1130/events{/privacy}",
"received_events_url": "https://api.github.com/users/stu1130/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I found a workaround solution, just to pass the complete inputs including `input_ids, token_type_ids and attention_mask` to the trace method and invoke forward along with those 3 inputs. \r\nBut it would be great to know how I can just pass in input_ids and token_type_ids",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | I followed the example [here](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering) and want to convert BertForQuestionAnswering to TorchScript.
Here is my code:
```python
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad', torchscript=True)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_ids = tokenizer.encode(question, text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
input_tensor = torch.tensor([input_ids])
token_type_ids_tensor = torch.tensor([token_type_ids])
# The way I traced the model could be wrong here
traced_model = torch.jit.trace(model, (input_tensor, token_type_ids_tensor))
traced_model.eval()
start_scores, end_scores = traced_model(input_tensor, token_type_ids_tensor)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
print(answer)
```
The answer I got is `jim henson was a nice puppet [SEP]`
Do you have any idea how to make it correct?
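For what it's worth, a sketch of the workaround from the comments below: trace and call with all three inputs, since `BertForQuestionAnswering.forward` takes `(input_ids, attention_mask, token_type_ids)` positionally, so the trace above actually passes `token_type_ids` where the attention mask is expected.

```python
# trace with the full (input_ids, attention_mask, token_type_ids) signature
attention_mask = torch.ones_like(input_tensor)
traced_model = torch.jit.trace(
    model, (input_tensor, attention_mask, token_type_ids_tensor)
)
start_scores, end_scores = traced_model(
    input_tensor, attention_mask, token_type_ids_tensor
)
```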
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3558/comments | https://api.github.com/repos/huggingface/transformers/issues/3558/events | https://github.com/huggingface/transformers/issues/3558 | 591,492,933 | MDU6SXNzdWU1OTE0OTI5MzM= | 3,558 | Metrics are coupled to the run_glue.py tasks. | {
"login": "rosenjcb",
"id": 8102129,
"node_id": "MDQ6VXNlcjgxMDIxMjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8102129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rosenjcb",
"html_url": "https://github.com/rosenjcb",
"followers_url": "https://api.github.com/users/rosenjcb/followers",
"following_url": "https://api.github.com/users/rosenjcb/following{/other_user}",
"gists_url": "https://api.github.com/users/rosenjcb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rosenjcb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rosenjcb/subscriptions",
"organizations_url": "https://api.github.com/users/rosenjcb/orgs",
"repos_url": "https://api.github.com/users/rosenjcb/repos",
"events_url": "https://api.github.com/users/rosenjcb/events{/privacy}",
"received_events_url": "https://api.github.com/users/rosenjcb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As you say yourself, the core of the library is to provide you with models (and recently also pipelines and in the future even more exciting things) to build your own projects. The examples show you some implementations that are in themselves usable but that are by no means meant to be exhaustive for all problems. As you indicate yourself, you are invited to adapt these examples to your own use case.\r\n\r\nIf you feel that your changes are useful for the whole community, then feel free to request a PR!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | The metrics used for evaluating a run_glue task are coupled to the task itself (i.e. regression or classification). We wrote a `TrinarySentimentProcessor` which grabs a positive, neutral, or negative sentiment from a text, but we found that the simple accuracy measure was not the right one. We wanted to use log loss (cross entropy), so we added a log loss function to the metrics (`__init__.py`) package and added another elif to the chain.
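A hypothetical sketch of that kind of addition (the task name, probability shape, and function layout are illustrative, not part of the actual metrics module):

```python
from sklearn.metrics import log_loss

def compute_metrics(task_name, probs, labels):
    # probs: predicted class probabilities, shape (n_samples, n_classes)
    if task_name == "trinary-sentiment":
        return {"log_loss": log_loss(labels, probs)}
    raise KeyError(f"Unknown task: {task_name}")
```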
Why is the metric so coupled to the task itself? What if you wanted to use a different metric for any particular task? I understand that `run_glue.py` is an "example" but we've been building upon the architecture to reduce the workload of training for tasks outside of GLUE and SQUAD. Maybe you could add a flag to the `run_glue.py` file like `--metric=cross-entropy`. Any thoughts? Are we just misusing the library by extending `glue.py`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3558/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3557/comments | https://api.github.com/repos/huggingface/transformers/issues/3557/events | https://github.com/huggingface/transformers/pull/3557 | 591,401,720 | MDExOlB1bGxSZXF1ZXN0Mzk2NTgwNDcy | 3,557 | Create model card | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=h1) Report\n> Merging [#3557](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b38d552a92a0a201c005afae0e1b861ae6de9ce0&el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3557 +/- ##\n==========================================\n+ Coverage 76.90% 77.88% +0.97% \n==========================================\n Files 100 100 \n Lines 17127 17127 \n==========================================\n+ Hits 13172 13339 +167 \n+ Misses 3955 3788 -167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.79% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=footer). Last update [b38d552...7e585a0](https://codecov.io/gh/huggingface/transformers/pull/3557?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Create model card for: distilbert-multi-finetuned-for-xqua-on-tydiqa | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3557",
"html_url": "https://github.com/huggingface/transformers/pull/3557",
"diff_url": "https://github.com/huggingface/transformers/pull/3557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3557.patch",
"merged_at": 1585739664000
} |
https://api.github.com/repos/huggingface/transformers/issues/3556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3556/comments | https://api.github.com/repos/huggingface/transformers/issues/3556/events | https://github.com/huggingface/transformers/pull/3556 | 591,276,837 | MDExOlB1bGxSZXF1ZXN0Mzk2NDc0Mzkw | 3,556 | [T5, examples] replace heavy t5 models with tiny random models | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=h1) Report\n> Merging [#3556](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ae6834e028ecdf7fdbe886c1f86d0e02d5fef6f0&el=desc) will **not change** coverage by `%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3556 +/- ##\n=======================================\n Coverage 77.80% 77.80% \n=======================================\n Files 100 100 \n Lines 17064 17064 \n=======================================\n Hits 13277 13277 \n Misses 3787 3787 \n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=footer). Last update [ae6834e...2e14442](https://codecov.io/gh/huggingface/transformers/pull/3556?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | As first done by @sshleifer in #3488, this PR puts a tiny T5 model on S3 to save testing time. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3556/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3556",
"html_url": "https://github.com/huggingface/transformers/pull/3556",
"diff_url": "https://github.com/huggingface/transformers/pull/3556.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3556.patch",
"merged_at": 1585823646000
} |
https://api.github.com/repos/huggingface/transformers/issues/3555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3555/comments | https://api.github.com/repos/huggingface/transformers/issues/3555/events | https://github.com/huggingface/transformers/issues/3555 | 591,268,337 | MDU6SXNzdWU1OTEyNjgzMzc= | 3,555 | T5 for summarization: pipeline x T5ForConditionalGeneration different results | {
"login": "renatoviolin",
"id": 8897786,
"node_id": "MDQ6VXNlcjg4OTc3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8897786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renatoviolin",
"html_url": "https://github.com/renatoviolin",
"followers_url": "https://api.github.com/users/renatoviolin/followers",
"following_url": "https://api.github.com/users/renatoviolin/following{/other_user}",
"gists_url": "https://api.github.com/users/renatoviolin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renatoviolin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renatoviolin/subscriptions",
"organizations_url": "https://api.github.com/users/renatoviolin/orgs",
"repos_url": "https://api.github.com/users/renatoviolin/repos",
"events_url": "https://api.github.com/users/renatoviolin/events{/privacy}",
"received_events_url": "https://api.github.com/users/renatoviolin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @renatoviolin,\r\n\r\nthe T5 pipeline uses special input arguments for the `generate()` function that have been shown to work well for summarization. If you take a look at `task_specific_params` and under `summarization` in T5's config: https://s3.amazonaws.com/models.huggingface.co/bert/t5-base-config.json. You can see the `generate()` arguments that are used for pipeline.\r\n",
"Hi @patrickvonplaten \r\n\r\nThanks for your attention. Now I got the point where I'm doing wrong.\r\nBut the most strange thing that happened is that in third step, after the pipeline was executed, the \r\nmodel.generate() produces similar results (and take long to run) as the pipeline.\r\nAt first glance, given the poor results and how quickly it runs, it seemed to me that the weights were not loaded.",
"I think it's probably because beam search is deactivated, no length penalties, no repeat penalties and a very short max length is used "
] | 1,585 | 1,585 | 1,585 | NONE | null | I've been doing experiments with text summarization and got some different results between pipeline and T5ForConditionalGeneration.
First, I use model.generate() to produce the summary. It runs very fast (even on CPU) and gives poor results.
Second, I use the pipeline, passing the same model I built in the first step. This runs slower and gives very good results.
Third, I re-run the first model.generate(). Now the model runs slower and produces the same result as the pipeline.
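Based on the comments below, the pipeline reads its `generate()` arguments from `task_specific_params["summarization"]` in the model config. A sketch of replicating that manually, assuming `model`, `tokenizer`, and `text` are set up as in the colab (the parameter values are taken from the t5-base config and may differ per checkpoint):

```python
# apply the same generate() arguments the summarization pipeline uses
input_ids = tokenizer.encode("summarize: " + text, return_tensors="pt")
summary_ids = model.generate(
    input_ids,
    num_beams=4,
    length_penalty=2.0,
    max_length=200,
    min_length=30,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0]))
```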
I did a colab so that you can check.
Am I missing something when using the model vs. the pipeline?
https://colab.research.google.com/drive/15HOerw3mYVCsjedW_dVGeyRX5cYWiNvS | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3555/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3554/comments | https://api.github.com/repos/huggingface/transformers/issues/3554/events | https://github.com/huggingface/transformers/issues/3554 | 591,234,777 | MDU6SXNzdWU1OTEyMzQ3Nzc= | 3,554 | resize_token_embeddings error for Transformer-XL | {
"login": "vsieplus",
"id": 35880073,
"node_id": "MDQ6VXNlcjM1ODgwMDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/35880073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vsieplus",
"html_url": "https://github.com/vsieplus",
"followers_url": "https://api.github.com/users/vsieplus/followers",
"following_url": "https://api.github.com/users/vsieplus/following{/other_user}",
"gists_url": "https://api.github.com/users/vsieplus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vsieplus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vsieplus/subscriptions",
"organizations_url": "https://api.github.com/users/vsieplus/orgs",
"repos_url": "https://api.github.com/users/vsieplus/repos",
"events_url": "https://api.github.com/users/vsieplus/events{/privacy}",
"received_events_url": "https://api.github.com/users/vsieplus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @vsieplus ,\r\n\r\nThis is a known bug and sadly we don't have a solution for this now. TransfoXLLMHead uses adaptive weight embeddings which makes it not very easy to implement this function. Should be implemented in the long run though - I will note it down. @thomwolf @LysandreJik ",
"@patrickvonplaten Does the same problem apply to XLNet?",
"No it should not. XLNet uses the standard `nn.embedding` - so it should be fine.",
"Hi, I faced the same issue and wrote some dirty code as a workaround in `modeling_utils.py`. The main idea is to just operate on the last embedding layer:\r\n```\r\ndef _resize_token_embeddings(self, new_num_tokens):\r\n old_embeddings = self.get_input_embeddings()\r\n\r\n if type(self).__name__ == 'TransfoXLModel':\r\n # since the 'TransfoXLModel' has multiple embedding layers, the last layer is resized\r\n new_num_tokens_last = new_num_tokens\r\n for emb_layer in old_embeddings.emb_layers[:-1]:\r\n new_num_tokens_last -= emb_layer.weight.size(0)\r\n\r\n new_embeddings_last = self._get_resized_embeddings(old_embeddings.emb_layers[-1], new_num_tokens_last)\r\n new_embeddings = old_embeddings\r\n new_embeddings.emb_layers[-1] = new_embeddings_last\r\n else:\r\n new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)\r\n\t\t\r\n self.set_input_embeddings(new_embeddings)\r\n return self.get_input_embeddings()\r\n```\r\nIt workes for me (at least I get no error). Can someone confirm that this makes sense? Maybe @patrickvonplaten ?\r\n",
"Sorry for bothering again @patrickvonplaten, but this is important for me: Can you or someone else comment on my \"fix\" above wether it makes sense?\r\nThanks in advance!",
"This looks okay to me, though I think you can patch a custom `_resize_token_embeddings(self, new_num_tokens)` to [`TransfoXLPreTrainedModel`](https://github.com/huggingface/transformers/blob/3e5928c57d57db3071638e6beaec9349a75b6a22/src/transformers/modeling_transfo_xl.py#L451) to avoid making the test (and leave the default behavior for other models).\r\n\r\nActually adding such a method to `TransfoXLPreTrainedModel` would solve this issue AFAICT. Since you wrote it @RafaelWO, you should make a PR with it :-)",
"Thanks for your feedback @sgugger ! I will move the logic into the `TransfoXLPreTrainedModel` and make my first PR :)",
"Out of curiosity, why do you go with\r\n\r\n```\r\nfor emb_layer in old_embeddings.emb_layers[:-1]:\r\n new_num_tokens_last -= emb_layer.weight.size(0)\r\n```\r\n\r\nWouldn't just `emb_layer = old_embeddings.emb_layers[-1]` work out ? Also are `wug` and `wugs` often used ? If they're syntax tokens, which are frequent, you might want to add them to the corresponding embedding group.",
"I think the for loop is to make sure `new_num_tokens_last` is accurate by substracting the other embedding sizes.\r\n\r\nI agree that ideally, the method written on `TransfoXLPreTrainedModel` should have an argument to decide to which embedding layer add the new tokens (with a default to the last one).",
"Yes that's correct @sgugger, thanks for answering.\r\n\r\nI understand the idea of your introduced parameter, but for me the question is whether this makes sense? Because if you add the new token into e.g. the first layer, you would have to insert it also at the same position in your tokenizer and shift all tokens after that.\r\n\r\n@TevenLeScao \r\n> Also are wug and wugs often used ?\r\n\r\nIn my case I want to a `cls_token` which is not included in the pretrained tokenizer.",
"Ah my bad, misread the `:-1` into `-1:`. I've looked again at the `ProjectedAdaptiveLogSoftmax` and adding elsewhere should be fine if you update the `cutoffs` attribute to make sure it takes into account the changed embedding size.\r\n\r\nAdding at the end is a good baseline; the only issue is that you're going to lose out on some of the benefits of the adaptive softmax as you're often going to have to access the bigger softmax layer whereas you usually want to have the frequent tokens (such as `cls`) on smaller ones.",
"> update the cutoffs attribute to make sure it takes into account the changed embedding size.\r\n\r\n> Adding at the end is a good baseline; the only issue is that you're going to lose out on some of the benefits of the adaptive softmax as you're often going to have to access the bigger softmax layer whereas you usually want to have the frequent tokens (such as cls) on smaller ones.\r\n\r\nYes and yes, that's true. \r\n\r\nBut as I mentioned above: if you add such a common token into the first smaller layer and adjust the cutoffs (which would be the preferred way to do), you have a conflict with the tokenizer, because there the new token is at the end and not at position `20001` as in your model (default cutoffs `[20000, 40000, 200000]`).\r\n\r\nOr am I missing something?",
"Yes, that is also going to be a problem, but it shouldn't be too hard to solve with a simple conversion function that shifts the other tokens. The cleanest way to do it would probably be to update the tokenizer yourself but I am not sure how easy that would be. ",
"Thanks a lot @sgugger for answering here! As @sgugger mentioned, it'd be great if you can add a `_resize_token_embeddings()` function to `TransfoXLPreTrainedModel`. \r\n\r\nThe solution looks great to me @vsieplus :-) \r\n\r\nYou could make it a bit more compact, but that's a nitpick: \r\n\r\n```python \r\n embeddings = self.get_input_embeddings()\r\n new_num_tokens_last = new_num_tokens - sum([emb.shape[0] for emb in embeddings.emb_layers[:-1])\r\n new_embeddings_last = self._get_resized_embeddings(embeddings.emb_layers[-1], new_num_tokens_last)\r\n embeddings.emb_layers[-1] = new_embeddings_last\r\n\r\n self.set_input_embeddings(embeddings)\r\n```",
"Hello, I have faced the same issue while using `TFTransfoXLLMHeadModel` class. \r\nAfter initializing `TFTransfoXLLMHeadModel` I attempted to apply `.resize_token_embeddings()`, but it raised `NotImplementedError` error. \r\n\r\nI checked the github [link ](https://github.com/huggingface/transformers/blob/main/src/transformers/models/transfo_xl/modeling_tf_transfo_xl.py) and figured it out that `.resize_token_embeddings()` was really not implemented. \r\n\r\nWhy is that so? Do you guys have any plan to fix this error? "
] | 1,585 | 1,699 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using: Transformer-XL
Language I am using the model on: English
The problem arises when using:
* [x] my own modified scripts: a fine-tuning script for TransfoXLLMHeadModel
## To reproduce
The following code aims to add two new tokens to the vocabulary, 'wug' and 'wugs'. After doing so to the tokenizer, we call `resize_token_embeddings` with the model in order to update its input embeddings to have correct dimension to account for the new tokens.
```python
import torch
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
tokenizer.add_tokens(['wug', 'wugs'])
model.resize_token_embeddings(len(tokenizer))
```
Running the above gives the following error
```
Traceback (most recent call last):
File "bug.py", line 9, in <module>
model.resize_token_embeddings(len(tokenizer))
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/transformers/modeling_utils.py", line 198, in resize_token_embeddings
model_embeds = base_model._resize_token_embeddings(new_num_tokens)
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/transformers/modeling_utils.py", line 213, in _resize_token_embeddings
new_embeddings = self._get_resized_embeddings(old_embeddings, new_num_tokens)
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/transformers/modeling_utils.py", line 234, in _get_resized_embeddings
old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
File "/home/AD/rdsie/anaconda3/envs/lign251/lib/python3.7/site-packages/torch/nn/modules/module.py", line 576, in __getattr__
type(self).__name__, name))
AttributeError: 'AdaptiveEmbedding' object has no attribute 'weight'
```
It seems that the function `resize_token_embeddings()` does not currently account for the particulars of the `AdaptiveEmbedding` input embeddings used by `TransfoXLLMHeadModel`.
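A possible workaround sketch from user code (adapted from the discussion in the comments; it relies on the private helper `_get_resized_embeddings`, leaves the adaptive-softmax cutoffs untouched, and should therefore be treated as a stopgap, not a proper fix):
```python
# Resize only the last layer of the AdaptiveEmbedding: the earlier layers
# keep their original cutoffs, so new tokens land in the last
# (least-frequent) embedding group.
embeddings = model.get_input_embeddings()
new_num_tokens_last = len(tokenizer) - sum(
    emb.weight.size(0) for emb in embeddings.emb_layers[:-1]
)
embeddings.emb_layers[-1] = model._get_resized_embeddings(
    embeddings.emb_layers[-1], new_num_tokens_last
)
model.set_input_embeddings(embeddings)
```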
## Expected behavior
We expect that `resize_token_embeddings` should handle the appropriate updating of the embedding layers for the new vocabulary size, so that the model can be correctly used with the new tokens.
Thank you in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3554/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3554/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3553/comments | https://api.github.com/repos/huggingface/transformers/issues/3553/events | https://github.com/huggingface/transformers/issues/3553 | 591,184,010 | MDU6SXNzdWU1OTExODQwMTA= | 3,553 | unable to completely load T5 pretrained model; missing/unexpected keys | {
"login": "dhecloud",
"id": 25906470,
"node_id": "MDQ6VXNlcjI1OTA2NDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/25906470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhecloud",
"html_url": "https://github.com/dhecloud",
"followers_url": "https://api.github.com/users/dhecloud/followers",
"following_url": "https://api.github.com/users/dhecloud/following{/other_user}",
"gists_url": "https://api.github.com/users/dhecloud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhecloud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhecloud/subscriptions",
"organizations_url": "https://api.github.com/users/dhecloud/orgs",
"repos_url": "https://api.github.com/users/dhecloud/repos",
"events_url": "https://api.github.com/users/dhecloud/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhecloud/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @dhecloud, \r\n\r\nThanks for you issue :-) \r\nDoes the model still work fine? ",
"> Hi @dhecloud,\r\n> \r\n> Thanks for you issue :-)\r\n> Does the model still work fine?\r\n\r\nHi, thanks for your reply.\r\nUsing the examples provided in the doc, the model works fine. \r\nBefore i used `T5WithLMHeadModel` in version `2.5.1` which did not raise this missing keys warning. After i moved to `T5ForConditionalGeneration` in `2.7.0` there was this warning and my training loss diverged so i thought i might raise this issue in case there was some sort of change in naming in the checkpoint",
"I'm gonna take a look :-) ",
"Hi guys, \r\nAny news on this?\r\nWhen I try to load t5-base I receive this:\r\n\r\n\r\nINFO:transformers.modeling_utils:Weights of T5ForConditionalGeneration not initialized from pretrained model: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']",
"> Hi guys,\r\n> Any news on this?\r\n> When I try to load t5-base I receive this:\r\n> \r\n> INFO:transformers.modeling_utils:Weights of T5ForConditionalGeneration not initialized from pretrained model: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']\r\n\r\ni think it's mostly a harmless misplaced error. The model should still work fine. You can test it by trying out the examples ",
"Yeah this should not be a problem, all these weights are `[encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']` are weights tied to the input embedding matrix and therefore don't need to be initialized.",
"How can we silence the error?",
"It should be enough to lower the cli-logger",
"@sgugger @patrickvonplaten this seems like a very old post, but I just got this warning:\r\n\r\n```\r\n[WARNING|trainer.py:2231] 2023-11-23 23:20:29,636 >> There were missing keys in the checkpoint model loaded: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']. \r\n```\r\n\r\n on `4.35.2` trying to restart my training from ckpt after my training run crashed. I'm working with FLAN-T5 base. I restarted by passing `resume_from_checkpoint=True` to the `Trainer`.\r\n \r\n I think there was recently an attempt to migrate to `safetensors` so maybe that's why the warning crept in there? Also, rather strangely, the progress training bar started from `0` steps ... I do remember that the `Trainer` used to forward the data loader to the point where you left off when you stopped training and such tricks, but nothing really indicates that this is happening or that indeed I am restarting from my checkpoint ...\r\n \r\n UPDATE: Just reading through the `ignore_data_skip` docs, I'm pretty sure that I haven't started training from the checkpoint, training started rather quickly so I betting the farm on no batch being skipped...\r\n\r\nUPDATE 2: I should expect that to skip batches the trainer state would have to be saved (right?). So given that my code failed at `L2407` (so during `_save_checkpoint`) because I mistakenly passed a metric that I was not calculating as the metric for best model, I presume this elucidates the mystery of not skipping batches. The question remains, has the training started from the checkpoint or not?\r\n\r\n\r\n ",
"> Yeah this should not be a problem, all these weights are `[encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']` are weights tied to the input embedding matrix and therefore don't need to be initialized.\r\n\r\nHello, what do you mean are \"tied to the input\"? \r\n \r\nI am trying to convert a flax finetuned T5 model into pytorch format using these commands:\r\n - `from_flax_model = T5ForConditionalGeneration.from_pretrained(model_path, from_flax=True)`\r\n - `from_flax_model.save_pretrained(folder)`\r\n \r\nand I get the same message: `Some weights of T5ForConditionalGeneration were not initialized from the Flax model and are newly initialized: ['decoder.embed_tokens.weight', 'lm_head.weight', 'encoder.embed_tokens.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.`"
] | 1,585 | 1,706 | 1,588 | NONE | null | # 🐛 Bug
## Information
Model I am using: T5
## To reproduce
```
model, info = T5ForConditionalGeneration.from_pretrained('t5-small', output_loading_info=True)
```
info is
```
{'missing_keys': ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'], 'unexpected_keys': ['encoder.block.0.layer.0.layer_norm.bias', 'encoder.block.0.layer.1.layer_norm.bias', 'encoder.block.1.layer.0.layer_norm.bias', 'encoder.block.1.layer.1.layer_norm.bias', 'encoder.block.2.layer.0.layer_norm.bias', 'encoder.block.2.layer.1.layer_norm.bias', 'encoder.block.3.layer.0.layer_norm.bias', 'encoder.block.3.layer.1.layer_norm.bias', 'encoder.block.4.layer.0.layer_norm.bias', 'encoder.block.4.layer.1.layer_norm.bias', 'encoder.block.5.layer.0.layer_norm.bias', 'encoder.block.5.layer.1.layer_norm.bias', 'encoder.final_layer_norm.bias', 'decoder.block.0.layer.0.layer_norm.bias', 'decoder.block.0.layer.1.layer_norm.bias', 'decoder.block.0.layer.2.layer_norm.bias', 'decoder.block.1.layer.0.layer_norm.bias', 'decoder.block.1.layer.1.layer_norm.bias', 'decoder.block.1.layer.2.layer_norm.bias', 'decoder.block.2.layer.0.layer_norm.bias', 'decoder.block.2.layer.1.layer_norm.bias', 'decoder.block.2.layer.2.layer_norm.bias', 'decoder.block.3.layer.0.layer_norm.bias', 'decoder.block.3.layer.1.layer_norm.bias', 'decoder.block.3.layer.2.layer_norm.bias', 'decoder.block.4.layer.0.layer_norm.bias', 'decoder.block.4.layer.1.layer_norm.bias', 'decoder.block.4.layer.2.layer_norm.bias', 'decoder.block.5.layer.0.layer_norm.bias', 'decoder.block.5.layer.1.layer_norm.bias', 'decoder.block.5.layer.2.layer_norm.bias', 'decoder.final_layer_norm.bias'], 'error_msgs': []}
```
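A quick sanity check (assuming the usual T5 weight tying between the shared embedding, the encoder/decoder embeddings, and the LM head, which would make these missing keys harmless; I'd expect these asserts to pass if so):
```python
# If these point at the same storage, the 'missing' weights were simply
# tied to the shared embedding rather than loaded separately.
shared_ptr = model.shared.weight.data_ptr()
assert model.get_input_embeddings().weight.data_ptr() == shared_ptr
assert model.lm_head.weight.data_ptr() == shared_ptr
```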
## Expected behavior
No keys should be missing or unexpected
## Environment info
- `transformers` version: 2.7.0
- Platform: Ubuntu
- Python version: 3.6
- PyTorch version (GPU?): 1.2.0 (yes)
- Tensorflow version (GPU?): nope
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: nope
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3553/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3552/comments | https://api.github.com/repos/huggingface/transformers/issues/3552/events | https://github.com/huggingface/transformers/pull/3552 | 591,146,676 | MDExOlB1bGxSZXF1ZXN0Mzk2MzY2NjUz | 3,552 | Update README.md | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=h1) Report\n> Merging [#3552](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/83d1fbcff608f84a27234e20d6531b4404dc059e&el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3552 +/- ##\n==========================================\n- Coverage 78.31% 77.81% -0.50% \n==========================================\n Files 100 100 \n Lines 17064 17064 \n==========================================\n- Hits 13363 13278 -85 \n- Misses 3701 3786 +85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3552/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=footer). Last update [83d1fbc...cd4f658](https://codecov.io/gh/huggingface/transformers/pull/3552?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | - Show that the last uploaded version was trained on more data (custom_license files) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3552/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3552",
"html_url": "https://github.com/huggingface/transformers/pull/3552",
"diff_url": "https://github.com/huggingface/transformers/pull/3552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3552.patch",
"merged_at": 1585665635000
} |
https://api.github.com/repos/huggingface/transformers/issues/3551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3551/comments | https://api.github.com/repos/huggingface/transformers/issues/3551/events | https://github.com/huggingface/transformers/issues/3551 | 591,138,598 | MDU6SXNzdWU1OTExMzg1OTg= | 3,551 | Recommended preprocessing steps for english sentences in GPT2 | {
"login": "Damiox",
"id": 599804,
"node_id": "MDQ6VXNlcjU5OTgwNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/599804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Damiox",
"html_url": "https://github.com/Damiox",
"followers_url": "https://api.github.com/users/Damiox/followers",
"following_url": "https://api.github.com/users/Damiox/following{/other_user}",
"gists_url": "https://api.github.com/users/Damiox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Damiox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Damiox/subscriptions",
"organizations_url": "https://api.github.com/users/Damiox/orgs",
"repos_url": "https://api.github.com/users/Damiox/repos",
"events_url": "https://api.github.com/users/Damiox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Damiox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just pre-process them as you would for any other NLP task. This may include: normalisation of punctuation, dealing with HTML/XML, white-space normalisation, language verification. Something that people used to do in the age of word2vec is creating new entities for specific items. For instance, any number could be replaced with all 1s (1234 becomes 1111) to have relatively small vocabulary that still takes the number of characters into account. Same with users (e.g @user@) and mentions and urls (@URL@), and so on. This might still be worth the effort, but not necessarily. In such cases you may want to ensure that these tokens are not split in the tokenizer.\r\n\r\nI hope that you understand that this is a very general question not related to this repository, so I am closing this.",
"@BramVanroy I think I forgot to mention that I'm using the gpt-2 training models from this repo. I'm not re-training gpt-2. I think I shouldn't create new entities for specific terms if that process has not happened during training step. Am I right? ",
"If you are not planning to pretraining the model (and generating a new vocab) then, indeed, you should not try to add new tokens. So in your case you would just need to do some basic normalisation of punctuation, HTML, etc.",
"@BramVanroy Thanks for the answer. I'm directly using gpt-2 pretrained models from https://huggingface.co/models . Specifically the ones that are created by Huggingface. I can't find the preprocessing code that was used when those models were trained... so I can replicate the same at inference time. I'm wondering if I should just assume things or it'd be safer to see how it was trained so I can make sure to prepare the sentences in a similar way.",
"Has there been any preprocessing during training phase @BramVanroy ? ",
"I don't know. Note that HuggingFace didn't train GPT2. They ported the weights to their own architecture. You can try to get into contact with the people that created it, OpenAI. https://github.com/openai/gpt-2",
"@Damiox have you found the original preprocessing code?",
"@don-prog no, I have not 😢 - I am doing some subset of the preprocessing heuristics that @BramVanroy detailed before when serving the model. But I still think it'd be really good to have a consistency between both preprocessing mechanisms: training (whatever it was) vs inference. I just haven't had the time to identify that original preprocessing code from OpenAI"
] | 1,585 | 1,596 | 1,585 | NONE | null | # ❓ Questions & Help
To run inference on English sentences, I'm not really sure what preprocessing steps I need to apply before sending the tokenized text to GPT-2 in order to get predictions. Any advice?
How should I handle non-English words that appear in English sentences? Extra punctuation, spaces, new lines, user mentions, hashtags, URLs, alternative apostrophes, etc.?
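To make the question concrete, here is the kind of minimal normalization I currently apply (these heuristics are my own assumptions, not GPT-2's official preprocessing, which I could not find):
```python
import re

def normalize(text: str) -> str:
    # Collapse runs of whitespace (spaces, tabs, newlines) into one space.
    text = re.sub(r"\s+", " ", text)
    # Map a few alternative apostrophes/quotes to their ASCII form.
    text = text.replace("\u2019", "'").replace("\u2018", "'")
    # Drop non-printable control characters.
    text = "".join(ch for ch in text if ch.isprintable())
    return text.strip()
```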
## Details
**A link to original question on Stack Overflow**: https://stackoverflow.com/questions/60807799/gpt2-huggingfaces-transformer-preprocessing | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3551/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3550/comments | https://api.github.com/repos/huggingface/transformers/issues/3550/events | https://github.com/huggingface/transformers/pull/3550 | 590,972,770 | MDExOlB1bGxSZXF1ZXN0Mzk2MjIwNTQ5 | 3,550 | [T5, Testst] Add extensive hard-coded integration tests and make sure PT and TF give equal results | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=h1) Report\n> Merging [#3550](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6f5a12a5833d1e3783e4b8a42cb556b64085745e&el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3550 +/- ##\n==========================================\n- Coverage 78.30% 77.80% -0.50% \n==========================================\n Files 100 100 \n Lines 17062 17062 \n==========================================\n- Hits 13360 13275 -85 \n- Misses 3702 3787 +85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3550/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=footer). Last update [6f5a12a...c3ce2fe](https://codecov.io/gh/huggingface/transformers/pull/3550?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | A direct comparison to Google's official model seems quite hard; not sure if that's absolutely needed @thomwolf, @craffel
But some integration tests for T5 would be very nice, to be sure that changes in the future will not break T5.
This PR adds hard-coded integration tests, where the input for summarization is copied from Bart's summarization tests and the input for translation is taken from Appendix D of the official [paper](https://arxiv.org/pdf/1910.10683.pdf)
Checking the expected output for PT, one can see that the output looks quite good!
- [x] Add PyTorch integration tests
- [x] Verify quality (subjectively for the moment)
- [x] Add TF integration tests
- [x] Same output for PT and TF
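For illustration, this is the shape of such a hard-coded integration test (a sketch only, not the exact test added in this PR; the expected string is the usual T5 doc example and should be re-verified against the model output):
```python
import unittest
from transformers import T5ForConditionalGeneration, T5Tokenizer

class T5IntegrationTest(unittest.TestCase):
    def test_translation_en_to_de(self):
        tokenizer = T5Tokenizer.from_pretrained("t5-small")
        model = T5ForConditionalGeneration.from_pretrained("t5-small")
        input_ids = tokenizer.encode(
            "translate English to German: That is good.", return_tensors="pt"
        )
        output = model.generate(input_ids, num_beams=4, max_length=50)
        decoded = tokenizer.decode(output[0], skip_special_tokens=True)
        self.assertEqual(decoded, "Das ist gut.")
```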
UPDATE:
- Found a big bug in TF Beam Search generation (see comment) -> this PR fixes it | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3550/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3550",
"html_url": "https://github.com/huggingface/transformers/pull/3550",
"diff_url": "https://github.com/huggingface/transformers/pull/3550.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3550.patch",
"merged_at": 1585756894000
} |
https://api.github.com/repos/huggingface/transformers/issues/3549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3549/comments | https://api.github.com/repos/huggingface/transformers/issues/3549/events | https://github.com/huggingface/transformers/issues/3549 | 590,941,204 | MDU6SXNzdWU1OTA5NDEyMDQ= | 3,549 | model name '../data/bert_models/chinese_finetuned_lm/pytorch_model.bin' was not found in model name list . Creating an empty model card. | {
"login": "zbbwss",
"id": 42829645,
"node_id": "MDQ6VXNlcjQyODI5NjQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/42829645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zbbwss",
"html_url": "https://github.com/zbbwss",
"followers_url": "https://api.github.com/users/zbbwss/followers",
"following_url": "https://api.github.com/users/zbbwss/following{/other_user}",
"gists_url": "https://api.github.com/users/zbbwss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zbbwss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zbbwss/subscriptions",
"organizations_url": "https://api.github.com/users/zbbwss/orgs",
"repos_url": "https://api.github.com/users/zbbwss/repos",
"events_url": "https://api.github.com/users/zbbwss/events{/privacy}",
"received_events_url": "https://api.github.com/users/zbbwss/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | ```python
nlp = pipeline(
    'fill-mask',
    # model=args.bert_model_path,
    # config=args.bert_config_path,
    # tokenizer=args.bert_model_dir
    model='../data/bert_models/chinese_finetuned_lm/pytorch_model.bin',
    config='../data/bert_models/chinese_finetuned_lm/config.json',
    tokenizer='../data/bert_models/chinese_finetuned_lm/'
)
```
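For comparison, a loading sketch that points `model` at the model directory instead of the `pytorch_model.bin` file (an assumption on my part, not verified here; it presumes the directory contains `config.json`, `pytorch_model.bin`, and the vocab files, and the masked sentence is just an example):
```python
from transformers import pipeline

nlp = pipeline(
    'fill-mask',
    model='../data/bert_models/chinese_finetuned_lm/',
    tokenizer='../data/bert_models/chinese_finetuned_lm/'
)
print(nlp('巴黎是[MASK]国的首都。'))
```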
I use a fine-tuned model (Chinese BERT); when fine-tuning ends, I cannot load the model with the first call above! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3549/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3548/comments | https://api.github.com/repos/huggingface/transformers/issues/3548/events | https://github.com/huggingface/transformers/issues/3548 | 590,935,683 | MDU6SXNzdWU1OTA5MzU2ODM= | 3,548 | How to extract "contiguous tokens" from `NerPipeline` results? | {
"login": "enzoampil",
"id": 39557688,
"node_id": "MDQ6VXNlcjM5NTU3Njg4",
"avatar_url": "https://avatars.githubusercontent.com/u/39557688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enzoampil",
"html_url": "https://github.com/enzoampil",
"followers_url": "https://api.github.com/users/enzoampil/followers",
"following_url": "https://api.github.com/users/enzoampil/following{/other_user}",
"gists_url": "https://api.github.com/users/enzoampil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enzoampil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enzoampil/subscriptions",
"organizations_url": "https://api.github.com/users/enzoampil/orgs",
"repos_url": "https://api.github.com/users/enzoampil/repos",
"events_url": "https://api.github.com/users/enzoampil/events{/privacy}",
"received_events_url": "https://api.github.com/users/enzoampil/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What do you think @mfuntowicz?",
"[Related issue ](https://github.com/huggingface/transformers/issues/2488)",
"Actually, I realized that instead of `index`, even better would be the word piece `offsets`, similar to that returned by `tokenizer.encode` from the huggingface [tokenizers package](https://github.com/huggingface/tokenizers).\r\n\r\nWill get to implementing this within the week if this isn't supported yet!",
"This recently merged [PR](https://github.com/huggingface/transformers/pull/3957) should solve this issue 🙂 "
] | 1,585 | 1,589 | 1,589 | CONTRIBUTOR | null | Using `NerPipeline`, I want to be able to input a string (sequence of tokens), and extract *entity groups*, where an entity group is a contiguous series of tokens, having the same *entity type*.
**Example:**
For the ner code below:
```
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
# Allocate a pipeline for sentiment-analysis
model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
nlp = pipeline('ner', model=model, tokenizer=tokenizer)
nlp('Enzo works at the Australian National University (AUN)')
```
This returns:
```
[{'entity': 'I-PER', 'score': 0.9983270168304443, 'word': 'En'},
{'entity': 'I-PER', 'score': 0.9952995777130127, 'word': '##zo'},
{'entity': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian'},
{'entity': 'I-ORG', 'score': 0.9967807531356812, 'word': 'National'},
{'entity': 'I-ORG', 'score': 0.9959043264389038, 'word': 'University'},
{'entity': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AU'},
{'entity': 'I-ORG', 'score': 0.9763911366462708, 'word': '##N'}]
```
Whereas I want it to return something like:
```
[{'entity_group': 'I-PER', 'score': 0.9983270168304443, 'word': 'Enzo'},
{'entity_group': 'I-ORG', 'score': 0.9984350204467773, 'word': 'Australian National University'},
{'entity_group': 'I-ORG', 'score': 0.9900023937225342, 'word': 'AUN'}]
```
I should be able to write a function that performs the above transformation if the indices of the word pieces are also included in the dictionary output of `NerPipeline`. Is this currently possible? Please advise me if there's already an easy way to do this.
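For illustration, a minimal grouping sketch (my own post-processing, not an existing API; without token indices it can over-merge same-label entities that are not actually adjacent in the text, which is exactly why I'm asking for an `index`/offset field):
```python
from typing import Dict, List

def group_entities(results: List[Dict]) -> List[Dict]:
    # Merge runs of tokens that share the same entity label; word pieces
    # starting with '##' continue the previous word (BERT convention).
    groups: List[Dict] = []
    for token in results:
        if groups and token["entity"] == groups[-1]["entity_group"]:
            last, word = groups[-1], token["word"]
            last["word"] += word[2:] if word.startswith("##") else " " + word
            last["score"] = min(last["score"], token["score"])  # arbitrary choice
        else:
            groups.append({
                "entity_group": token["entity"],
                "score": token["score"],
                "word": token["word"],
            })
    return groups
```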
If not, I can fork the repo and introduce an `index` key to the dictionary output. I can send a PR for this if the use case is general enough. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3548/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/3548/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3547/comments | https://api.github.com/repos/huggingface/transformers/issues/3547/events | https://github.com/huggingface/transformers/pull/3547 | 590,927,217 | MDExOlB1bGxSZXF1ZXN0Mzk2MTgyOTQz | 3,547 | [T5, TF 2.2] change tf t5 argument naming | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | **Problem**:
As shown in #3539, in TF 2.2 errors occur due to the naming of the first argument in the `keras.layer.__call__` function of TF T5.
Previously `decoder_input_ids` was used as the first argument - which did not produce any errors in TF <= 2.1. In TF 2.2, it produces the error:
```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
797 else:
798 raise ValueError(
--> 799 'The first argument to `Layer.call` must always be passed.')
800
801 call_context = base_layer_utils.call_context()
ValueError: The first argument to `Layer.call` must always be passed.
```
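A minimal sketch of the convention this implies (an assumed illustration based on the error above, not code from this PR):
```python
import tensorflow as tf

class Demo(tf.keras.layers.Layer):
    def call(self, inputs, training=False):
        # Naming the first argument `inputs` (rather than e.g.
        # `decoder_input_ids`) matches what Keras passes positionally
        # in TF 2.2, so `layer(inputs)` keeps working across versions.
        return inputs

print(Demo()(tf.constant([1.0])))
```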
**Conclusion**
It seems that we have to switch to consistent naming, using `inputs` as the first argument of every Keras layer's `call` function. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3547/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3547",
"html_url": "https://github.com/huggingface/transformers/pull/3547",
"diff_url": "https://github.com/huggingface/transformers/pull/3547.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3547.patch",
"merged_at": 1585771460000
} |
https://api.github.com/repos/huggingface/transformers/issues/3546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3546/comments | https://api.github.com/repos/huggingface/transformers/issues/3546/events | https://github.com/huggingface/transformers/issues/3546 | 590,906,527 | MDU6SXNzdWU1OTA5MDY1Mjc= | 3,546 | Impossible to use T5 11b | {
"login": "sdan",
"id": 22898443,
"node_id": "MDQ6VXNlcjIyODk4NDQz",
"avatar_url": "https://avatars.githubusercontent.com/u/22898443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sdan",
"html_url": "https://github.com/sdan",
"followers_url": "https://api.github.com/users/sdan/followers",
"following_url": "https://api.github.com/users/sdan/following{/other_user}",
"gists_url": "https://api.github.com/users/sdan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sdan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sdan/subscriptions",
"organizations_url": "https://api.github.com/users/sdan/orgs",
"repos_url": "https://api.github.com/users/sdan/repos",
"events_url": "https://api.github.com/users/sdan/events{/privacy}",
"received_events_url": "https://api.github.com/users/sdan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"Update\r\n\r\nOn a regular CPU machine (not using GPUs), here's the benchmarks you'll need to load these models into memory and run them:\r\n\r\nBart-large-cnn (default): 5GB of RAM\r\nT5-small: 14GB\r\nT5-base: 20GB\r\nT5-large: 31GB\r\nT5-3b: 68GB\r\nT5-11b: 120GB\r\n\r\nSo, I was initially wrong: **You can run this on CPU, but you'll need a lot of RAM, or try out GPUs** "
] | 1,585 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
T5 11B param
Language I am using the model on (English, Chinese ...):
English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQuAD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
summarizer = pipeline(task="summarization", model="t5-11b", tokenizer="t5-11b")
summary = summarizer(
article,
min_length=5,
max_length=100
)
print("The Summary, ",summary[0]['summary_text'])
```
## Expected behavior
As indicated in [T5's repo](https://github.com/google-research/text-to-text-transfer-transformer), they used Mesh TensorFlow, which is (according to them) the only way to run inference with T5. This means the default CPU setting, and even a single-GPU setting, would result in the following:
1. CPU setting: runs forever (my experience)
2. GPU setting: OOM (although I haven't tested this).
Meaning the current 3b and 11b implementations are most likely unusable as-is.
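As a rough sanity check, here is a back-of-the-envelope estimate (my own numbers, fp32 weights only; actual peak usage is higher because loading keeps temporary copies):
```python
# Approximate published parameter counts for the T5 variants.
params = {"t5-small": 60e6, "t5-base": 220e6, "t5-large": 770e6,
          "t5-3b": 3e9, "t5-11b": 11e9}
for name, n in params.items():
    print(f"{name}: ~{n * 4 / 1e9:.1f} GB of fp32 weights")
```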
More testing needs to be done on whether this is the same for smaller models.
Get an output
## Environment info
- `transformers` version: 2.7.0
- Platform: GCP
- Python version: 3.7.7
- PyTorch version (GPU?): 1.1.0 and no GPU
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3546/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3545/comments | https://api.github.com/repos/huggingface/transformers/issues/3545/events | https://github.com/huggingface/transformers/pull/3545 | 590,877,230 | MDExOlB1bGxSZXF1ZXN0Mzk2MTQyMDcz | 3,545 | [T5, pipeline] fix bug in warnings | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | The warnings only took `self.model.config.max_length` into consideration, not the `max_length` parameter that was actually passed in.
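A standalone sketch of the corrected precedence (illustrative only, not the exact diff in this PR):
```python
def effective_max_length(passed_max_length, config_max_length):
    # Prefer the max_length the caller passed; fall back to the config value.
    return passed_max_length if passed_max_length is not None else config_max_length

assert effective_max_length(None, 20) == 20
assert effective_max_length(100, 20) == 100
```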
This PR fixes this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3545/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3545",
"html_url": "https://github.com/huggingface/transformers/pull/3545",
"diff_url": "https://github.com/huggingface/transformers/pull/3545.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3545.patch",
"merged_at": 1585771153000
} |
https://api.github.com/repos/huggingface/transformers/issues/3544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3544/comments | https://api.github.com/repos/huggingface/transformers/issues/3544/events | https://github.com/huggingface/transformers/pull/3544 | 590,798,909 | MDExOlB1bGxSZXF1ZXN0Mzk2MDc1ODAx | 3,544 | [examples] unit test for run_bart_sum | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=h1) Report\n> Merging [#3544](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e5c393dcebf42eaec9c1e1d619b5a7788a2d7c65&el=desc) will **increase** coverage by `0.97%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3544 +/- ##\n==========================================\n+ Coverage 76.84% 77.81% +0.97% \n==========================================\n Files 100 100 \n Lines 17064 17064 \n==========================================\n+ Hits 13112 13279 +167 \n+ Misses 3952 3785 -167 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.54% <0.00%> (+1.34%)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.77% <0.00%> (+2.29%)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `98.24% <0.00%> (+2.63%)` | :arrow_up: |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3544/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=footer). Last update [e5c393d...af9dd75](https://codecov.io/gh/huggingface/transformers/pull/3544?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM from a superficial glance",
"Planning on merging this April 15 at 7pm EST barring objections."
] | 1,585 | 1,586 | 1,586 | CONTRIBUTOR | null | - add lightning to `examples/requirements.txt`
- first lightning unittest! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3544/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3544",
"html_url": "https://github.com/huggingface/transformers/pull/3544",
"diff_url": "https://github.com/huggingface/transformers/pull/3544.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3544.patch",
"merged_at": 1586990101000
} |
https://api.github.com/repos/huggingface/transformers/issues/3543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3543/comments | https://api.github.com/repos/huggingface/transformers/issues/3543/events | https://github.com/huggingface/transformers/pull/3543 | 590,776,048 | MDExOlB1bGxSZXF1ZXN0Mzk2MDU1OTc5 | 3,543 | [testing] add timeout_decorator | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Feels like it could be reimplemented in a few lines of code – do we need to add a new dependency for this?",
"I copy pasted it. Would love to understand more about the adding a dependency vs maintaining code tradeoff!",
"I think that's a great addition! Trying to keep the tests short is very important I think :-) ",
"This is too long to copy/paste, so I'd see two options:\r\n- add it as a dependency to extras[\"testing\"]\r\n- take just the signals based implem, clean it up/distill it down to a few (10) lines of code and add it in `tests/`",
"I think we should do `extras['testing']` (that was my first attempt on this PR).\r\nIf we delete the `signals=False` logic, I don't think circleci will work in distributed testing mode.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=h1) Report\n> Merging [#3543](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8686174be75220d2c26a961597a39ef4921b616&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3543 +/- ##\n==========================================\n+ Coverage 78.84% 78.85% +0.01% \n==========================================\n Files 114 114 \n Lines 18691 18691 \n==========================================\n+ Hits 14737 14739 +2 \n+ Misses 3954 3952 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |\n| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3543/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.90% <0.00%> (+0.34%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=footer). Last update [b868617...8b22919](https://codecov.io/gh/huggingface/transformers/pull/3543?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,588 | 1,588 | CONTRIBUTOR | null | This is a simple way to make sure code doesn't get slower over time. Since it is a new dependency, I wanted to show a tiny PR before I use it more. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3543/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3543",
"html_url": "https://github.com/huggingface/transformers/pull/3543",
"diff_url": "https://github.com/huggingface/transformers/pull/3543.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3543.patch",
"merged_at": 1588338348000
} |
https://api.github.com/repos/huggingface/transformers/issues/3542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3542/comments | https://api.github.com/repos/huggingface/transformers/issues/3542/events | https://github.com/huggingface/transformers/issues/3542 | 590,762,213 | MDU6SXNzdWU1OTA3NjIyMTM= | 3,542 | KeyError: 'answers' error when using BioASQ dataset using Huggingface Transformers | {
"login": "urvashikhanna",
"id": 32611800,
"node_id": "MDQ6VXNlcjMyNjExODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32611800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/urvashikhanna",
"html_url": "https://github.com/urvashikhanna",
"followers_url": "https://api.github.com/users/urvashikhanna/followers",
"following_url": "https://api.github.com/users/urvashikhanna/following{/other_user}",
"gists_url": "https://api.github.com/users/urvashikhanna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/urvashikhanna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/urvashikhanna/subscriptions",
"organizations_url": "https://api.github.com/users/urvashikhanna/orgs",
"repos_url": "https://api.github.com/users/urvashikhanna/repos",
"events_url": "https://api.github.com/users/urvashikhanna/events{/privacy}",
"received_events_url": "https://api.github.com/users/urvashikhanna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you checked whether the Bioasq format suits the Huggingface interface/format?\r\nBecause the Bioasq does not natively support a reading comprehension task as defined in SquaD",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,593 | 1,593 | NONE | null | # 🐛 Bug
I am using BERT on the BioASQ question answering dataset with the run_squad.py script from Huggingface Transformers.
## To reproduce
Steps to reproduce the behavior:
1. I am using run_squad.py (https://github.com/huggingface/transformers/blob/master/examples/run_squad.py) from Huggingface Transformers for fine-tuning on the BioASQ question answering dataset.
2. I have converted the TensorFlow weights provided by the authors of BioBERT (https://github.com/dmis-lab/bioasq-biobert) to PyTorch, as discussed in https://github.com/huggingface/transformers/issues/312.
3. Further, I am using the preprocessed BioASQ data (https://github.com/dmis-lab/bioasq-biobert), which is already converted to the SQuAD format. However, when I run the run_squad.py script with the parameters below,
```bash
python3 run_squad.py \
    --model_type bert \
    --model_name_or_path /scratch/oe7/uk1594/BioBERT/BioBERT-PyTorch/BioBERTv1.1-SQuADv1.1-Factoid-PyTorch/ \
    --do_train \
    --do_eval \
    --save_steps 1000 \
    --train_file $data/BioASQ-train-factoid-6b.json \
    --predict_file $data/BioASQ-test-factoid-6b-1.json \
    --per_gpu_train_batch_size 12 \
    --learning_rate 3e-5 \
    --num_train_epochs 2.0 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir /scratch/oe7/uk1594/BioBERT/BioBERT-PyTorch/QA_output_squad/BioASQ-factoid-6b/BioASQ-factoid-6b-1-issue-23mar/
```
I get the below error:
```
  0%|          | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "run_squad.py", line 856, in <module>
    main()
  File "run_squad.py", line 845, in main
    result = evaluate(args, model, tokenizer, prefix=global_step)
  File "run_squad.py", line 299, in evaluate
    dataset, examples, features = load_and_cache_examples(args, tokenizer, evaluate=True, output_examples=True)
  File "run_squad.py", line 475, in load_and_cache_examples
    examples = processor.get_dev_examples(args.data_dir, filename=args.predict_file)
  File "/scratch/oe7/uk1594/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 522, in get_dev_examples
    return self._create_examples(input_data, "dev")
  File "/scratch/oe7/uk1594/lib/python3.7/site-packages/transformers/data/processors/squad.py", line 549, in _create_examples
    answers = qa["answers"]
KeyError: 'answers'
```
- `transformers` version: Latest
- Platform:
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: Yes, NVIDIA Tesla P100
- Using distributed or parallel set-up in script?: No
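For anyone hitting the same `KeyError`: `SquadProcessor._create_examples` reads `qa["answers"]` for every dev entry, so one hedged workaround is to give the converted test file an empty `answers` list per question before running the script. The sketch below assumes the file follows the standard SQuAD v1 nesting (`data -> paragraphs -> qas`); whether the dmis-lab conversion matches that exactly is an assumption, and the file name is simply the `--predict_file` from the command above.
```python
# Hedged workaround sketch: add an empty "answers" list to every question so
# transformers' SQuAD dev-set processor no longer raises KeyError: 'answers'.
# Assumes the standard SQuAD v1 nesting: data -> paragraphs -> qas.
import json

path = "BioASQ-test-factoid-6b-1.json"  # the --predict_file used above
with open(path) as f:
    dataset = json.load(f)

for article in dataset["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            qa.setdefault("answers", [])

with open("BioASQ-test-factoid-6b-1.patched.json", "w") as f:
    json.dump(dataset, f)
```
Note that with empty gold answers the script's EM/F1 numbers are meaningless; BioASQ predictions still need the official evaluation scripts.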
Appreciate your help.
Thanks!!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3542/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3541/comments | https://api.github.com/repos/huggingface/transformers/issues/3541/events | https://github.com/huggingface/transformers/issues/3541 | 590,610,506 | MDU6SXNzdWU1OTA2MTA1MDY= | 3,541 | forward() got an unexpected keyword argument 'output_all_encoded_layers' | {
"login": "gogokre",
"id": 44871498,
"node_id": "MDQ6VXNlcjQ0ODcxNDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/44871498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gogokre",
"html_url": "https://github.com/gogokre",
"followers_url": "https://api.github.com/users/gogokre/followers",
"following_url": "https://api.github.com/users/gogokre/following{/other_user}",
"gists_url": "https://api.github.com/users/gogokre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gogokre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gogokre/subscriptions",
"organizations_url": "https://api.github.com/users/gogokre/orgs",
"repos_url": "https://api.github.com/users/gogokre/repos",
"events_url": "https://api.github.com/users/gogokre/events{/privacy}",
"received_events_url": "https://api.github.com/users/gogokre/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This should be added to the initializer, not the forward method. Also, the correct parameter is `output_hidden_states`. You can also be more explicit by changing the config:\r\n\r\n```python\r\nmodel_config = AutoConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\nself.bert = AutoModel.from_pretrained('bert-base-uncased', config=model_config)\r\n```",
"> This should be added to the initializer, not the forward method. Also, the correct parameter is `output_hidden_states`. You can also be more explicit by changing the config:\r\n> \r\n> ```python\r\n> model_config = AutoConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\n> self.bert = AutoModel.from_pretrained('bert-base-uncased', config=model_config)\r\n> ```\r\n\r\nThank you very much for answering. \r\nSorry, ...can you please write the entire code?",
"In your code, replace\r\n\r\n```python\r\nself.bert = BertModel.from_pretrained('bert-base-uncased')\r\n```\r\n\r\nwith\r\n\r\n```python\r\nmodel_config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\nself.bert = BertModel.from_pretrained('bert-base-uncased', config=model_config)\r\n```\r\n\r\nand don't forget to import BertConfig at the top.\r\n\r\nIn the future, please format your code correctly. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks"
] | 1,585 | 1,585 | 1,585 | NONE | null | I am getting the error `forward() got an unexpected keyword argument 'output_all_encoded_layers'`; how can I fix it?
```python
class BertBinaryClassifier(nn.Module):
    def __init__(self, dropout=0.1):
        super(BertBinaryClassifier, self).__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(768, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, tokens, masks=None):
        _, pooled_output = self.bert(tokens, attention_mask=masks, output_all_encoded_layers=False)
        dropout_output = self.dropout(pooled_output)
        linear_output = self.linear(dropout_output)
        proba = self.sigmoid(linear_output)
        return proba

bert_clf = BertBinaryClassifier()
bert_clf = bert_clf.cuda()

x = torch.tensor(train_tokens_ids[:3]).to(device)
y, pooled = bert_clf.bert(x, output_all_encoded_layers=False)
x.shape, y.shape, pooled.shape
```
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-22-def915600e8d> in <module>()
      1 x = torch.tensor(train_tokens_ids[:3]).to(device)
----> 2 y, pooled = bert_clf.bert(x, output_all_encoded_layers=False)
      3 x.shape, y.shape, pooled.shape

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

TypeError: forward() got an unexpected keyword argument 'output_all_encoded_layers'
```
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3541/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3541/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3540/comments | https://api.github.com/repos/huggingface/transformers/issues/3540/events | https://github.com/huggingface/transformers/issues/3540 | 590,592,868 | MDU6SXNzdWU1OTA1OTI4Njg= | 3,540 | Quick Tour TF2.0 error: dataclasses.FrozenInstanceError: cannot assign to field 'label' | {
"login": "anhminh3105",
"id": 18170028,
"node_id": "MDQ6VXNlcjE4MTcwMDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18170028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anhminh3105",
"html_url": "https://github.com/anhminh3105",
"followers_url": "https://api.github.com/users/anhminh3105/followers",
"following_url": "https://api.github.com/users/anhminh3105/following{/other_user}",
"gists_url": "https://api.github.com/users/anhminh3105/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anhminh3105/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anhminh3105/subscriptions",
"organizations_url": "https://api.github.com/users/anhminh3105/orgs",
"repos_url": "https://api.github.com/users/anhminh3105/repos",
"events_url": "https://api.github.com/users/anhminh3105/events{/privacy}",
"received_events_url": "https://api.github.com/users/anhminh3105/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"This was fixed yesterday, can you try installing from master?",
"verified that it's fixed in master. However, the bug remains when installing from pip. ",
"We'll ship a new pip release soon, but in any case we'll try to update the code so that the TF script can run with an immutable `InputExample` (as discussed w/ @jplu)",
"I have just had the same problem. Can you show me how to fix it specifically?",
"[Install from source](https://github.com/huggingface/transformers#from-source)",
"I am executing finetune_llama2_guanaco_7b.sh using qlora. I am getting the below error. Attaching the stack trace.\r\n\r\nqlora.py\", line 841, in <module>\r\n train()\r\n qlora.py\", line 694, in train\r\n training_args.generation_config = transformers.GenerationConfig(**vars(generation_args))\r\n File \"qlora/.venv/lib/python3.10/site-packages/transformers/training_args.py\", line 1714, in __setattr__\r\n raise FrozenInstanceError(f\"cannot assign to field {name}\")\r\ndataclasses.FrozenInstanceError: cannot assign to field generation_config\r\n\r\nCan you give pls suggest me a solution to fix this issue?",
"Are you using this [script](https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da)? Are you sure you are using the latest version of `transformers`? ",
"I am using the run_summarization.py to finetune and infer the mt5 model. \r\nThe script is https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py\r\ntransformers_version is **4.32.0**\r\n\r\nI get the below error:\r\n Traceback (most recent call last):\r\n File \"./code/run_summarization.py\", line 902, in <module>\r\n main()\r\n File \"./code/run_summarization.py\", line 764, in main\r\n t**raining_args.generation_max_length** = (\r\n File \"/tmp/env/lib/python3.8/site-packages/transformers/training_args.py\", line 1712, in __setattr__\r\n raise **FrozenInstanceError(f\"cannot assign to field {name}\")**\r\ndataclasses.FrozenInstanceError: cannot assign to field generation_max_length\r\n\r\nCould you please help look? Any help will be appreciated!\r\n",
"cc @muellerzr and @gante We have some needed updates in the script.",
"I am trying to use the run_mae.py to finetune an MAE model on my own dataset.\r\nThis is the referenced script: https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mae.py\r\nand the minimum version for the transformers is:\r\ntransformers_version is 4.32.0.dev0\r\n\r\nI encountered the below error:\r\nTraceback (most recent call last):\r\n File \"run_mae_mydata.py\", line 400, in <module>\r\n main()\r\n File \"run_mae_mydata.py\", line 351, in main\r\n training_args.learning_rate = training_args.base_learning_rate * total_train_batch_size / 256\r\n File \"./miniconda3/envs/mae/lib/python3.8/site-packages/transformers/training_args.py\", line 1712, in __setattr__\r\n raise FrozenInstanceError(f\"cannot assign to field {name}\")\r\ndataclasses.FrozenInstanceError: cannot assign to field learning_rate\r\n\r\nAny advice/help is appreciated.",
"After I downgraded transfromers from 4.32 to 4.31, this bug just went away. Hope it will help someone who meets this bug too.",
"@tuzeao @bgoldenboy please use the latest version of the script and let me know if you still face issues: https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mae.py (this was fixed a week ago or two)",
"> @tuzeao @bgoldenboy please use the latest version of the script and let me know if you still face issues: https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mae.py (this was fixed a week ago or two)\r\n\r\nThank you for your reply. I used the updated script and it works now! Thanks again!",
"Same problem here when upgrading transformers 4.32->4.31",
"This has been reverted on main @ndvbd, either install via `pip install git+https://github.com/huggingface/transformers`, or use the version of the script I linked",
"The whole frozen arguments thing was removed in main? I'm trying to change seed, and few more.",
"Yes it was. See https://github.com/huggingface/transformers/pull/25903 (which discusses why)",
"> I am executing finetune_llama2_guanaco_7b.sh using qlora. I am getting the below error. Attaching the stack trace.\r\n> \r\n> qlora.py\", line 841, in train() qlora.py\", line 694, in train training_args.generation_config = transformers.GenerationConfig(**vars(generation_args)) File \"qlora/.venv/lib/python3.10/site-packages/transformers/training_args.py\", line 1714, in **setattr** raise FrozenInstanceError(f\"cannot assign to field {name}\") dataclasses.FrozenInstanceError: cannot assign to field generation_config\r\n> \r\n> Can you give pls suggest me a solution to fix this issue?\r\n\r\nChange:\r\n```\r\ntraining_args.generation_config = transformers.GenerationConfig(**vars(generation_args)) \r\n```\r\nTo:\r\n```\r\nimport dataclasses\r\nimport transformers\r\ntraining_args = dataclasses.replace(\r\n training_args,\r\n generation_config = transformers.GenerationConfig(**vars(generation_args))\r\n)\r\n````\r\nIn case of updating, e.g., max_length: \r\n```\r\n# training_args.max_length = 1024 # got error\r\ntraining_args = dataclasses.replace(training_args, max_length=1024)\r\n```\r\n"
] | 1,585 | 1,694 | 1,693 | NONE | null | # 🐛 Bug
## Information
Model I am using: bert-base-cased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. install TensorFlow 2 with conda `conda install tensorflow`
2. install Transformers either from source or using pip `pip install transformers`
3. run the Quick Tour TF 2 example with the following content:
```python
import tensorflow as tf
import tensorflow_datasets
from transformers import *
# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
# Prepare dataset for GLUE as a tf.data.Dataset instance
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
# Prepare training: Compile tf.keras model with optimizer, loss and learning rate schedule
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
# Train and evaluate using tf.keras.Model.fit()
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
# Load the TensorFlow model in PyTorch for inspection
model.save_pretrained('./save/')
```
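Until the fix ships in a pip release, one hedged workaround is to rebuild the frozen example instead of mutating it. The patch below is a sketch against transformers 2.7.0, where `InputExample` is a frozen dataclass and `tfds_map` lives on `DataProcessor`; installing from source, as suggested in the comments above, is the cleaner route.
```python
# Workaround sketch (assumes transformers 2.7.0): replace DataProcessor.tfds_map
# so it rebuilds the frozen InputExample with dataclasses.replace instead of
# assigning to its frozen "label" field.
import dataclasses
from transformers.data.processors.utils import DataProcessor

def tfds_map(self, example):
    if len(self.get_labels()) > 1:
        example = dataclasses.replace(example, label=self.get_labels()[int(example.label)])
    return example

DataProcessor.tfds_map = tfds_map
```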
## Expected behavior
```
Traceback (most recent call last):
File "quick_tour_tf2.py", line 11, in <module>
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\glue.py", line 86, in glue_convert_examples_to_features
example = processor.tfds_map(example)
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\utils.py", line 115, in tfds_map
example.label = self.get_labels()[int(example.label)]
File "<string>", line 4, in __setattr__
dataclasses.FrozenInstanceError: cannot assign to field 'label'
```
### Update:
I have recently installed PyTorch and tried out `examples/run_tf_glue.py`, and the same error occurred.
```
(transformers) C:\Users\Anh Minh\Workspace\transformers_my_codes>python run_tf_glue.py
2020-03-31 10:43:55.555102: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2
2020-03-31 10:44:02.576281: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2020-03-31 10:44:02.669572: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2080 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-03-31 10:44:02.679337: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-03-31 10:44:02.683708: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-03-31 10:44:02.689044: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-03-31 10:44:02.693280: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-03-31 10:44:02.762552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-03-31 10:44:02.767982: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-03-31 10:44:02.773095: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-03-31 10:44:02.779069: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-03-31 10:44:02.789070: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2020-03-31 10:44:02.799782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce RTX 2080 computeCapability: 7.5
coreClock: 1.59GHz coreCount: 46 deviceMemorySize: 8.00GiB deviceMemoryBandwidth: 417.29GiB/s
2020-03-31 10:44:02.809292: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2020-03-31 10:44:02.813862: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2020-03-31 10:44:02.818889: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2020-03-31 10:44:02.823516: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2020-03-31 10:44:02.828140: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2020-03-31 10:44:02.833958: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2020-03-31 10:44:02.839710: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2020-03-31 10:44:02.845469: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-03-31 10:44:05.483986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-03-31 10:44:05.489238: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] 0
2020-03-31 10:44:05.492138: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0: N
2020-03-31 10:44:05.499953: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6269 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080, pci bus id: 0000:01:00.0, compute capability: 7.5)
2020-03-31 10:44:06.412558: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
INFO:absl:Overwrite dataset info from restored data version.
INFO:absl:Reusing dataset glue (C:\Users\Anh Minh\tensorflow_datasets\glue\mrpc\1.0.0)
INFO:absl:Constructing tf.data.Dataset for split None, from C:\Users\Anh Minh\tensorflow_datasets\glue\mrpc\1.0.0
Traceback (most recent call last):
File "run_tf_glue.py", line 51, in <module>
train_dataset = glue_convert_examples_to_features(data["train"], tokenizer, 128, TASK)
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\glue.py", line 86, in glue_convert_examples_to_features
example = processor.tfds_map(example)
File "C:\Users\Anh Minh\.conda\envs\transformers\lib\site-packages\transformers\data\processors\utils.py", line 115, in tfds_map
example.label = self.get_labels()[int(example.label)]
File "<string>", line 4, in __setattr__
dataclasses.FrozenInstanceError: cannot assign to field 'label'
```
The issue has been resolved by reinstalling Transformers 2.5.0.
## Environment info
- `transformers` version: 2.7.0
- Platform: Windows 10
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0 on GPU
- Tensorflow version (GPU?): 2.1 on GPU
- Using GPU in script?: yes, RTX 2080
- Using distributed or parallel set-up in script?: Unavailable | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3540/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3539/comments | https://api.github.com/repos/huggingface/transformers/issues/3539/events | https://github.com/huggingface/transformers/issues/3539 | 590,577,915 | MDU6SXNzdWU1OTA1Nzc5MTU= | 3,539 | T5 Summarization | {
"login": "cformosa",
"id": 13603877,
"node_id": "MDQ6VXNlcjEzNjAzODc3",
"avatar_url": "https://avatars.githubusercontent.com/u/13603877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cformosa",
"html_url": "https://github.com/cformosa",
"followers_url": "https://api.github.com/users/cformosa/followers",
"following_url": "https://api.github.com/users/cformosa/following{/other_user}",
"gists_url": "https://api.github.com/users/cformosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cformosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cformosa/subscriptions",
"organizations_url": "https://api.github.com/users/cformosa/orgs",
"repos_url": "https://api.github.com/users/cformosa/repos",
"events_url": "https://api.github.com/users/cformosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/cformosa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi @cformosa, \r\n\r\nThanks for posting this bug. This seems to be related to the new TF 2.2 release. \r\nCould you instead use TF2.1 for the moment:\r\n\r\n```\r\n!pip install transformers\r\n!pip install tensorflow==2.1\r\nfrom transformers import pipeline\r\n\r\nsummarizer = pipeline(\"summarization\", model=\"t5-base\", tokenizer=\"t5-base\", framework=\"tf\")\r\n\r\nsummarizer(\"Sam Shleifer writes the best docstring examples in the whole world.\", min_length=5, max_length=10\r\n```\r\n\r\nPlease let me know if you still encounter problems.\r\n"
] | 1,585 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
T5 summarization code in pipelines.py file gives an error.
## Information
Model I am using: T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
pipelines.py official documentation example at line 1146
```
# use t5 in tf
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("Sam Shleifer writes the best docstring examples in the whole world.", min_length=5, max_length=20)
```
## To reproduce
In Google Colab, I did the following:
```
!pip install transformers
from transformers import pipeline
summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="tf")
summarizer("Sam Shleifer writes the best docstring examples in the whole world.", min_length=5, max_length=10)
```
And this gets the following error:
```
Your max_length is set to 200, but you input_length is only 18. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-0fc603a01733> in <module>()
----> 1 summarizer("Sam Shleifer writes the best docstring examples in the whole world.", min_length=5, max_length=10)
3 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
797 else:
798 raise ValueError(
--> 799 'The first argument to `Layer.call` must always be passed.')
800
801 call_context = base_layer_utils.call_context()
ValueError: The first argument to `Layer.call` must always be passed.
```
- `transformers` version: 2.7.0
- Platform: Google Colab
- PyTorch version (GPU?): 1.4.0 GPU enabled
- Tensorflow version (GPU?): 2.2.0-rc1 GPU enabled
- Using GPU in script?:
- Using distributed or parallel set-up in script?: No
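A possible stopgap while TF 2.2 support is pending: run the same pipeline on the PyTorch backend, which sidesteps the failing Keras code path entirely. This is a sketch that assumes torch is installed; `framework="pt"` selects the PyTorch implementation.
```python
# Stopgap sketch: run the summarization pipeline on the PyTorch backend
# (assumes torch is installed) instead of the TF 2.2 code path that fails above.
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="pt")
print(summarizer("Sam Shleifer writes the best docstring examples in the whole world.",
                 min_length=5, max_length=20))
```
Pinning `tensorflow==2.1`, as suggested in the comment above, is the other option.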
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3539/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3538/comments | https://api.github.com/repos/huggingface/transformers/issues/3538/events | https://github.com/huggingface/transformers/pull/3538 | 590,576,179 | MDExOlB1bGxSZXF1ZXN0Mzk1ODg1ODk1 | 3,538 | [Docs] Add usage examples for translation and summarization | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | Adds docs. Fastest way to check is the changes in "rich diff" format. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3538/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3538",
"html_url": "https://github.com/huggingface/transformers/pull/3538",
"diff_url": "https://github.com/huggingface/transformers/pull/3538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3538.patch",
"merged_at": 1585661764000
} |
https://api.github.com/repos/huggingface/transformers/issues/3537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3537/comments | https://api.github.com/repos/huggingface/transformers/issues/3537/events | https://github.com/huggingface/transformers/pull/3537 | 590,544,482 | MDExOlB1bGxSZXF1ZXN0Mzk1ODU5MDAx | 3,537 | Add model cards | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | Add IMDB tuned classifier and LMs model cards. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3537/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3537",
"html_url": "https://github.com/huggingface/transformers/pull/3537",
"diff_url": "https://github.com/huggingface/transformers/pull/3537.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3537.patch",
"merged_at": 1585655685000
} |
https://api.github.com/repos/huggingface/transformers/issues/3536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3536/comments | https://api.github.com/repos/huggingface/transformers/issues/3536/events | https://github.com/huggingface/transformers/pull/3536 | 590,535,333 | MDExOlB1bGxSZXF1ZXN0Mzk1ODUxMzQy | 3,536 | [Encoder-Decoder] Force models outputs to always have batch_size as their first dim | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=h1) Report\n> Merging [#3536](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6f5a12a5833d1e3783e4b8a42cb556b64085745e&el=desc) will **decrease** coverage by `0.49%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3536 +/- ##\n==========================================\n- Coverage 78.30% 77.80% -0.50% \n==========================================\n Files 100 100 \n Lines 17062 17062 \n==========================================\n- Hits 13360 13275 -85 \n- Misses 3702 3787 +85 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.48% <ø> (-0.05%)` | :arrow_down: |\n| [src/transformers/modeling\\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `97.59% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.80% <100.00%> (-0.02%)` | :arrow_down: |\n| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/3536/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.89% <0.00%> (-27.60%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=footer). Last update [6f5a12a...89d0945](https://codecov.io/gh/huggingface/transformers/pull/3536?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | This PR removes the hard-coded variable `encoder_outputs_batch_dim_idx` from Bart and T5 by transposing BART's `encoder_outputs` dimensions before returning them.
**Reasons:**
- When adding more encoder-decoder models, we would always force the newly added model to have this variable
- When adding the modeling_encoder_decoder.py file, models that could be used in an encoder-decoder structure would also need this attribute; we would have to add it to Bert, for example
- `encoder_outputs_batch_dim_idx` is a hard-coded variable that I don't think is very pretty
**Trade-off:**
- Now every encoder output in an encoder-decoder model has to have `batch_size` as its first dimension (illustrated by the sketch below).
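As a toy illustration of that convention (an editor's sketch, not the actual diff in this PR), moving a time-major encoder state into batch-major order is a single transpose in PyTorch:
```python
# Toy sketch of the convention, not this PR's diff: convert a time-major
# encoder state (seq_len, batch_size, hidden_size) to batch-major order.
import torch

encoder_state = torch.randn(12, 4, 16)         # (seq_len, batch_size, hidden_size)
encoder_state = encoder_state.transpose(0, 1)  # (batch_size, seq_len, hidden_size)
assert encoder_state.shape == (4, 12, 16)
```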
This PR is related to a question that already came up before (see #3120):
*Should we force all model outputs to have `batch_size` as their first dimension*?
I think it would be good for every output exposed to the user to have `batch_size` as its first dimension. @thomwolf @LysandreJik @sshleifer @julien-c | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3536/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3536",
"html_url": "https://github.com/huggingface/transformers/pull/3536",
"diff_url": "https://github.com/huggingface/transformers/pull/3536.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3536.patch",
"merged_at": 1585833514000
} |
https://api.github.com/repos/huggingface/transformers/issues/3535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3535/comments | https://api.github.com/repos/huggingface/transformers/issues/3535/events | https://github.com/huggingface/transformers/issues/3535 | 590,533,973 | MDU6SXNzdWU1OTA1MzM5NzM= | 3,535 | Error on fine-tuning XLM like model on SQUaD like dataset | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/followers",
"following_url": "https://api.github.com/users/mrm8488/following{/other_user}",
"gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions",
"organizations_url": "https://api.github.com/users/mrm8488/orgs",
"repos_url": "https://api.github.com/users/mrm8488/repos",
"events_url": "https://api.github.com/users/mrm8488/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrm8488/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, any follow-up in this thread? I have received the same error with yours, but a different parameter 'model_name_or_path = xlm-mlm-tlm-xnli15-1024' is used in my experiment.",
"I had to revert all the way to 2.5.1 to get this to work (xlnet-base fine-tuning on SQuAD 1.1), FWIW, so it's been broken for a bit...",
"Thanks @nelson-liu ",
"Cc @julien-c ",
"> I had to revert all the way to 2.5.1\r\nThanks @nelson-liu\r\nusing `run_squad.py` at `huggingface/transformers/v2.5.1/examples`\r\nhttps://raw.githubusercontent.com/huggingface/transformers/v2.5.1/examples/run_squad.py",
"I encountered a similar error fine-tuning a RoBERTa model on a SWAG-like dataset using the example scripts. The problem appears to be that the transformers.Trainer object defined [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py) unpacks all the properties of the InputFeatures as arguments to the model's `forward`, like so:\r\n```\r\nfor k, v in inputs.items():\r\n inputs[k] = v.to(self.args.device)\r\n ...\r\n outputs = model(**inputs)\r\n```\r\nThe problem is that the InputFeatures have properties like example_id that are not valid keyword args for `forward`. (Same problem for this ticket: SquadFeatures has cls_index). \r\n\r\nAs a workaround, I'm removing the example_id property from InputFeatures. Long-term, maybe the Trainer should be more selective about which arguments it passes?\r\n",
"`run_squad.py` doesn't currently use the Trainer so this is probably a distinct issue, @steeter-cyclist .",
"Same query as @andyweizhao. Any updates? Reverting to v2.5.1 throws ImportError: cannot import name 'MODEL_FOR_QUESTION_ANSWERING_MAPPING'",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Still occurs\r\n```\r\nIteration: 0%| | 0/44511 [00:00<?, ?it/s]\r\nEpoch: 0%| | 0/4 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"run_squad.py\", line 820, in <module>\r\n main()\r\n File \"run_squad.py\", line 763, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_squad.py\", line 202, in train\r\n outputs = model(**inputs)\r\n File \"C:\\Users\\erann\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 72\r\n2, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'cls_index'\r\n```\r\n\r\nhttps://github.com/huggingface/transformers/issues/6360 shows stale-bot already kicked into action in there as well"
] | 1,585 | 1,604 | 1,602 | CONTRIBUTOR | null | # 🐛 Bug
## Information
I am trying to fine-tune the model [xlm-mlm-100-1280](https://huggingface.co/xlm-mlm-100-1280) on a SQuAD-v1-style dataset (TyDi QA) with the script provided for this task (transformers/examples/run_squad.py), and I get the following error:
```
Epoch: 0% 0/5 [00:00<?, ?it/s]
Iteration: 0% 0/2383 [00:00<?, ?it/s]Traceback (most recent call last):
File "/content/transformers/examples/run_squad.py", line 829, in <module>
main()
File "/content/transformers/examples/run_squad.py", line 768, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "/content/transformers/examples/run_squad.py", line 204, in train
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'cls_index'
Epoch: 0% 0/5 [00:00<?, ?it/s]
Iteration: 0% 0/2383 [00:00<?, ?it/s]
```
## To reproduce
```bash
git clone https://github.com/huggingface/transformers
pip install -q ./transformers
# wget <your SQuAD-v1-style dataset>
python /content/transformers/examples/run_squad.py \
--model_type xlm \
--model_name_or_path xlm-mlm-100-1280 \
--do_lower \
--do_train \
--do_eval \
--train_file /content/dataset/tydiqa-goldp-v1.0-train.json \
--predict_file /content/dataset/tydiqa-goldp-v1.0-dev.json \
--per_gpu_train_batch_size 24 \
--per_gpu_eval_batch_size 128 \
--learning_rate 3e-5 \
--num_train_epochs 5 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /content/model_output \
--overwrite_output_dir \
--save_steps 2000 \
--threads 400
```
## Environment info
- `transformers` version: 2.7.0
- Platform: Linux-4.14.137+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.2.0-rc1 (True)
- Using GPU in script?: Yes. Nvidia Tesla P100
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3535/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3535/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3534/comments | https://api.github.com/repos/huggingface/transformers/issues/3534/events | https://github.com/huggingface/transformers/issues/3534 | 590,372,412 | MDU6SXNzdWU1OTAzNzI0MTI= | 3,534 | pretrained EsperBERTo | {
"login": "vr25",
"id": 22553367,
"node_id": "MDQ6VXNlcjIyNTUzMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22553367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vr25",
"html_url": "https://github.com/vr25",
"followers_url": "https://api.github.com/users/vr25/followers",
"following_url": "https://api.github.com/users/vr25/following{/other_user}",
"gists_url": "https://api.github.com/users/vr25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vr25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vr25/subscriptions",
"organizations_url": "https://api.github.com/users/vr25/orgs",
"repos_url": "https://api.github.com/users/vr25/repos",
"events_url": "https://api.github.com/users/vr25/events{/privacy}",
"received_events_url": "https://api.github.com/users/vr25/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834053007,
"node_id": "MDU6TGFiZWwxODM0MDUzMDA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20LM%20(Pretraining)",
"name": "Ex: LM (Pretraining)",
"color": "76FFAF",
"default": false,
"description": "Related to language modeling pre-training"
},
{
"id": 1834081910,
"node_id": "MDU6TGFiZWwxODM0MDgxOTEw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Usage",
"name": "Usage",
"color": "e28436",
"default": false,
"description": "General questions about the library"
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | Hi,
I am trying to replicate the pretraining process mentioned in this blog post: https://huggingface.co/blog/how-to-train
I have time-restricted access to the GPU I'm currently working on, so I'd like to know how to save checkpoints and resume the pretraining process from the latest one.
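A minimal resume sketch, assuming the blog's `run_language_modeling.py` setup (all paths and the checkpoint step number are illustrative): the script writes `checkpoint-<step>` folders every `--save_steps`, and `from_pretrained` can reload one.
```python
from transformers import RobertaForMaskedLM, RobertaTokenizer

# Assumption: training ran with --save_steps, so a folder like
# ./EsperBERTo/checkpoint-1500 exists under the output directory.
model = RobertaForMaskedLM.from_pretrained("./EsperBERTo/checkpoint-1500")
# The tokenizer does not change during training; reload it from wherever
# its vocab/merges files were saved.
tokenizer = RobertaTokenizer.from_pretrained("./EsperBERTo")
```
Passing the checkpoint directory as `--model_name_or_path` back to the training script should have the same effect.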
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3533/comments | https://api.github.com/repos/huggingface/transformers/issues/3533/events | https://github.com/huggingface/transformers/issues/3533 | 590,339,617 | MDU6SXNzdWU1OTAzMzk2MTc= | 3,533 | Error when training with distributed training on 4/8 Nvidia v100. | {
"login": "timsoraro",
"id": 61194445,
"node_id": "MDQ6VXNlcjYxMTk0NDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/61194445?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timsoraro",
"html_url": "https://github.com/timsoraro",
"followers_url": "https://api.github.com/users/timsoraro/followers",
"following_url": "https://api.github.com/users/timsoraro/following{/other_user}",
"gists_url": "https://api.github.com/users/timsoraro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timsoraro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timsoraro/subscriptions",
"organizations_url": "https://api.github.com/users/timsoraro/orgs",
"repos_url": "https://api.github.com/users/timsoraro/repos",
"events_url": "https://api.github.com/users/timsoraro/events{/privacy}",
"received_events_url": "https://api.github.com/users/timsoraro/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Out of curiosity, does the same issue arise without fp16?",
"@BramVanroy Yes.",
"Someone's help, please?",
"Have look here: https://github.com/pytorch/pytorch/issues/22436",
"@BramVanroy I'm sorry, the repo is working fine with distributed training. I found the error comes from adding special tokens:\r\n```python\r\nSPECIAL_TOKENS_DICT = {'additional_special_tokens': ['token1', 'token2']}\r\ntokenizer.add_special_tokens(SPECIAL_TOKENS_DICT)\r\n```\r\nIt's weird because I didn't get any error with only 1 GPU.\r\nI solved it by doing:\r\n```python\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n```\r\n@BramVanroy Can you please confirm it's the right way?",
"Yes, after modifying the vocabulary of the tokenizer, you also need to propagate those changes to the model's embeddings.\r\n\r\nIf your problem is fixed, please close this topic.",
"Great, thanks!"
] | 1,585 | 1,585 | 1,585 | NONE | null | # 🐛 Bug
## Information
I'm getting the following error while training the official implementation in [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) on the [WikiText-2 dataset](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) using a multi-GPU setup (4 or 8 NVIDIA GPUs).
```python
Traceback (most recent call last):
File "run_language_modeling.py", line 976, in <module>
main()
File "run_language_modeling.py", line 926, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_language_modeling.py", line 513, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 457, in forward
self.reducer.prepare_for_backward(list(_find_tensors(output)))
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /pytorch/torch/csrc/distributed/c10d/reducer.cpp:518)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f66375cb273 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: c10d::Reducer::prepare_for_backward(std::vector<torch::autograd::Variable, std::allocator<torch::autograd::Variable> > const&) + 0x734 (0x7f66822b09e4 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #2: <unknown function> + 0x691a4c (0x7f668229fa4c in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #3: <unknown function> + 0x1d3ef4 (0x7f6681de1ef4 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
frame #4: _PyCFunction_FastCallDict + 0x35c (0x5674fc in /usr/bin/python)
frame #5: /usr/bin/python() [0x50abb3]
frame #6: _PyEval_EvalFrameDefault + 0x449 (0x50c5b9 in /usr/bin/python)
frame #7: /usr/bin/python() [0x508245]
```
Model I am using (Bert, XLNet ...): GPT-2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: [WikiText-2 dataset](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run [examples/run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) using this command:
```
python -m torch.distributed.launch --nproc_per_node 4 run_language_modeling.py --output_dir=./output/ --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=./data/wiki.train.raw --per_gpu_train_batch_size 2 --num_train_epochs 10 --fp16
```
## Expected behavior
Should run distributed training without any errors.
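For reference, a minimal sketch of the fix reported in the comments above (the added tokens are illustrative): after growing the tokenizer's vocabulary, the model's embedding matrix must be resized to match.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

tokenizer.add_special_tokens({"additional_special_tokens": ["token1", "token2"]})
# Without this resize, model and tokenizer disagree on the vocabulary size,
# which is what the comment thread identified as the trigger for the DDP error.
model.resize_token_embeddings(len(tokenizer))
```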
## Environment info
I'm using this Docker image:
```
docker pull deepspeed/deepspeed:latest
```
- `transformers` version: 2.6.0
- Platform: Linux
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: Yes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3533/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3532/comments | https://api.github.com/repos/huggingface/transformers/issues/3532/events | https://github.com/huggingface/transformers/pull/3532 | 590,327,634 | MDExOlB1bGxSZXF1ZXN0Mzk1NjgxNTk2 | 3,532 | Resizing embedding matrix before sending it to the optimizer. | {
"login": "ngarneau",
"id": 665101,
"node_id": "MDQ6VXNlcjY2NTEwMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/665101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngarneau",
"html_url": "https://github.com/ngarneau",
"followers_url": "https://api.github.com/users/ngarneau/followers",
"following_url": "https://api.github.com/users/ngarneau/following{/other_user}",
"gists_url": "https://api.github.com/users/ngarneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngarneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngarneau/subscriptions",
"organizations_url": "https://api.github.com/users/ngarneau/orgs",
"repos_url": "https://api.github.com/users/ngarneau/repos",
"events_url": "https://api.github.com/users/ngarneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngarneau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=h1) Report\n> Merging [#3532](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d38bbb225f7b847e8be4e969cb9b40e7e4d798a6&el=desc) will **decrease** coverage by `0.00%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3532 +/- ##\n==========================================\n- Coverage 77.81% 77.80% -0.01% \n==========================================\n Files 100 100 \n Lines 17062 17062 \n==========================================\n- Hits 13276 13275 -1 \n- Misses 3786 3787 +1 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=footer). Last update [d38bbb2...0475459](https://codecov.io/gh/huggingface/transformers/pull/3532?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Hi there,
This is a minor bug when fine-tuning a pre-trained language model with a larger vocabulary using the run_lm_finetuning.py script.
Nothing major but I spent a couple of hours trying to figure out why my token embeddings were not being updated with their corresponding gradient :)
This bug arises when you add new tokens to the tokenizer.
Since the resizing was done after passing the params to the optimizer, the wrong set of params for the embedding table was optimized.
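A minimal sketch of the ordering this PR enforces (assuming `model` and `tokenizer` from the fine-tuning script; `AdamW` stands in for the script's grouped-parameter optimizer):
```python
from transformers import AdamW

# Resize FIRST, so the optimizer captures references to the new, larger
# embedding tensor rather than the stale pre-resize one.
model.resize_token_embeddings(len(tokenizer))
optimizer = AdamW(model.parameters(), lr=5e-5)
```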
Cheers | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3532/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3532",
"html_url": "https://github.com/huggingface/transformers/pull/3532",
"diff_url": "https://github.com/huggingface/transformers/pull/3532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3532.patch",
"merged_at": 1585854005000
} |
https://api.github.com/repos/huggingface/transformers/issues/3531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3531/comments | https://api.github.com/repos/huggingface/transformers/issues/3531/events | https://github.com/huggingface/transformers/pull/3531 | 590,269,903 | MDExOlB1bGxSZXF1ZXN0Mzk1NjMzNDQw | 3,531 | [T5, docs] remove useless and confusing lm_labels line | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | Remove useless docstring | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3531/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3531",
"html_url": "https://github.com/huggingface/transformers/pull/3531",
"diff_url": "https://github.com/huggingface/transformers/pull/3531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3531.patch",
"merged_at": 1585661546000
} |
https://api.github.com/repos/huggingface/transformers/issues/3530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3530/comments | https://api.github.com/repos/huggingface/transformers/issues/3530/events | https://github.com/huggingface/transformers/issues/3530 | 590,261,325 | MDU6SXNzdWU1OTAyNjEzMjU= | 3,530 | TypeError: sequence item 0: expected str instance, NBProgressBar found | {
"login": "pascalhuszar",
"id": 45284935,
"node_id": "MDQ6VXNlcjQ1Mjg0OTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/45284935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pascalhuszar",
"html_url": "https://github.com/pascalhuszar",
"followers_url": "https://api.github.com/users/pascalhuszar/followers",
"following_url": "https://api.github.com/users/pascalhuszar/following{/other_user}",
"gists_url": "https://api.github.com/users/pascalhuszar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pascalhuszar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pascalhuszar/subscriptions",
"organizations_url": "https://api.github.com/users/pascalhuszar/orgs",
"repos_url": "https://api.github.com/users/pascalhuszar/repos",
"events_url": "https://api.github.com/users/pascalhuszar/events{/privacy}",
"received_events_url": "https://api.github.com/users/pascalhuszar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834054694,
"node_id": "MDU6TGFiZWwxODM0MDU0Njk0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/TensorFlow",
"name": "TensorFlow",
"color": "FF6F00",
"default": false,
"description": "Anything TensorFlow"
}
] | closed | false | null | [] | [
"Whats also strange, the PyTorch version list the labels as text, e.g: \"B-ORG, I-LOC, [...] in the Model Config Ouput but the TF-version list them like this bellow. Is this okay?\r\n\r\n```\r\n\"id2label\": {\r\n \"0\": \"LABEL_0\",\r\n \"1\": \"LABEL_1\",\r\n \"2\": \"LABEL_2\",\r\n \"3\": \"LABEL_3\",\r\n \"4\": \"LABEL_4\",\r\n \"5\": \"LABEL_5\",\r\n \"6\": \"LABEL_6\",\r\n \"7\": \"LABEL_7\",\r\n \"8\": \"LABEL_8\",\r\n \"9\": \"LABEL_9\",\r\n \"10\": \"LABEL_10\",\r\n \"11\": \"LABEL_11\",\r\n \"12\": \"LABEL_12\",\r\n \"13\": \"LABEL_13\",\r\n \"14\": \"LABEL_14\",\r\n \"15\": \"LABEL_15\",\r\n \"16\": \"LABEL_16\",\r\n \"17\": \"LABEL_17\",\r\n \"18\": \"LABEL_18\",\r\n \"19\": \"LABEL_19\",\r\n \"20\": \"LABEL_20\",\r\n \"21\": \"LABEL_21\",\r\n \"22\": \"LABEL_22\",\r\n \"23\": \"LABEL_23\",\r\n \"24\": \"LABEL_24\",\r\n \"25\": \"LABEL_25\"\r\n```",
"Any update or fix for this?",
"Got it working - the parent progress bar is not needed for evalute/predict as there is a single iteration. \r\n\r\nIn run_tf_ner.py I changed:\r\n\r\neval_iterator = progress_bar(eval_dataset, total=num_eval_steps,parent=master, display=args[\"n_device\"] > 1)\r\n\r\nto \r\n\r\neval_iterator = progress_bar(eval_dataset, total=num_eval_steps, display=args[\"n_device\"] > 1)\r\n\r\nand commented out\r\n\r\nmaster = master_bar(range(1))",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"same issue",
"no im just an idiot"
] | 1,585 | 1,702 | 1,592 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): bert-base-multilingual-cased
Language I am using the model on (English, Chinese ...): German
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: NER
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Google Colab Notebook
2. Run: `!python3 run_tf_ner.py --data_dir ./data --model_type bert --labels labels.txt --model_name_or_path bert-base-multilingual-cased --output_dir germeval-model --max_seq_length 128 --num_train_epochs 3 --per_device_train_batch_size 32 --save_steps 750 --seed 1 --do_train --do_eval --do_predict`
3. The run fails with:
```
I0330 12:15:45.104261 140536926144384 modeling_tf_utils.py:388] loading weights file germeval-model/tf_model.h5
I0330 12:15:46.316838 140536926144384 modeling_tf_utils.py:428] Layers of TFBertForTokenClassification not initialized from pretrained model: ['dropout_75']
I0330 12:15:46.317042 140536926144384 modeling_tf_utils.py:432] Layers from pretrained model not used in TFBertForTokenClassification: ['dropout_37']
I0330 12:15:46.317251 140536926144384 run_tf_ner.py:420] Loading features from cached file ./data/cached_dev_bert-base-multilingual-cased_128.tf_record
I0330 12:15:46.483210 140536926144384 run_tf_ner.py:318] ***** Running evaluation *****
I0330 12:15:46.483375 140536926144384 run_tf_ner.py:319] Num examples = 2200
I0330 12:15:46.483478 140536926144384 run_tf_ner.py:320] Batch size = 8
Traceback (most recent call last):
File "run_tf_ner.py", line 644, in <module>
app.run(main)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_tf_ner.py", line 579, in main
args, strategy, model, tokenizer, labels, pad_token_label_id, mode="dev"
File "run_tf_ner.py", line 322, in evaluate
for eval_features, eval_labels in eval_iterator:
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 39, in __iter__
if self.total != 0: self.update(0)
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 56, in update
self.update_bar(0)
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 76, in update_bar
else: self.on_update(val, f'{100 * val/self.total:.2f}% [{val}/{self.total} {elapsed_t}<{remaining_t}{end}]')
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 126, in on_update
elif self.parent is not None: self.parent.show()
File "/usr/local/lib/python3.6/dist-packages/fastprogress/fastprogress.py", line 167, in show
self.html_code = '\n'.join([getattr(self.inner_dict[n], 'progress', self.inner_dict[n]) for n in to_show])
TypeError: sequence item 0: expected str instance, NBProgressBar found
```
## Expected behavior
Evaluation of test.txt
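For reference, the working change reported in the comments above, as a patch sketch against `run_tf_ner.py` (all names come from that script, so this is not standalone code):
```python
# Evaluation/prediction make a single pass, so the parent master bar is not
# needed; dropping it avoids the NBProgressBar join error in fastprogress.
# master = master_bar(range(1))  # commented out
eval_iterator = progress_bar(eval_dataset, total=num_eval_steps, display=args["n_device"] > 1)
```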
## Environment info
- `transformers` version: 2.6.0
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): -
- Tensorflow version (GPU?): 2.2.0rc1
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: using example setup
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3530/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3529/comments | https://api.github.com/repos/huggingface/transformers/issues/3529/events | https://github.com/huggingface/transformers/pull/3529 | 590,237,723 | MDExOlB1bGxSZXF1ZXN0Mzk1NjA2NzU1 | 3,529 | [T5] fix lm labels in docstring | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=h1) Report\n> Merging [#3529](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/75ec6c9e3a7de6cc3e2920f3bb531e7c840b8ada&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3529 +/- ##\n==========================================\n+ Coverage 77.80% 77.81% +0.01% \n==========================================\n Files 100 100 \n Lines 17062 17062 \n==========================================\n+ Hits 13275 13277 +2 \n+ Misses 3787 3785 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.52% <ø> (ø)` | |\n| [src/transformers/modeling\\_tf\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `94.98% <ø> (ø)` | |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.94% <0.00%> (+0.13%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3529/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.32% <0.00%> (+0.17%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=footer). Last update [75ec6c9...8692264](https://codecov.io/gh/huggingface/transformers/pull/3529?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,585 | 1,585 | 1,585 | MEMBER | null | Add better explanation to T5 `lm_labels` docstring. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3529/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3529",
"html_url": "https://github.com/huggingface/transformers/pull/3529",
"diff_url": "https://github.com/huggingface/transformers/pull/3529.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3529.patch",
"merged_at": 1585571184000
} |
https://api.github.com/repos/huggingface/transformers/issues/3528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3528/comments | https://api.github.com/repos/huggingface/transformers/issues/3528/events | https://github.com/huggingface/transformers/issues/3528 | 590,206,547 | MDU6SXNzdWU1OTAyMDY1NDc= | 3,528 | Unexpected ZeroDivisionError when calling model.prune_heads | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Found the solution to this. When you try to prune all the attention heads in a layer, you will run into this error. This is why it sometimes shows up and sometimes does not, because your pruning function may or may not decide to prune all the attention heads in some layer depending on how you are computing the importance of each attention head. If you try the example on the Hugging Face Model page for the prune_heads function, {1: [0, 2], 2: [2, 3]}, it should work without any error (at least, you will not end up with the ZeroDivisionError. \r\n\r\nI was able to debug this by printing out what my original heads_to_prune dictionary looked like, and therefore noticed the edge case. With this hunch, testing it out on other cases confirmed the cause. In the future, printing out your inputs to the function that is returning the error is a good practice! Especially when the function is implemented by some entity like Hugging Face and the only thing that could probably go wrong is the input you give it.\r\n\r\nHope this helps!"
] | 1,585 | 1,623 | 1,591 | NONE | null | # 🐛 Bug
Traceback (most recent call last):
File "/Users/user/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/Users/user/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/train_bert_ml_mc.py", line 609, in <module>
masking_amount=args.masking_amount,
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/train_bert_ml_mc.py", line 288, in train_model
local_rank=transformer_args["local_rank"],
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/prune_attention_heads.py", line 488, in prune_model_and_return
metric=metric,
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/bert_text_classification/prune_attention_heads.py", line 396, in prune_heads
model.prune_heads(heads_to_prune)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_utils.py", line 234, in prune_heads
self.base_model._prune_heads(heads_to_prune)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_bert.py", line 635, in _prune_heads
self.encoder.layer[layer].attention.prune_heads(heads)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_bert.py", line 301, in prune_heads
self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/transformers/modeling_utils.py", line 824, in prune_linear_layer
new_layer = nn.Linear(new_size[1], new_size[0], bias=layer.bias is not None).to(layer.weight.device)
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 81, in __init__
self.reset_parameters()
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 84, in reset_parameters
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
File "/Users/user/Desktop/org/text_models_branches/section_extraction/text_models/env/lib/python3.7/site-packages/torch/nn/init.py", line 325, in kaiming_uniform_
std = gain / math.sqrt(fan)
ZeroDivisionError: float division by zero
The error is thrown when calling model.prune_heads(heads_to_prune). It does not occur on every run of the script, and I am not sure what is causing it.
## Information
Model I am using (Bert, XLNet ...): BERT
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below) - No
* [x] my own modified scripts: (give details below) - Yes
My script calls mask_heads and then prune_heads, similar to the original bertology script.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
A multi-class classification task on a proprietary dataset
## To reproduce
Steps to reproduce the behavior:
Still unknown; the error is intermittent, and I don't get it every time I run the script.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Calling model.prune_heads(heads_to_prune), where heads_to_prune is a Dict[int, List] mapping each layer number to the list of heads to prune (computed by calling the mask_heads function), should prune the listed heads in each layer.
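Per the resolution in the comments above, the crash occurs when every head in a layer is pruned. A hedged guard sketch (assuming `model` and a list-valued `heads_to_prune` as described):
```python
# Never prune ALL heads in a layer; that edge case is what triggers the
# ZeroDivisionError inside prune_linear_layer.
num_heads = model.config.num_attention_heads
safe_heads_to_prune = {
    layer: heads if len(heads) < num_heads else heads[: num_heads - 1]
    for layer, heads in heads_to_prune.items()
}
model.prune_heads(safe_heads_to_prune)
```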
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.2.2
- Platform: macOS Catalina 10.15
- Python version: Python 3.7.3
- PyTorch version (GPU?): 1.3.1 (No GPU)
- Tensorflow version (GPU?): 1.14.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3528/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3527/comments | https://api.github.com/repos/huggingface/transformers/issues/3527/events | https://github.com/huggingface/transformers/issues/3527 | 590,174,338 | MDU6SXNzdWU1OTAxNzQzMzg= | 3,527 | Bart.generate requires config.output_past=True | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"Bart is a encoder-decoder model. So it should be rather used as translating one sequence to another one. This means that the generation method expects `input_ids` and creates `decoder_input_ids`. \r\n\r\nMaybe you can take a look at this: https://sshleifer.github.io/blog_v2/jupyter/2020/03/12/bart.html",
"I think I might have found a potential issue with `BartForConditionalGeneration`. In zero-shot setup, the vanilla `bart-large` model produces gibberish, while the `bart-large-cnn` can generate fluent language. I think the problem is with the default setup on `output_past` attribute of `BartConfig`\r\n\r\nExample:\r\n```\r\nfrom transformers import AutoTokenizer, BartForConditionalGeneration\r\n\r\nmodel_name_or_path = 'bart-large'\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name_or_path)\r\nmodel = BartForConditionalGeneration(model_name_or_path)\r\n\r\ntext = \"Trump falsely denied that he claimed governors from certain states\"\r\ninput_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']\r\noutput = model.generate(input_ids=input_ids, max_length=50, num_beams=1)\r\nprint(tokenizer.decode(output[0]))\r\n```\r\nIf `model_name_or_path=\"bart-large\"`, the result will be `<s>Mr\\'<s>Mr\\'Mr\"Mr\"\"<s>Mr\"Mr\"\\'Mr\"<s>Mr\"<s>Mr\"<s>Mr\"<s>Mr\"Mr\"<s>Mr\"<s>Mr\\'Mr\"\\'Mr\"Mr\"\\'Mr\"Mr`.\r\n\r\nIf it is set to `bart-large-cnn`, the result will be `</s><s><s><s>Trump falsely denied that he claimed governors from certain states. Trump falsely denied he claimed that he had been in contact with governors from some states. He also falsely denied saying he had met with governors of certain states in the past. Trump`\r\n\r\nBut once I override the `output_past` flag in config, the result of `bart-large` will be normal:\r\n```\r\nconfig = BartConfig.from_pretrained('bart-large')\r\nconfig.output_past = True\r\nmodel = BartForConditionalGeneration(model_name_or_path, config=config)\r\n...\r\n```\r\nResult would be: `<s>MrThreatening to deport immigrants from certain states</s>`\r\n\r\nThis seems to be related to autoregressive decoding where the decoder states need to be cached. Not sure if this is intended so that `bart-large` is always used as a masked language model, correct me if I'm wrong.\r\n\r\n",
"Thanks Xinyu . I owe you a drink :)",
"> I think I might have found a potential issue with `BartForConditionalGeneration`. In zero-shot setup, the vanilla `bart-large` model produces gibberish, while the `bart-large-cnn` can generate fluent language. I think the problem is with the default setup on `output_past` attribute of `BartConfig`\r\n> \r\n> Example:\r\n> \r\n> ```\r\n> from transformers import AutoTokenizer, BartForConditionalGeneration\r\n> \r\n> model_name_or_path = 'bart-large'\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)\r\n> model = BartForConditionalGeneration(model_name_or_path)\r\n> \r\n> text = \"Trump falsely denied that he claimed governors from certain states\"\r\n> input_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']\r\n> output = model.generate(input_ids=input_ids, max_length=50, num_beams=1)\r\n> print(tokenizer.decode(output[0]))\r\n> ```\r\n> \r\n> If `model_name_or_path=\"bart-large\"`, the result will be `<s>Mr\\'<s>Mr\\'Mr\"Mr\"\"<s>Mr\"Mr\"\\'Mr\"<s>Mr\"<s>Mr\"<s>Mr\"<s>Mr\"Mr\"<s>Mr\"<s>Mr\\'Mr\"\\'Mr\"Mr\"\\'Mr\"Mr`.\r\n> \r\n> If it is set to `bart-large-cnn`, the result will be `</s><s><s><s>Trump falsely denied that he claimed governors from certain states. Trump falsely denied he claimed that he had been in contact with governors from some states. He also falsely denied saying he had met with governors of certain states in the past. Trump`\r\n> \r\n> But once I override the `output_past` flag in config, the result of `bart-large` will be normal:\r\n> \r\n> ```\r\n> config = BartConfig.from_pretrained('bart-large')\r\n> config.output_past = True\r\n> model = BartForConditionalGeneration(model_name_or_path, config=config)\r\n> ...\r\n> ```\r\n> \r\n> Result would be: `<s>MrThreatening to deport immigrants from certain states</s>`\r\n> \r\n> This seems to be related to autoregressive decoding where the decoder states need to be cached. Not sure if this is intended so that `bart-large` is always used as a masked language model, correct me if I'm wrong.\r\n\r\n@sshleifer - maybe you can answer this better than I can",
"@patrickvonplaten \r\n```\r\n\r\n>>> model = BartForConditionalGeneration(model_name_or_path, config=c)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nTypeError: __init__() got multiple values for argument 'config'\r\n```\r\n\r\nGetting this error. Also is there a way to force a generation to contain prefix tokens?\r\ni know fairseq has this feature",
"@tuhinjubcse\r\n- to pass a model name, you need to instantiate using `from_pretrained`. You can pass in configuration options as keyword arugments.\r\n\r\n```python\r\nBartForConditionalGeneration.from_pretrained(model_name, **c.__dict__)\r\n``` \r\n\r\n- for prefix tokens, see the `decoder_start_input_ids` kwarg to `generate`",
"@XinyuHua you are correct!",
"Idk the results look pretty bad to me @sshleifer \r\n\r\n```\r\nfrom transformers import AutoTokenizer, BartForConditionalGeneration ,BartConfig\r\nc = BartConfig.from_pretrained('bart-large')\r\nc.output_past = True\r\n\r\nmodel_name_or_path = 'bart-large'\r\ntokenizer = AutoTokenizer.from_pretrained(model_name_or_path)\r\nmodel = BartForConditionalGeneration.from_pretrained(model_name_or_path, config=c)\r\n\r\ntext = \"Milton scrunched his eyes and moodily turned back to his computer like a\"\r\ninput_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']\r\n\r\ninput_ids = tokenizer.batch_encode_plus([text], return_tensors='pt')['input_ids']\r\noutput = model.generate(input_ids=input_ids,do_sample=True,max_length=50,top_k=5,temperature=0.7)\r\nprint(tokenizer.decode(output[0]))\r\n```\r\n\r\nThe output I got is *MrMilton*",
"I'm not super surprised, since 'bart-large' is not finetuned on a generative task.",
"@sshleifer do you suggest using a different checkpoint or model\r\nThe reason I am asking is I am fine tuning on a novel dataset created for a task\r\nBut I need to have a baseline where I wanted to see how BART pretrained does , coz based on GPT2 it seems it does decently on generative tasks",
"I think it depends on the task, but I haven't tried using bart for the \"text continuation\" type workflow. CTRL, GPT2, T5 could work better.",
"@sshleifer Let me be a bit clear\r\nI wanted to do something like\r\n\r\ntext_input = “Milton scrunched his eyes and moodily turned back to his computer helpless”\r\ntext_output = “Milton scrunched his eyes and moodily turned back to his computer like a”\r\n\r\nI want my output to contain text_output as a prefix\r\n\r\nNormally when I was fine-tuning BART where I had paired data\r\n\r\nMilton scrunched his eyes and moodily turned back to his computer helpless----->Milton scrunched his eyes and moodily turned back to his computer like a despondent child \r\n\r\nThe generation result was\r\nMilton scrunched his eyes and moodily turned back to his computer like a child caught in the headlights \r\n\r\nI want to be able to get some results without fine-tuning and just using pretrained BART to compare. How do I do that?\r\n\r\n",
"The short answer is I don't know, we don't have that use case supported with Bart.\r\n\r\nFor now I am going to close this, but feel free to open a discussion issue about your task."
] | 1,585 | 1,586 | 1,586 | NONE | null | Is there a way to generate with pre-trained BART, like the approach in
https://huggingface.co/blog/how-to-generate
I am currently using BART for a generation task, but I am fine-tuning it.
I was wondering if it's possible to see generation results from pre-trained BART. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3527/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3526/comments | https://api.github.com/repos/huggingface/transformers/issues/3526/events | https://github.com/huggingface/transformers/issues/3526 | 590,120,644 | MDU6SXNzdWU1OTAxMjA2NDQ= | 3,526 | bug in run_glue.py | {
"login": "keloemma",
"id": 40454218,
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keloemma",
"html_url": "https://github.com/keloemma",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"repos_url": "https://api.github.com/users/keloemma/repos",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): FlauBERT
Language I am using the model on (English, Chinese ...): French
The problem arises when using:
* [x] the official example scripts: (give details below)
the script "run_glue" (I did not modified anything) I took it as it is
The tasks I am working on is:
* [x] an official FLUE task: CLS (classification task), fine-tuning the model
## To reproduce
Steps to reproduce the behavior:
1. Just run the script run_glue.py from a bash script, following the commands in the FlauBERT tutorial (https://github.com/getalp/Flaubert/tree/master/flue).
config='flue/examples/cls_books_lr5e6_hf_base_cased.cfg'
source $config
python ~/transformers/examples/run_flue.py \
--data_dir $data_dir \
--model_type flaubert \
--model_name_or_path $model_name_or_path \
--task_name $task_name \
--output_dir $output_dir \
--max_seq_length 512 \
--do_train \
--do_eval \
--learning_rate $lr \
--num_train_epochs $epochs \
--save_steps $save_steps \
--fp16 \
--fp16_opt_level O1 \
|& tee output.log
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
03/27/2020 11:54:48 - WARNING - main - Process rank: -1, device: cuda, n_gpu: 2, distributed training: False, 16-bits training: True
Traceback (most recent call last):
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 693, in
main()
File "/home/getalp/kelodjoe/transformers/examples/run_glue.py", line 613, in main
config_class, model_class, tokenizer_class = MODEL_CLASSES[args.model_type]
KeyError: 'flaubert'
```
Code around line 613:
```python
# Training
if args.do_train:
train_dataset = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=False)
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
logger.info(" global_step = %s, average loss = %s", global_step, tr_loss)
# Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained() #613
if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0):
# Create output directory if needed
if not os.path.exists(args.output_dir) and args.local_rank in [-1, 0]:
os.makedirs(args.output_dir)
logger.info("Saving model checkpoint to %s", args.output_dir)
# Save a trained model, configuration and tokenizer using `save_pretrained()`.
# They can then be reloaded using `from_pretrained()`
model_to_save = (
model.module if hasattr(model, "module") else model
) # Take care of distributed/parallel training
model_to_save.save_pretrained(args.output_dir)
tokenizer.save_pretrained(args.output_dir)
```
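The KeyError itself comes from the `MODEL_CLASSES` lookup shown in the traceback. A hedged sketch of one possible fix (assuming a transformers version that ships the FlauBERT classes; `MODEL_CLASSES` is the dict defined near the top of run_glue.py):
```python
from transformers import FlaubertConfig, FlaubertForSequenceClassification, FlaubertTokenizer

# Make `--model_type flaubert` resolvable by registering it in the script's
# model-type mapping.
MODEL_CLASSES["flaubert"] = (FlaubertConfig, FlaubertForSequenceClassification, FlaubertTokenizer)
```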
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I would have expected it to work and train the model.
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: latest master (I ran `git pull` before running the script; I'm not sure how to check the exact version)
- Platform: linux
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3526/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3525/comments | https://api.github.com/repos/huggingface/transformers/issues/3525/events | https://github.com/huggingface/transformers/issues/3525 | 590,063,584 | MDU6SXNzdWU1OTAwNjM1ODQ= | 3,525 | Issue loading custom tokenizer for fine-tuning gpt2 | {
"login": "modern-online",
"id": 53079544,
"node_id": "MDQ6VXNlcjUzMDc5NTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/53079544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/modern-online",
"html_url": "https://github.com/modern-online",
"followers_url": "https://api.github.com/users/modern-online/followers",
"following_url": "https://api.github.com/users/modern-online/following{/other_user}",
"gists_url": "https://api.github.com/users/modern-online/gists{/gist_id}",
"starred_url": "https://api.github.com/users/modern-online/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/modern-online/subscriptions",
"organizations_url": "https://api.github.com/users/modern-online/orgs",
"repos_url": "https://api.github.com/users/modern-online/repos",
"events_url": "https://api.github.com/users/modern-online/events{/privacy}",
"received_events_url": "https://api.github.com/users/modern-online/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Can you post a sample code?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,591 | 1,591 | NONE | null | I'm trying to fine-tune gpt2 with a custom tokenizer. It was working fine just over 10 days ago with --tokenizer_name=/path/to/vocab-and-merges-folder/, but now it fails to load, asking me to check whether the value is a correct model identifier or whether the directory contains a config.json file, as if it is now trying to load a model instead of a tokenizer. It also asked for an extra model identifier in my model's config file, which was not required before.
I suppose there was a library update? What would be the workaround? Thanks in advance.
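For reference, a minimal sketch of how I load the tokenizer directly from a local directory (the path below is a placeholder for my folder containing vocab.json and merges.txt):
```python
from transformers import GPT2Tokenizer

# Placeholder path: a local folder containing vocab.json and merges.txt
tokenizer = GPT2Tokenizer.from_pretrained("./my_tokenizer_dir")
print(tokenizer.tokenize("Hello world"))
```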
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3524/comments | https://api.github.com/repos/huggingface/transformers/issues/3524/events | https://github.com/huggingface/transformers/pull/3524 | 589,975,629 | MDExOlB1bGxSZXF1ZXN0Mzk1MzkwODcw | 3,524 | Add shoarora/electra and alectra model cards | {
"login": "shoarora",
"id": 16643856,
"node_id": "MDQ6VXNlcjE2NjQzODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/16643856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shoarora",
"html_url": "https://github.com/shoarora",
"followers_url": "https://api.github.com/users/shoarora/followers",
"following_url": "https://api.github.com/users/shoarora/following{/other_user}",
"gists_url": "https://api.github.com/users/shoarora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shoarora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shoarora/subscriptions",
"organizations_url": "https://api.github.com/users/shoarora/orgs",
"repos_url": "https://api.github.com/users/shoarora/repos",
"events_url": "https://api.github.com/users/shoarora/events{/privacy}",
"received_events_url": "https://api.github.com/users/shoarora/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1838412367,
"node_id": "MDU6TGFiZWwxODM4NDEyMzY3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/model%20card",
"name": "model card",
"color": "92d5f4",
"default": false,
"description": "Related to pretrained model cards"
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=h1) Report\n> Merging [#3524](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33ef7002e17fe42b276dc6d36c07a3c39b1f09ed&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3524 +/- ##\n==========================================\n- Coverage 77.80% 77.79% -0.02% \n==========================================\n Files 100 100 \n Lines 17051 17051 \n==========================================\n- Hits 13267 13265 -2 \n- Misses 3784 3786 +2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3524/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.15% <0.00%> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3524/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <0.00%> (-0.14%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=footer). Last update [33ef700...daff82d](https://codecov.io/gh/huggingface/transformers/pull/3524?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Model pages:\r\nhttps://huggingface.co/shoarora/alectra-small-owt\r\nhttps://huggingface.co/shoarora/electra-small-owt\r\n\r\nThanks for sharing @shoarora \r\n\r\nDid you see those models btw @LysandreJik?"
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | Add model cards for recently uploaded models:
- shoarora/electra-small-owt (BERT)
- shoarora/alectra-small-owt (ALBERT) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3524/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3524",
"html_url": "https://github.com/huggingface/transformers/pull/3524",
"diff_url": "https://github.com/huggingface/transformers/pull/3524.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3524.patch",
"merged_at": 1585655928000
} |
https://api.github.com/repos/huggingface/transformers/issues/3523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3523/comments | https://api.github.com/repos/huggingface/transformers/issues/3523/events | https://github.com/huggingface/transformers/issues/3523 | 589,955,282 | MDU6SXNzdWU1ODk5NTUyODI= | 3,523 | Why GPT2 train loss and topK accuracy both decrease? | {
"login": "lx-kika",
"id": 62126666,
"node_id": "MDQ6VXNlcjYyMTI2NjY2",
"avatar_url": "https://avatars.githubusercontent.com/u/62126666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lx-kika",
"html_url": "https://github.com/lx-kika",
"followers_url": "https://api.github.com/users/lx-kika/followers",
"following_url": "https://api.github.com/users/lx-kika/following{/other_user}",
"gists_url": "https://api.github.com/users/lx-kika/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lx-kika/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lx-kika/subscriptions",
"organizations_url": "https://api.github.com/users/lx-kika/orgs",
"repos_url": "https://api.github.com/users/lx-kika/repos",
"events_url": "https://api.github.com/users/lx-kika/events{/privacy}",
"received_events_url": "https://api.github.com/users/lx-kika/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found a mistake that in GPT2LMHeadModel, the label is shifted, however, I shift it again when preparing the batch."
] | 1,585 | 1,585 | 1,585 | NONE | null | # ❓ Questions & Help
## Details
Hi,
I am training a GPT2LMHeadModel from scratch. The training loss decreases; however, when I evaluate on the same training dataset, the top-3 accuracy decreases as well. Moreover, is it normal that the top-3 accuracy drops sharply, e.g. from 0.2 to 0.05, within only one or two epochs? It seems to be stable once training converges. Has anyone met the same problem?

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3523/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3522/comments | https://api.github.com/repos/huggingface/transformers/issues/3522/events | https://github.com/huggingface/transformers/issues/3522 | 589,936,865 | MDU6SXNzdWU1ODk5MzY4NjU= | 3,522 | why isn't AlbertForMultipleChoice in modeling_albert? | {
"login": "oashua",
"id": 58935410,
"node_id": "MDQ6VXNlcjU4OTM1NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/58935410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oashua",
"html_url": "https://github.com/oashua",
"followers_url": "https://api.github.com/users/oashua/followers",
"following_url": "https://api.github.com/users/oashua/following{/other_user}",
"gists_url": "https://api.github.com/users/oashua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oashua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oashua/subscriptions",
"organizations_url": "https://api.github.com/users/oashua/orgs",
"repos_url": "https://api.github.com/users/oashua/repos",
"events_url": "https://api.github.com/users/oashua/events{/privacy}",
"received_events_url": "https://api.github.com/users/oashua/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1834056761,
"node_id": "MDU6TGFiZWwxODM0MDU2NzYx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Modeling",
"name": "Core: Modeling",
"color": "FF8446",
"default": false,
"description": "Internals of the library; Models."
}
] | closed | false | null | [] | [
"I have come accross the same problem when I was testing RACE for Albert.\r\nYour implementation might be right because there are few differences betweent \"roberta\" and \"albert\" finetune heads.\r\nIf you are testing RACE like me, the real problem may lie in the lack of max_qa_length implementation in run_multiple_choice.py.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,595 | 1,595 | NONE | null | # ❓ Questions & Help
## Details
I just copied the code from RobertaForMultipleChoice into modeling_albert and changed every 'roberta' to 'albert', but the loss doesn't go down noticeably, and the result is even worse than the numbers reported in the papers.
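Roughly, the adaptation looks like this (a sketch from memory of the copied head, not the exact code that was run):
```python
import torch.nn as nn
from transformers.modeling_albert import AlbertModel, AlbertPreTrainedModel

class AlbertForMultipleChoice(AlbertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.albert = AlbertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, 1)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None, token_type_ids=None, labels=None):
        num_choices = input_ids.shape[1]
        # Flatten (batch, num_choices, seq_len) -> (batch * num_choices, seq_len)
        flat_ids = input_ids.view(-1, input_ids.size(-1))
        flat_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None
        flat_types = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None
        outputs = self.albert(flat_ids, attention_mask=flat_mask, token_type_ids=flat_types)
        pooled_output = outputs[1]
        # One logit per choice, reshaped back to (batch, num_choices)
        logits = self.classifier(self.dropout(pooled_output)).view(-1, num_choices)
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits, labels)
            return loss, logits
        return (logits,)
```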
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3522/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3521/comments | https://api.github.com/repos/huggingface/transformers/issues/3521/events | https://github.com/huggingface/transformers/pull/3521 | 589,913,840 | MDExOlB1bGxSZXF1ZXN0Mzk1MzQyMjcw | 3,521 | [T5] make decoder input ids optional for t5 training | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=h1) Report\n> Merging [#3521](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/33ef7002e17fe42b276dc6d36c07a3c39b1f09ed&el=desc) will **increase** coverage by `0.00%`.\n> The diff coverage is `95.23%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3521 +/- ##\n=======================================\n Coverage 77.80% 77.81% \n=======================================\n Files 100 100 \n Lines 17051 17069 +18 \n=======================================\n+ Hits 13267 13282 +15 \n- Misses 3784 3787 +3 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.15% <ø> (-0.18%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.81% <ø> (-0.14%)` | :arrow_down: |\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/3521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.79% <95.23%> (+0.50%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=footer). Last update [33ef700...168bed0](https://codecov.io/gh/huggingface/transformers/pull/3521?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@patrickvonplaten Hi Patrick! Could you tell me what is the difference between decoder_input_ids and lm_labels for T5ForConditionalGeneration? For context: I am using T5ForConditionalGeneration for paraphrase generation. I am checking this code:https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-/blob/master/t5-pretrained-question-paraphraser.ipynb He uses lm_labels with decoder_attention_mask. Thanks in advance!",
"@mengyahuUSTC-PU . When calling the [forward()](https://github.com/ramsrigouthamg/Paraphrase-any-question-with-T5-Text-To-Text-Transfer-Transformer-/blob/9b26db2336d6077cc9d95bc28f123d32298aaf94/train.py#L66) , decoder_input_ids is None as follows:\r\n```\r\n outputs = self(\r\n input_ids=batch[\"source_ids\"],\r\n attention_mask=batch[\"source_mask\"],\r\n lm_labels=lm_labels,\r\n decoder_attention_mask=batch['target_mask']\r\n )\r\n```\r\n\r\ndecode_input_ids is derived from lm_labels if decode_input_ids is None. [decode_input_ids=](https://github.com/huggingface/transformers/blob/1aec991643a6fec0e7d504626fc68347fe93b658/src/transformers/modeling_t5.py#L1156)\r\n\r\nI was wondering in what case I need to feed decode_input_ids.\r\n\r\n"
] | 1,585 | 1,596 | 1,585 | MEMBER | null | - [x] Make `decoder_input_ids` optional when supplying `lm_labels` for `T5ForConditionalGeneration`
- [x] Add test
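A minimal usage sketch of the new behavior (checkpoint name and sentences are only illustrative):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer.encode("translate English to German: The house is wonderful.", return_tensors="pt")
lm_labels = tokenizer.encode("Das Haus ist wunderbar.", return_tensors="pt")

# decoder_input_ids are now derived from lm_labels inside the model,
# so they no longer need to be passed explicitly.
loss = model(input_ids=input_ids, lm_labels=lm_labels)[0]
```
| {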
"url": "https://api.github.com/repos/huggingface/transformers/issues/3521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3521",
"html_url": "https://github.com/huggingface/transformers/pull/3521",
"diff_url": "https://github.com/huggingface/transformers/pull/3521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3521.patch",
"merged_at": 1585568727000
} |
https://api.github.com/repos/huggingface/transformers/issues/3520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3520/comments | https://api.github.com/repos/huggingface/transformers/issues/3520/events | https://github.com/huggingface/transformers/pull/3520 | 589,905,365 | MDExOlB1bGxSZXF1ZXN0Mzk1MzM1NzU2 | 3,520 | WIP: haiku bert implementation | {
"login": "madisonmay",
"id": 2645393,
"node_id": "MDQ6VXNlcjI2NDUzOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2645393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madisonmay",
"html_url": "https://github.com/madisonmay",
"followers_url": "https://api.github.com/users/madisonmay/followers",
"following_url": "https://api.github.com/users/madisonmay/following{/other_user}",
"gists_url": "https://api.github.com/users/madisonmay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/madisonmay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/madisonmay/subscriptions",
"organizations_url": "https://api.github.com/users/madisonmay/orgs",
"repos_url": "https://api.github.com/users/madisonmay/repos",
"events_url": "https://api.github.com/users/madisonmay/events{/privacy}",
"received_events_url": "https://api.github.com/users/madisonmay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
},
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
},
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,585 | 1,595 | 1,595 | NONE | null | Still a work in progress, but the contextual embeddings line up with the PyTorch version, so this is roughly at parity with jax-bert.
TODO (mostly notes to myself):
- [x] Add `save_pretrained`
- [ ] Make `from_pretrained` work with names
- [ ] Add dropout at training time, pass through training flag
- [ ] Make sure weight initializations line up when pre-trained state isn't passed
- [ ] Gradually work towards parity with the pytorch version if desired? (target models, BERT variants, etc.)
- [ ] Write HaikuPretrainedModel to take advantage of archive resolution / make saving + loading compatible with pytorch bins?
To use the pre-trained weights cleanly I ended up subclassing `hk.Module`; I'm unsure how I feel about this decision, but I couldn't think of a better method at the time. Feel free to suggest an alternative if you have ideas.
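Roughly the pattern I mean (a toy sketch, not the actual BERT code in this PR):
```python
import haiku as hk
import jax.numpy as jnp

class PretrainedLinear(hk.Module):
    """A module whose parameter is initialized from a pretrained array
    passed at construction time; used inside hk.transform as usual."""

    def __init__(self, pretrained_w, name=None):
        super().__init__(name=name)
        self._pretrained_w = jnp.asarray(pretrained_w)

    def __call__(self, x):
        # hk.get_parameter's init callable receives (shape, dtype)
        w = hk.get_parameter(
            "w",
            shape=self._pretrained_w.shape,
            init=lambda shape, dtype: self._pretrained_w.astype(dtype),
        )
        return x @ w
```
| {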
"url": "https://api.github.com/repos/huggingface/transformers/issues/3520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3520/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3520",
"html_url": "https://github.com/huggingface/transformers/pull/3520",
"diff_url": "https://github.com/huggingface/transformers/pull/3520.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3520.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3519/comments | https://api.github.com/repos/huggingface/transformers/issues/3519/events | https://github.com/huggingface/transformers/pull/3519 | 589,871,846 | MDExOlB1bGxSZXF1ZXN0Mzk1MzEwNTgz | 3,519 | Resizing embedding matrix before sending it to the optimizer. | {
"login": "ngarneau",
"id": 665101,
"node_id": "MDQ6VXNlcjY2NTEwMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/665101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngarneau",
"html_url": "https://github.com/ngarneau",
"followers_url": "https://api.github.com/users/ngarneau/followers",
"following_url": "https://api.github.com/users/ngarneau/following{/other_user}",
"gists_url": "https://api.github.com/users/ngarneau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngarneau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngarneau/subscriptions",
"organizations_url": "https://api.github.com/users/ngarneau/orgs",
"repos_url": "https://api.github.com/users/ngarneau/repos",
"events_url": "https://api.github.com/users/ngarneau/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngarneau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Since I just swapped some lines, I guess this code quality check got activated after this file came up in the repo..! 😬",
"Hi Nicolas,\r\nYou have to install the code style tools and run `make style` and `make quality` on your PR.\r\nCheck the contributing guide for the details: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests",
"Shoot I'm sorry, totally forgot to check the contributing guidelines. Let me fix this real quick."
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | This bug arises when you add new token types to the Tokenizer.
Since the resizing was done **after** passing the params to the optimizer, the wrong set of parameters for the embedding table was optimized.
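A minimal sketch of the correct ordering (model, tokenizer, and hyperparameters are placeholders):
```python
from transformers import AdamW, GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_tokens(["<new_token>"])  # hypothetical new token

model = GPT2LMHeadModel.from_pretrained("gpt2")
# Resize BEFORE building the optimizer, so the optimizer tracks the new,
# larger embedding matrix rather than the stale one.
model.resize_token_embeddings(len(tokenizer))
optimizer = AdamW(model.parameters(), lr=5e-5)
```
| {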
"url": "https://api.github.com/repos/huggingface/transformers/issues/3519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3519/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3519",
"html_url": "https://github.com/huggingface/transformers/pull/3519",
"diff_url": "https://github.com/huggingface/transformers/pull/3519.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3519.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/3518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3518/comments | https://api.github.com/repos/huggingface/transformers/issues/3518/events | https://github.com/huggingface/transformers/issues/3518 | 589,840,657 | MDU6SXNzdWU1ODk4NDA2NTc= | 3,518 | Argument “never_split” not working on bert tokenizer | {
"login": "acmilannesta",
"id": 47703762,
"node_id": "MDQ6VXNlcjQ3NzAzNzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/47703762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/acmilannesta",
"html_url": "https://github.com/acmilannesta",
"followers_url": "https://api.github.com/users/acmilannesta/followers",
"following_url": "https://api.github.com/users/acmilannesta/following{/other_user}",
"gists_url": "https://api.github.com/users/acmilannesta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/acmilannesta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/acmilannesta/subscriptions",
"organizations_url": "https://api.github.com/users/acmilannesta/orgs",
"repos_url": "https://api.github.com/users/acmilannesta/repos",
"events_url": "https://api.github.com/users/acmilannesta/events{/privacy}",
"received_events_url": "https://api.github.com/users/acmilannesta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834056635,
"node_id": "MDU6TGFiZWwxODM0MDU2NjM1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Tokenization",
"name": "Core: Tokenization",
"color": "FF4446",
"default": false,
"description": "Internals of the library; Tokenization."
}
] | closed | false | null | [] | [
"Does this problem arise with the fast tokenizer too? Can you try both:\r\n\r\n```python\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=True)\r\n# and\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=False)\r\n```",
"> Does this problem arise with the fast tokenizer too? Can you try both:\r\n> \r\n> ```python\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=True)\r\n> # and\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'], use_fast=False)\r\n> ```\r\n\r\nTried. Neither works...\r\nHowever, if I tried to load customzied vocab which replace \"[unused]\" toakens with the ones I don't want to split. The tokenizer works.\r\n\r\nBut the default vocab only allows around 1k new tokens. If I add more, the embedding size will change. But the TF models will raise implementation error if I called this:\r\n```\r\nbert = TFBertModel.from_pretrained('bert-base-uncased')\r\nbert.resize_token_embeddings(36000)\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This problem still exists where the \"never_split\"\r\n\r\n```python\r\nfrom transformers import BertTokenizer\r\nUSE_FAST = False\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-cased\",\r\n use_fast=USE_FAST,\r\n never_split=['lol'],\r\n do_basic_tokenize=True)\r\nprint(tokenizer.tokenize(\" lol That's funny\"))\r\n```\r\nI started going through the code, but it's a bit of a rabbit hole since you have \"never_split\" as an init argument of the basic tokenizer as well as the pretrained tokenizer, but also as part of the `tokenize` method. It isn't clear to me exactly where it is used.\r\n\r\nPerhaps some doc changes are needed as well, since it is typed as boolean:\r\n\r\nhttps://github.com/huggingface/transformers/blob/35df91148545e09cd199d89e707043eba5434f59/src/transformers/tokenization_bert.py#L133\r\n\r\ncc @n1t0 @mfuntowicz ",
"I just tested the last example provided by @BramVanroy and it seems to work after #4723. Do not hesitate to reopen if needed!"
] | 1,585 | 1,591 | 1,591 | NONE | null | I used the ```never_split``` option and tried to retain some tokens, but the tokenizer still divides them into wordpieces.
```
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', never_split=['lol'])
tokenizer.tokenize("lol That's funny")
# actual output: ['lo', '##l', 'that', "'", 's', 'funny']  ('lol' is still split)
```
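As a possible workaround (not a fix for `never_split` itself), explicitly adding the word to the vocabulary keeps it intact, at the cost of having to resize the model's embeddings afterwards:
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.add_tokens(['lol'])
tokenizer.tokenize("lol That's funny")
# ['lol', 'that', "'", 's', 'funny']
# remember: model.resize_token_embeddings(len(tokenizer)) after adding tokens
```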
**A link to original question on Stack Overflow**:
https://stackoverflow.com/posts/60914793/edit | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3518/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3517/comments | https://api.github.com/repos/huggingface/transformers/issues/3517/events | https://github.com/huggingface/transformers/pull/3517 | 589,834,441 | MDExOlB1bGxSZXF1ZXN0Mzk1MjgyODg3 | 3,517 | [Tokenization] fix edge case for bert tokenization | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@mfuntowicz @n1t0 @LysandreJik - could you check? :-) ",
"Does this mean that `batch_encode_plus` is supposed to handle \"pre-tokenized\" inputs? I thought this was something introduced by https://github.com/huggingface/transformers/pull/3185 with a specific flag `is_pretokenized` (cc @mfuntowicz)",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=h1) Report\n> Merging [#3517](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5aa8a278a3f13b8f83a0deb9b6d743f159cea23c&el=desc) will **increase** coverage by `0.01%`.\n> The diff coverage is `100.00%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #3517 +/- ##\n==========================================\n+ Coverage 78.03% 78.05% +0.01% \n==========================================\n Files 104 104 \n Lines 17708 17709 +1 \n==========================================\n+ Hits 13819 13822 +3 \n+ Misses 3889 3887 -2 \n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3517/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.78% <100.00%> (+0.01%)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3517/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `92.23% <0.00%> (+0.12%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3517/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.28% <0.00%> (+0.16%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=footer). Last update [5aa8a27...3bde162](https://codecov.io/gh/huggingface/transformers/pull/3517?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"> Does this mean that `batch_encode_plus` is supposed to handle \"pre-tokenized\" inputs? I thought this was something introduced by #3185 with a specific flag `is_pretokenized` (cc @mfuntowicz)\r\n\r\n@mfuntowicz showed me the `is_pretokenized` flag for tokenizers v3.0.0 so this makes everything much easier"
] | 1,585 | 1,586 | 1,586 | MEMBER | null | This PR fixes #3502.
The tests in #3502 fail because of an edge case:
if the input to `tokenizer.batch_encode_plus()` is a tokenized string that results in a list of exactly two strings (``[[16], [.]]`` in issue #3502), then it is treated as a pair of input sequences (=> [CLS] input_sequence_1 [SEP] input_sequence_2 [SEP]), but this behavior should only happen if the input list consists of two **untokenized** strings.
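An illustrative sketch of the ambiguity (tokens are made up; the encoding shown is the pre-fix behavior):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = [["16", "."]]  # one pre-tokenized sequence of exactly two tokens

# Before this fix, the inner list was treated like the pair ("16", "."),
# i.e. encoded as [CLS] 16 [SEP] . [SEP] instead of [CLS] 16 . [SEP].
print(tokenizer.batch_encode_plus(batch))
```
| {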
"url": "https://api.github.com/repos/huggingface/transformers/issues/3517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3517/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3517",
"html_url": "https://github.com/huggingface/transformers/pull/3517",
"diff_url": "https://github.com/huggingface/transformers/pull/3517.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3517.patch",
"merged_at": 1586291191000
} |
https://api.github.com/repos/huggingface/transformers/issues/3516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3516/comments | https://api.github.com/repos/huggingface/transformers/issues/3516/events | https://github.com/huggingface/transformers/pull/3516 | 589,808,368 | MDExOlB1bGxSZXF1ZXN0Mzk1MjYzNzc0 | 3,516 | [Docs] examples/summarization/bart: Simplify CNN/DM preprocessing steps | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Merging to unblock patrick. "
] | 1,585 | 1,585 | 1,585 | CONTRIBUTOR | null | I added the preprocessed data in S3.
Evidence that it is the correct size:
 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3516/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3516",
"html_url": "https://github.com/huggingface/transformers/pull/3516",
"diff_url": "https://github.com/huggingface/transformers/pull/3516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3516.patch",
"merged_at": 1585502742000
} |
https://api.github.com/repos/huggingface/transformers/issues/3515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3515/comments | https://api.github.com/repos/huggingface/transformers/issues/3515/events | https://github.com/huggingface/transformers/issues/3515 | 589,795,657 | MDU6SXNzdWU1ODk3OTU2NTc= | 3,515 | Isort installed from github branch does not correspond to circle ci isort | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looking at the CI config, it should install the correct version, though. Not sure what I am missing here.\r\n\r\nhttps://github.com/huggingface/transformers/blob/e5c393dcebf42eaec9c1e1d619b5a7788a2d7c65/.circleci/config.yml#L89",
"Hi @BramVanroy, \r\n\r\nThanks for your answer :-) \r\nI get the feeling that it's somehow related to my computer.\r\n\r\nRunning `pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort` in my terminal installs the same isort version / the exact same code (isort-4.3.21)\r\n\r\nas if running \r\n\r\n`pip install isort`\r\n\r\nWhen I `pip uninstalled isort` and reinstalled it with `pip install git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort` the library was not the same anymore. Any ideas what could be happening here on my computer? I tried deleting the pip cache, but no success yet :-/. ",
"Ok for some reason, upgrading python3.6 to python3.7 solved the problem for me "
] | 1,585 | 1,585 | 1,585 | MEMBER | null | # 🐛 Bug
## Information
Installing isort via:
`$ pip install -U git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort`
no longer installs the same isort version as CircleCI uses, so `make style` formatting makes the CircleCI code quality check fail.
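A quick sanity check (an illustration, not a fix) of which isort pip actually installed:
```python
import pkg_resources

# Should report the revision pinned in .circleci/config.yml
print(pkg_resources.get_distribution("isort").version)
```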
- `transformers` version: 2.6.0
- Platform: Linux-5.3.0-42-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cpu (False)
- Tensorflow version (GPU?): 2.1.0 (False)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3515/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/3514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3514/comments | https://api.github.com/repos/huggingface/transformers/issues/3514/events | https://github.com/huggingface/transformers/pull/3514 | 589,788,520 | MDExOlB1bGxSZXF1ZXN0Mzk1MjQ4OTU1 | 3,514 | [Examples] Clean summarization and translation example testing files for T5 and Bart | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,585 | 1,585 | 1,585 | MEMBER | null | - Only create temporary files instead of "real" files, so that each `test_file` in `examples/summarization` and `examples/translations` writes unique testing output files that cannot be overwritten by other test files.
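A minimal sketch of the pattern (the function under test is hypothetical):
```python
import tempfile

# Each test writes to its own unique temporary file instead of a fixed path,
# so repeated or parallel test runs cannot clobber each other's outputs.
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
    output_path = f.name
run_generate(output_path)  # hypothetical function under test
```
| {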
"url": "https://api.github.com/repos/huggingface/transformers/issues/3514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3514/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/3514",
"html_url": "https://github.com/huggingface/transformers/pull/3514",
"diff_url": "https://github.com/huggingface/transformers/pull/3514.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/3514.patch",
"merged_at": 1585670053000
} |
https://api.github.com/repos/huggingface/transformers/issues/3513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/3513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/3513/comments | https://api.github.com/repos/huggingface/transformers/issues/3513/events | https://github.com/huggingface/transformers/issues/3513 | 589,781,536 | MDU6SXNzdWU1ODk3ODE1MzY= | 3,513 | Adding mbart-large-cc25 | {
"login": "MaksymDel",
"id": 8141935,
"node_id": "MDQ6VXNlcjgxNDE5MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8141935?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaksymDel",
"html_url": "https://github.com/MaksymDel",
"followers_url": "https://api.github.com/users/MaksymDel/followers",
"following_url": "https://api.github.com/users/MaksymDel/following{/other_user}",
"gists_url": "https://api.github.com/users/MaksymDel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaksymDel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaksymDel/subscriptions",
"organizations_url": "https://api.github.com/users/MaksymDel/orgs",
"repos_url": "https://api.github.com/users/MaksymDel/repos",
"events_url": "https://api.github.com/users/MaksymDel/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaksymDel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
},
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
},
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
},
{
"id": 2009457320,
"node_id": "MDU6TGFiZWwyMDA5NDU3MzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/translation",
"name": "translation",
"color": "b2d2f4",
"default": false,
"description": "machine translation utilities and models"
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"This is a Work in progress but still a few weeks out :)",
"Hi @sshleifer , additional (perhaps bug, or document bug) related to this issue: \r\n\r\nThis model page suggests that we can load mBart-cc25 : \r\nhttps://huggingface.co/facebook/mbart-large-cc25\r\n\r\nHowever, using the instructed command with the newest HuggingFace 2.8.0 : \r\n`model = AutoModel.from_pretrained(\"facebook/mbart-large-cc25\")`\r\n\r\nis failed :\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-4-c034f52e2196> in <module>\r\n 11 '''\r\n 12 \r\n---> 13 model = AutoModel.from_pretrained(\"facebook/mbart-large-cc25\")\r\n 14 tokenizer = AutoTokenizer.from_pretrained(\"facebook/mbart-large-cc25\")\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 421 for config_class, model_class in MODEL_MAPPING.items():\r\n 422 if isinstance(config, config_class):\r\n--> 423 return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)\r\n 424 raise ValueError(\r\n 425 \"Unrecognized configuration class {} for this kind of AutoModel: {}.\\n\"\r\n\r\n/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 625 except Exception:\r\n 626 raise OSError(\r\n--> 627 \"Unable to load weights from pytorch checkpoint file. \"\r\n 628 \"If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. \"\r\n 629 )\r\n\r\nOSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. \r\n```",
"Yes, the docs are wrong/aspirational at the moment. Will fix today!",
"Fixed the docs. That model is currently not supported, but it's on my roadmap to add it in the coming weeks.",
"sshleifer, wonder if the mbart-large-cc25 have been added? We are looking to use mbart for a multilingual text classification problem. Thanks for the great work.\r\nPatrick",
"Hopefully this weekend!",
"What languages are you trying to support?\r\nWe have 1,000+ models in the `MarianMTModel` family, 11 of which are multi-lingual.\r\n",
"We are blocked for the moment on https://github.com/pytorch/fairseq/issues/2258, \r\nif anybody has any ideas how to fix that it would be much appreciated!\r\n\r\n"
] | 1,585 | 1,594 | 1,594 | CONTRIBUTOR | null | # 🌟 New model addition
Multilingual BART model implemented in fairseq, introduced by FAIR.
## Model description
This issue is to request adding the mBART model that exists as part of the fairseq lib.
[Link to the fairseq description of the model](https://github.com/pytorch/fairseq/tree/master/examples/mbart)
[Link to the mBART paper](https://arxiv.org/abs/2001.08210)
Multilingually pretrained BART checkpoint.
The model code follows the original BART model code, which is already part of the ```transformers``` repo. However, it introduces a couple more features, such as multilingual denoising and translation from a pretrained BART.
## Open source status
- [x] _the model implementation is available: (give details)_
[Link to the PR adding mBART to the fairseq](https://github.com/pytorch/fairseq/commit/5e79322b3a4a9e9a11525377d3dda7ac520b921c)
This PR shows the main pieces that were added to fairseq to make mBART work on top of the BART code already in the codebase. However, a few additional mBART commits were added afterward.
- [x] _the model weights are available: (give details)_
[Link to the weights](https://github.com/pytorch/fairseq/tree/master/examples/mbart#pre-trained-models)
- [x] _who are the authors: (mention them, if possible by @gh-username)_
Facebook AI Research (@MultiPath) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/3513/reactions",
"total_count": 10,
"+1": 10,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/3513/timeline | completed | null | null |