url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64) | author_association (string) | active_lock_reason (string) | body (string) | reactions (dict) | timeline_url (string) | state_reason (string) | draft (bool) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/1810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1810/comments | https://api.github.com/repos/huggingface/transformers/issues/1810/events | https://github.com/huggingface/transformers/issues/1810 | 521,782,106 | MDU6SXNzdWU1MjE3ODIxMDY= | 1,810 | NameError: name 'DUMMY_INPUTS' is not defined - From TF to PyTorch | {
"login": "RubensZimbres",
"id": 20270054,
"node_id": "MDQ6VXNlcjIwMjcwMDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/20270054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RubensZimbres",
"html_url": "https://github.com/RubensZimbres",
"followers_url": "https://api.github.com/users/RubensZimbres/followers",
"following_url": "https://api.github.com/users/RubensZimbres/following{/other_user}",
"gists_url": "https://api.github.com/users/RubensZimbres/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RubensZimbres/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RubensZimbres/subscriptions",
"organizations_url": "https://api.github.com/users/RubensZimbres/orgs",
"repos_url": "https://api.github.com/users/RubensZimbres/repos",
"events_url": "https://api.github.com/users/RubensZimbres/events{/privacy}",
"received_events_url": "https://api.github.com/users/RubensZimbres/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I believe this is a bug that was fixed on master. Could you try and install from source and tell me if it fixes your issue?\r\n\r\nYou can do so with the following command in your python environment:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"@LysandreJik Thanks for the hint. I used a workaround. I installed `transformers` using:\r\n\r\n```\r\nconda install -c conda-forge transformers\r\n```\r\n\r\nIn Python 3.7.4, then I added `DUMMY_INPUTS = [[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]` after variable `logger` in `/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py`, because it was missing.\r\n\r\nIt was interesting, because in GCP, transformers were not showing in `python3` , only in `sudo python3`",
"@LysandreJik I updated the install from source but ran into the same issue. The \"solution\" was again as @RubensZimbres mentions: adding `DUMMY_INPUTS = [[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]` after variable `logger`, however I needed it changed in `/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py`.\r\n",
"I created a Pull Request at: https://github.com/huggingface/transformers/pull/1847",
"Can't reproduce on master and the latest release (2.2.1).\r\nFeel free to reopen if the issue is still there.\r\n"
] | 1,573 | 1,575 | 1,575 | NONE | null | ## 🐛 Bug
I'm using TFBertForSequenceClassification, TensorFlow 2.0.0b0, an up-to-date PyTorch, and the code from the Hugging Face README.md:
```
import tensorflow as tf
import tensorflow_datasets
from transformers import *
tf.compat.v1.enable_eager_execution()
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')
data = tensorflow_datasets.load('glue/mrpc')
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
model.save_pretrained('/home/rubens_gmail_com/tf')
```
The model trains properly, but after the weights are saved as `.h5` and I try to load them into `BertForSequenceClassification.from_pretrained`, the following error shows up:
```
pytorch_model = BertForSequenceClassification.from_pretrained('/home/rubens_gmail_com/tf',from_tf=True)
NameError Traceback (most recent call last)
<ipython-input-25-7a878059a298> in <module>
1 pytorch_model = BertForSequenceClassification.from_pretrained('/home/rubens_gmail_com/tf',
----> 2 from_tf=True)
~/.local/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
357 try:
358 from transformers import load_tf2_checkpoint_in_pytorch_model
--> 359 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)
360 except ImportError as e:
361 logger.error("Loading a TensorFlow model in PyTorch, requires both PyTorch and TensorFlow to be installed. Please see "
~/.local/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys)
199
200 if tf_inputs is None:
--> 201 tf_inputs = tf.constant(DUMMY_INPUTS)
202
203 if tf_inputs is not None:
NameError: name 'DUMMY_INPUTS' is not defined
```
If I run:
```
pytorch_model = BertForSequenceClassification.from_pretrained('/home/rubens_gmail_com/tf/tf_model.h5',from_tf=True)
```
I get:
```
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-35-4146e5f25140> in <module>
----> 1 pytorch_model = BertForSequenceClassification.from_pretrained('/home/rubens_gmail_com/tf/tf_model.h5',from_tf=True)
~/.local/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
285 cache_dir=cache_dir, return_unused_kwargs=True,
286 force_download=force_download,
--> 287 **kwargs
288 )
289 else:
~/.local/lib/python3.7/site-packages/transformers/configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
152
153 # Load config
--> 154 config = cls.from_json_file(resolved_config_file)
155
156 if hasattr(config, 'pruned_heads'):
~/.local/lib/python3.7/site-packages/transformers/configuration_utils.py in from_json_file(cls, json_file)
184 """Constructs a `BertConfig` from a json file of parameters."""
185 with open(json_file, "r", encoding='utf-8') as reader:
--> 186 text = reader.read()
187 return cls.from_dict(json.loads(text))
188
/opt/anaconda3/lib/python3.7/codecs.py in decode(self, input, final)
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
```
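On the second error specifically, here is a minimal sketch of the directory-based load path (assumption: `save_pretrained` wrote `config.json` next to `tf_model.h5`, so the config is not read out of the `.h5` binary itself; the output path is hypothetical):
```
# Hedged sketch: pass the directory, not the .h5 file, so config.json is picked up
# instead of the HDF5 binary being parsed as JSON (the UnicodeDecodeError above).
from transformers import BertForSequenceClassification

pytorch_model = BertForSequenceClassification.from_pretrained(
    '/home/rubens_gmail_com/tf',  # directory containing config.json and tf_model.h5
    from_tf=True,
)
pytorch_model.save_pretrained('/home/rubens_gmail_com/pt')  # hypothetical output directory
```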
I'm using Python 3.7.4, 8 x V100 GPUs, and an Anaconda environment on Debian on GCP. Any ideas on how to overcome this issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1810/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1810/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1809/comments | https://api.github.com/repos/huggingface/transformers/issues/1809/events | https://github.com/huggingface/transformers/issues/1809 | 521,750,456 | MDU6SXNzdWU1MjE3NTA0NTY= | 1,809 | Why do language modeling heads not have activation functions? | {
"login": "langfield",
"id": 35980963,
"node_id": "MDQ6VXNlcjM1OTgwOTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35980963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langfield",
"html_url": "https://github.com/langfield",
"followers_url": "https://api.github.com/users/langfield/followers",
"following_url": "https://api.github.com/users/langfield/following{/other_user}",
"gists_url": "https://api.github.com/users/langfield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langfield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langfield/subscriptions",
"organizations_url": "https://api.github.com/users/langfield/orgs",
"repos_url": "https://api.github.com/users/langfield/repos",
"events_url": "https://api.github.com/users/langfield/events{/privacy}",
"received_events_url": "https://api.github.com/users/langfield/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, some Transformers have activation in their heads, for instance, Bert.\r\nSee here: https://github.com/huggingface/transformers/blob/master/transformers/modeling_bert.py#L421\r\n\r\nThis is most likely a design choice with a minor effect for deep transformers as they learn to generate the current or next token along several of the output layers.\r\nSee the nice blog post and paper of Lena Voita for some intuition on this: https://lena-voita.github.io/posts/emnlp19_evolution.html",
"Thank you! That link was very helpful."
] | 1,573 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
This is more of a question about the transformer architecture in general than anything else. I noticed that, in `modeling_openai.py`, for example, the `self.lm_head()` module is just a linear layer. Why is it sufficient to use a linear transformation here for the language modeling task? Would there be any advantages/disadvantages to throwing a `gelu` or something on the end and then calling the output of that the logits?
```python
class OpenAIGPTLMHeadModel(OpenAIGPTPreTrainedModel):
    r"""
        **labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
            Labels for language modeling.
            Note that the labels **are shifted** inside the model, i.e. you can set ``labels = input_ids``
            Indices are selected in ``[-1, 0, ..., config.vocab_size]``
            All labels set to ``-1`` are ignored (masked), the loss is only
            computed for labels in ``[0, ..., config.vocab_size]``

    Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
        **loss**: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
            Language modeling loss.
        **prediction_scores**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, config.vocab_size)``
            Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
        **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
            list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
            of shape ``(batch_size, sequence_length, hidden_size)``:
            Hidden-states of the model at the output of each layer plus the initial embedding outputs.
        **attentions**: (`optional`, returned when ``config.output_attentions=True``)
            list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
            Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

    Examples::

        tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
        model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
        input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
        outputs = model(input_ids, labels=input_ids)
        loss, logits = outputs[:2]
    """
    def __init__(self, config):
        super(OpenAIGPTLMHeadModel, self).__init__(config)
        self.transformer = OpenAIGPTModel(config)
        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)

        self.init_weights()

    def get_output_embeddings(self):
        return self.lm_head

    def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None,
                labels=None):
        transformer_outputs = self.transformer(input_ids,
                                               attention_mask=attention_mask,
                                               token_type_ids=token_type_ids,
                                               position_ids=position_ids,
                                               head_mask=head_mask,
                                               inputs_embeds=inputs_embeds)
        hidden_states = transformer_outputs[0]
        lm_logits = self.lm_head(hidden_states)

        outputs = (lm_logits,) + transformer_outputs[1:]
        if labels is not None:
            # Shift so that tokens < n predict n
            shift_logits = lm_logits[..., :-1, :].contiguous()
            shift_labels = labels[..., 1:].contiguous()
            # Flatten the tokens
            loss_fct = CrossEntropyLoss(ignore_index=-1)
            loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)),
                            shift_labels.view(-1))
            outputs = (loss,) + outputs

        return outputs  # (loss), lm_logits, (all hidden states), (all attentions)
```
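For contrast, here is a rough sketch (purely illustrative, loosely modeled on BERT-style prediction heads rather than code from this repo; all names are made up) of what a head with a nonlinearity before the vocabulary projection could look like:
```python
import math
import torch
import torch.nn as nn

class LMHeadWithActivation(nn.Module):
    """Illustrative only: transform + gelu + layer norm before the vocab projection."""
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.decoder = nn.Linear(hidden_size, vocab_size, bias=False)

    @staticmethod
    def gelu(x):
        return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

    def forward(self, hidden_states):
        hidden_states = self.gelu(self.dense(hidden_states))
        hidden_states = self.layer_norm(hidden_states)
        return self.decoder(hidden_states)  # logits over the vocabulary
```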
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1809/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1808/comments | https://api.github.com/repos/huggingface/transformers/issues/1808/events | https://github.com/huggingface/transformers/issues/1808 | 521,734,279 | MDU6SXNzdWU1MjE3MzQyNzk= | 1,808 | XLMForSequenceClassification - help with zero-shot cross-lingual classification | {
"login": "rsilveira79",
"id": 11993881,
"node_id": "MDQ6VXNlcjExOTkzODgx",
"avatar_url": "https://avatars.githubusercontent.com/u/11993881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsilveira79",
"html_url": "https://github.com/rsilveira79",
"followers_url": "https://api.github.com/users/rsilveira79/followers",
"following_url": "https://api.github.com/users/rsilveira79/following{/other_user}",
"gists_url": "https://api.github.com/users/rsilveira79/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rsilveira79/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsilveira79/subscriptions",
"organizations_url": "https://api.github.com/users/rsilveira79/orgs",
"repos_url": "https://api.github.com/users/rsilveira79/repos",
"events_url": "https://api.github.com/users/rsilveira79/events{/privacy}",
"received_events_url": "https://api.github.com/users/rsilveira79/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, we've added some details on the multi-modal models here: https://huggingface.co/transformers/multilingual.html\r\nAnd an XNLI example here: https://github.com/huggingface/transformers/tree/master/examples#xnli"
] | 1,573 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
Hi guys
According to the XLM description (https://github.com/facebookresearch/XLM?fbclid=IwAR0-ZJpmWmIVfR20fA2KCHgrUU3k0cMUyx2n_V9-9C8g857-nhavrfBnVSI#pretrained-cross-lingual-language-models), we could potentially do XNLI by training on an `en` dataset and doing inference in another language:
```
XLMs can be used to build cross-lingual classifiers. After fine-tuning an XLM model
on an English training corpus for instance (e.g. of sentiment analysis, natural language
inference), the model is still able to make accurate predictions at test time in other
languages, for which there is very little or no training data. This approach is usually
referred to as "zero-shot cross-lingual classification".
```
I'm doing some tests w/ a dataset here at work, and I'm able to train the model with *XLMForSequenceClassification*; when I test it in `en`, performance looks great!
However, when I try to pass another language and do some inference, performance in that other language (the so-called zero-shot XNLI setting) is poor.
Here are the configurations I'm using:
```python
config = XLMConfig.from_pretrained('xlm-mlm-tlm-xnli15-1024')
config.num_labels = len(list(label_to_ix.values()))
config.n_layers = 6
tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-tlm-xnli15-1024')
model = XLMForSequenceClassification(config)
```
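One thing worth double-checking (a hedged observation, not a confirmed cause): constructing the model directly from the config gives randomly initialized transformer weights, whereas `from_pretrained` loads the cross-lingual checkpoint. A minimal sketch (the label count is illustrative):
```python
# Hedged sketch: load the pretrained XLM body instead of building from config alone.
from transformers import XLMForSequenceClassification, XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-tlm-xnli15-1024')
model = XLMForSequenceClassification.from_pretrained(
    'xlm-mlm-tlm-xnli15-1024',
    num_labels=5,  # illustrative number of classes
)
```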
During training, I call the forward pass with a `langs` vector filled with the `en` language id:
```python
langs = torch.full(ids.shape, fill_value = language_id, dtype = torch.long).cuda()
output = model.forward(ids.cuda(), token_type_ids=tokens.cuda(), langs=langs, head_mask=None)[0]
```
For inference, I'm now passing other languages, using the following function:
```python
def get_reply(msg, language = 'en'):
    features = prepare_features(msg, zero_pad = False)
    language_id = tokenizer.lang2id[language]
    ids = torch.tensor(features['input_ids']).unsqueeze(0).cuda()
    langs = torch.full(ids.shape, fill_value = language_id, dtype = torch.long).cuda()
    tokens = torch.tensor(features['token_type_ids']).unsqueeze(0).cuda()
    output = model.forward(ids, token_type_ids=tokens, langs=langs)[0]
    _, predicted = torch.max(output.data, 1)
    return list(label_to_ix.keys())[predicted]
```
BTW, my `prepare_features` function is quite simple: it just uses the `encode_plus` method and then zero-pads to a given sentence length:
```python
def prepare_features(seq_1, zero_pad = False, max_seq_length = 120):
    enc_text = tokenizer.encode_plus(seq_1, add_special_tokens=True, max_length=300)
    if zero_pad:
        while len(enc_text['input_ids']) < max_seq_length:
            enc_text['input_ids'].append(0)
            enc_text['token_type_ids'].append(0)
    return enc_text
```
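For comparison, here is a minimal sketch (illustrative helper, not my actual code, and not claimed to be the fix) of building the padded ids together with an explicit attention mask and a `langs` tensor of matching shape, which can then be passed as `attention_mask=` and `langs=` to the model:
```python
import torch

def encode_batch(texts, tokenizer, language='en', max_seq_length=120):
    """Illustrative helper: pad ids, build an attention mask, and tile the language id."""
    language_id = tokenizer.lang2id[language]
    input_ids, attention_mask = [], []
    for text in texts:
        ids = tokenizer.encode(text, add_special_tokens=True)[:max_seq_length]
        mask = [1] * len(ids)
        padding = [0] * (max_seq_length - len(ids))
        input_ids.append(ids + padding)
        attention_mask.append(mask + padding)
    input_ids = torch.tensor(input_ids)
    attention_mask = torch.tensor(attention_mask)
    langs = torch.full_like(input_ids, fill_value=language_id)
    return input_ids, attention_mask, langs
```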
I've noticed in the file (https://github.com/huggingface/transformers/blob/master/transformers/modeling_xlm.py) that the forward pass of `XLMModel` adds the `lang` vector as a sort of offset to the embeddings:
```python
tensor = tensor + self.lang_embeddings(langs)
```
I'm wondering if I'm passing the `lang` vector in the right format. I chose the model with the *tlm* objective on purpose because I was guessing that it benefits from XLM's translation language model. Have you guys already experimented with zero-shot text classification? Any clue why my inference in other languages is not working?
Any insight will be greatly appreciated!
Cheers,
Roberto | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1808/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1807/comments | https://api.github.com/repos/huggingface/transformers/issues/1807/events | https://github.com/huggingface/transformers/issues/1807 | 521,619,218 | MDU6SXNzdWU1MjE2MTkyMTg= | 1,807 | Whether it belongs to the bug of class trainedtokenizer decode? | {
"login": "yuanxiaosc",
"id": 16183570,
"node_id": "MDQ6VXNlcjE2MTgzNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16183570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanxiaosc",
"html_url": "https://github.com/yuanxiaosc",
"followers_url": "https://api.github.com/users/yuanxiaosc/followers",
"following_url": "https://api.github.com/users/yuanxiaosc/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanxiaosc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanxiaosc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanxiaosc/subscriptions",
"organizations_url": "https://api.github.com/users/yuanxiaosc/orgs",
"repos_url": "https://api.github.com/users/yuanxiaosc/repos",
"events_url": "https://api.github.com/users/yuanxiaosc/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanxiaosc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're right, this is an error. The PR #1811 aims to fix that issue!",
"It should be fixed now, thanks! Feel free to re-open if the error persists."
] | 1,573 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->


| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1807/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1806/comments | https://api.github.com/repos/huggingface/transformers/issues/1806/events | https://github.com/huggingface/transformers/issues/1806 | 521,590,490 | MDU6SXNzdWU1MjE1OTA0OTA= | 1,806 | Extracting First Hidden States | {
"login": "brytjy",
"id": 46053996,
"node_id": "MDQ6VXNlcjQ2MDUzOTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/46053996?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brytjy",
"html_url": "https://github.com/brytjy",
"followers_url": "https://api.github.com/users/brytjy/followers",
"following_url": "https://api.github.com/users/brytjy/following{/other_user}",
"gists_url": "https://api.github.com/users/brytjy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brytjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brytjy/subscriptions",
"organizations_url": "https://api.github.com/users/brytjy/orgs",
"repos_url": "https://api.github.com/users/brytjy/repos",
"events_url": "https://api.github.com/users/brytjy/events{/privacy}",
"received_events_url": "https://api.github.com/users/brytjy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,573 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
For example, right now in order to extract the first hidden states of the DistilBert model:
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased', output_hidden_states=True)
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
outputs = model(input_ids)
first_hidden_states = outputs[1][0] # The first hidden-state
However, I can only extract the first layer hidden states after running through the entire model; all 7 layers (which is unnecessary if I only want the first).
Is there a way I could use the model to only extract the hidden states of the first layer and stop there? I am looking to further optimize the inference time of DistilBert.
Thank you :)
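A rough sketch of one way to stop after the first block (assumption: the private submodule layout of `modeling_distilbert.py`, i.e. `embeddings` and `transformer.layer`, which is not a stable public API and may change between versions):
```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
model.eval()

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)
attention_mask = torch.ones_like(input_ids)

with torch.no_grad():
    hidden = model.embeddings(input_ids)        # embedding output
    first_block = model.transformer.layer[0]    # first transformer block only
    first_layer_output = first_block(hidden, attention_mask)[-1]
```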
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1806/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1806/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1805/comments | https://api.github.com/repos/huggingface/transformers/issues/1805/events | https://github.com/huggingface/transformers/issues/1805 | 521,574,385 | MDU6SXNzdWU1MjE1NzQzODU= | 1,805 | RuntimeError: CUDA error: device-side assert triggered | {
"login": "cswangjiawei",
"id": 33107884,
"node_id": "MDQ6VXNlcjMzMTA3ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/33107884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cswangjiawei",
"html_url": "https://github.com/cswangjiawei",
"followers_url": "https://api.github.com/users/cswangjiawei/followers",
"following_url": "https://api.github.com/users/cswangjiawei/following{/other_user}",
"gists_url": "https://api.github.com/users/cswangjiawei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cswangjiawei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cswangjiawei/subscriptions",
"organizations_url": "https://api.github.com/users/cswangjiawei/orgs",
"repos_url": "https://api.github.com/users/cswangjiawei/repos",
"events_url": "https://api.github.com/users/cswangjiawei/events{/privacy}",
"received_events_url": "https://api.github.com/users/cswangjiawei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are you in a multi-GPU setup ?",
"I did not use multi-GPU setup, I used to `model.cuda()`, it ocuurs \"RuntimeError: CUDA error: device-side assert triggered\", so I changed to `model.cuda(0)`, but the error still occurs.",
"Do you mind showing how you initialize BERT and the code surrounding the error?",
"My project structure is as follows:\r\n\r\nAnd `dataset.py` is as follows:\r\n```\r\n# -*- coding: utf-8 -*-\r\n\"\"\"\r\n@time: 2019/8/6 19:47\r\n@author: wangjiawei\r\n\"\"\"\r\n\r\n\r\nfrom torch.utils.data import Dataset\r\nimport torch\r\nimport csv\r\n\r\n\r\nclass MyDataset(Dataset):\r\n\r\n def __init__(self, data_path, tokenizer):\r\n super(MyDataset, self).__init__()\r\n\r\n self.tokenizer = tokenizer\r\n texts, labels = [], []\r\n with open(data_path, 'r', encoding='utf-8') as csv_file:\r\n reader = csv.reader(csv_file, quotechar='\"')\r\n for idx, line in enumerate(reader):\r\n text = \"\"\r\n for tx in line[1:]:\r\n text += tx\r\n text += \" \"\r\n text = self.tokenizer.tokenize(text)\r\n if len(text) > 512:\r\n text = text[:512]\r\n text = self.tokenizer.encode(text, add_special_tokens=True)\r\n text_id = torch.tensor(text)\r\n texts.append(text_id)\r\n label = int(line[0]) - 1\r\n labels.append(label)\r\n\r\n self.texts = texts\r\n self.labels = labels\r\n self.num_class = len(set(self.labels))\r\n\r\n def __len__(self):\r\n return len(self.labels)\r\n\r\n def __getitem__(self, item):\r\n label = self.labels[item]\r\n text = self.texts[item]\r\n\r\n return {'text': text, 'label': torch.tensor(label).long()}\r\n```\r\n\r\n`main.py` is as follows:\r\n```\r\n# -*- coding: utf-8 -*-\r\n\"\"\"\r\n@time: 2019/7/17 20:37\r\n@author: wangjiawei\r\n\"\"\"\r\n\r\nimport os\r\nimport torch\r\nimport torch.nn as nn\r\nfrom torch.utils.data import DataLoader, random_split\r\nfrom utils import get_default_folder, my_collate_fn\r\nfrom dataset import MyDataset\r\nfrom model import TextClassify\r\nfrom torch.utils.tensorboard import SummaryWriter\r\nimport argparse\r\nimport shutil\r\nimport numpy as np\r\nimport random\r\nimport time\r\nfrom train import train_model, evaluate\r\nfrom transformers import BertModel, BertTokenizer\r\n\r\nseed_num = 123\r\nrandom.seed(seed_num)\r\ntorch.manual_seed(seed_num)\r\nnp.random.seed(seed_num)\r\n\r\nif __name__ == \"__main__\":\r\n parser = argparse.ArgumentParser(\"self attention for Text Categorization\")\r\n parser.add_argument(\"--batch_size\", type=int, default=64)\r\n parser.add_argument(\"--num_epoches\", type=int, default=20)\r\n parser.add_argument(\"--lr\", type=float, default=0.001)\r\n parser.add_argument(\"--kernel_size\", type=int, default=3)\r\n parser.add_argument(\"--word_dim\", type=int, default=768)\r\n parser.add_argument(\"--out_dim\", type=int, default=300)\r\n parser.add_argument(\"--dropout\", default=0.5)\r\n parser.add_argument(\"--es_patience\", type=int, default=3)\r\n parser.add_argument(\"--dataset\", type=str,\r\n choices=[\"agnews\", \"dbpedia\", \"yelp_review\", \"yelp_review_polarity\", \"amazon_review\",\r\n \"amazon_polarity\", \"sogou_news\", \"yahoo_answers\"], default=\"yelp_review\")\r\n parser.add_argument(\"--log_path\", type=str, default=\"tensorboard/classify\")\r\n\r\n args = parser.parse_args()\r\n\r\n input, output = get_default_folder(args.dataset)\r\n train_path = input + os.sep + \"train.csv\"\r\n\r\n if not os.path.exists(output):\r\n os.makedirs(output)\r\n\r\n # with open(input + os.sep + args.vocab_file, 'rb') as f1:\r\n # vocab = pickle.load(f1)\r\n\r\n # emb_begin = time.time()\r\n # pretrain_word_embedding = build_pretrain_embedding(args.embedding_path, vocab, args.word_dim)\r\n # emb_end = time.time()\r\n # emb_min = (emb_end - emb_begin) % 3600 // 60\r\n # print('build pretrain embed cost {}m'.format(emb_min))\r\n\r\n model_class, tokenizer_class, pretrained_weights = BertModel, 
BertTokenizer, 'bert-base-uncased'\r\n tokenizer = tokenizer_class.from_pretrained(pretrained_weights)\r\n bert = model_class.from_pretrained(pretrained_weights)\r\n\r\n\r\n train_dev_dataset = MyDataset(input + os.sep + \"train.csv\", tokenizer)\r\n len_train_dev_dataset = len(train_dev_dataset)\r\n dev_size = int(len_train_dev_dataset * 0.1)\r\n train_size = len_train_dev_dataset - dev_size\r\n train_dataset, dev_dataset = random_split(train_dev_dataset, [train_size, dev_size])\r\n test_dataset = MyDataset(input + os.sep + \"test.csv\", tokenizer)\r\n\r\n train_dataloader = DataLoader(train_dataset, batch_size=args.batch_size, shuffle=True, collate_fn=my_collate_fn)\r\n dev_dataloader = DataLoader(dev_dataset, batch_size=args.batch_size, shuffle=False, collate_fn=my_collate_fn)\r\n test_dataloader = DataLoader(test_dataset, batch_size=args.batch_size, shuffle=False, collate_fn=my_collate_fn)\r\n\r\n model = TextClassify(bert, args.kernel_size, args.word_dim, args.out_dim, test_dataset.num_class, args.dropout)\r\n\r\n log_path = \"{}_{}\".format(args.log_path, args.dataset)\r\n if os.path.isdir(log_path):\r\n shutil.rmtree(log_path)\r\n os.makedirs(log_path)\r\n writer = SummaryWriter(log_path)\r\n\r\n if torch.cuda.is_available():\r\n model.cuda(0)\r\n\r\n criterion = nn.CrossEntropyLoss()\r\n optimizer = torch.optim.Adam(model.parameters(), lr=args.lr)\r\n best_acc = -1\r\n early_stop = 0\r\n model.train()\r\n batch_num = -1\r\n\r\n train_begin = time.time()\r\n\r\n for epoch in range(args.num_epoches):\r\n epoch_begin = time.time()\r\n print('train {}/{} epoch'.format(epoch + 1, args.num_epoches))\r\n batch_num = train_model(model, optimizer, criterion, batch_num, train_dataloader, writer)\r\n dev_acc = evaluate(model, dev_dataloader)\r\n writer.add_scalar('dev/Accuracy', dev_acc, epoch)\r\n print('dev_acc:', dev_acc)\r\n if dev_acc > best_acc:\r\n early_stop = 0\r\n best_acc = dev_acc\r\n print('new best_acc on dev set:', best_acc)\r\n torch.save(model.state_dict(), \"{}{}\".format(output, 'best.pt'))\r\n else:\r\n early_stop += 1\r\n\r\n epoch_end = time.time()\r\n cost_time = epoch_end - epoch_begin\r\n print('train {}th epoch cost {}m {}s'.format(epoch + 1, int(cost_time / 60), int(cost_time % 60)))\r\n print()\r\n\r\n # Early stopping\r\n if early_stop > args.es_patience:\r\n print(\r\n \"Stop training at epoch {}. 
The best dev_acc achieved is {}\".format(epoch - args.es_patience, best_acc))\r\n break\r\n\r\n train_end = time.time()\r\n train_cost = train_end - train_begin\r\n hour = int(train_cost / 3600)\r\n min = int((train_cost % 3600) / 60)\r\n second = int(train_cost % 3600 % 60)\r\n print()\r\n print()\r\n print('train end', '-' * 50)\r\n print('train total cost {}h {}m {}s'.format(hour, min, second))\r\n print('-' * 50)\r\n\r\n model_name = \"{}{}\".format(output, 'best.pt')\r\n\r\n model.load_state_dict(torch.load(model_name))\r\n test_acc = evaluate(model, test_dataloader)\r\n print('test acc on test set:', test_acc)\r\n```\r\n\r\n`model.py` is as follows:\r\n```\r\n# -*- coding: utf-8 -*-\r\n\"\"\"\r\n@time: 2019/8/6 19:04\r\n@author: wangjiawei\r\n\"\"\"\r\n\r\nimport torch.nn as nn\r\nimport torch\r\nimport torch.nn.functional as F\r\n\r\n\r\nclass TextClassify(nn.Module):\r\n def __init__(self, bert, kernel_size, word_dim, out_dim, num_class, dropout=0.5):\r\n super(TextClassify, self).__init__()\r\n self.bert = bert\r\n self.cnn = nn.Sequential(nn.Conv1d(word_dim, out_dim, kernel_size=kernel_size, padding=1), nn.ReLU(inplace=True))\r\n self.drop = nn.Dropout(dropout, inplace=True)\r\n self.fc = nn.Linear(out_dim, num_class)\r\n\r\n def forward(self, word_input):\r\n batch_size = word_input.size(0)\r\n represent = self.bert(word_input)[0]\r\n represent = self.drop(represent)\r\n represent = represent.transpose(1, 2)\r\n contexts = self.cnn(represent)\r\n feature = F.max_pool1d(contexts, contexts.size(2)).contiguous().view(batch_size, -1)\r\n feature = self.drop(feature)\r\n feature = self.fc(feature)\r\n return feature\r\n```\r\n\r\n`train.py` is as follows:\r\n```\r\n# -*- coding: utf-8 -*-\r\n\"\"\"\r\n@time: 2019/8/6 20:14\r\n@author: wangjiawei\r\n\"\"\"\r\n\r\n\r\nimport torch\r\n\r\n\r\ndef train_model(model, optimizer, criterion, batch_num, dataloader, writer):\r\n model.train()\r\n\r\n for batch in dataloader:\r\n model.zero_grad()\r\n batch_num += 1\r\n text = batch['text']\r\n label = batch['label']\r\n if torch.cuda.is_available():\r\n text = text.cuda(0)\r\n label = label.cuda(0)\r\n\r\n feature = model(text)\r\n loss = criterion(feature, label)\r\n writer.add_scalar('Train/Loss', loss, batch_num)\r\n loss.backward()\r\n optimizer.step()\r\n\r\n return batch_num\r\n\r\n\r\ndef evaluate(model, dataloader):\r\n model.eval()\r\n correct_num = 0\r\n total_num = 0\r\n\r\n for batch in dataloader:\r\n text = batch['text']\r\n label = batch['label']\r\n if torch.cuda.is_available():\r\n text = text.cuda(0)\r\n label = label.cuda(0)\r\n\r\n with torch.no_grad():\r\n predictions = model(text)\r\n _, preds = torch.max(predictions, 1)\r\n correct_num += torch.sum((preds == label)).item()\r\n total_num += len(label)\r\n\r\n acc = (correct_num / total_num) * 100\r\n return acc\r\n```\r\n\r\n`utils.py` is as follows:\r\n```\r\n# -*- coding: utf-8 -*-\r\n\"\"\"\r\n@time: 2019/8/6 19:54\r\n@author: wangjiawei\r\n\"\"\"\r\n\r\n\r\nimport csv\r\nimport nltk\r\nimport numpy as np\r\nfrom torch.nn.utils.rnn import pad_sequence\r\nimport torch\r\n\r\n\r\ndef load_pretrain_emb(embedding_path, embedd_dim):\r\n embedd_dict = dict()\r\n with open(embedding_path, 'r', encoding=\"utf8\") as file:\r\n for line in file:\r\n line = line.strip()\r\n if len(line) == 0:\r\n continue\r\n tokens = line.split()\r\n if not embedd_dim + 1 == len(tokens):\r\n continue\r\n embedd = np.empty([1, embedd_dim])\r\n embedd[:] = tokens[1:]\r\n first_col = tokens[0]\r\n embedd_dict[first_col] = embedd\r\n return 
embedd_dict\r\n\r\n\r\ndef build_pretrain_embedding(embedding_path, vocab, embedd_dim=300):\r\n embedd_dict = dict()\r\n if embedding_path is not None:\r\n embedd_dict = load_pretrain_emb(embedding_path, embedd_dim)\r\n alphabet_size = vocab.size()\r\n scale = 0.1\r\n pretrain_emb = np.empty([vocab.size(), embedd_dim])\r\n perfect_match = 0\r\n case_match = 0\r\n not_match = 0\r\n for word, index in vocab.items():\r\n if word in embedd_dict:\r\n pretrain_emb[index, :] = embedd_dict[word]\r\n perfect_match += 1\r\n elif word.lower() in embedd_dict:\r\n pretrain_emb[index, :] = embedd_dict[word.lower()]\r\n case_match += 1\r\n else:\r\n pretrain_emb[index, :] = np.random.uniform(-scale, scale, [1, embedd_dim])\r\n not_match += 1\r\n\r\n pretrain_emb[0, :] = np.zeros((1, embedd_dim))\r\n pretrained_size = len(embedd_dict)\r\n print('pretrained_size:', pretrained_size)\r\n print(\"Embedding:\\n pretrain word:%s, prefect match:%s, case_match:%s, oov:%s, oov%%:%s\" % (\r\n pretrained_size, perfect_match, case_match, not_match, (not_match + 0.) / alphabet_size))\r\n return pretrain_emb\r\n\r\n\r\ndef get_default_folder(dataset):\r\n if dataset == \"agnews\":\r\n input = \"data/ag_news_csv\"\r\n output = \"output/ag_news/\"\r\n elif dataset == \"dbpedia\":\r\n input = \"data/dbpedia_csv\"\r\n output = \"output/dbpedia/\"\r\n elif dataset == \"yelp_review\":\r\n input = \"data/yelp_review_full_csv\"\r\n output = \"output/yelp_review_full/\"\r\n elif dataset == \"yelp_review_polarity\":\r\n input = \"data/yelp_review_polarity_csv\"\r\n output = \"output/yelp_review_polarity/\"\r\n elif dataset == \"amazon_review\":\r\n input = \"data/amazon_review_full_csv\"\r\n output = \"output/amazon_review_full/\"\r\n elif dataset == \"amazon_polarity\":\r\n input = \"data/amazon_review_polarity_csv\"\r\n output = \"output/amazon_review_polarity/\"\r\n elif dataset == \"sogou_news\":\r\n input = \"data/sogou_news_csv\"\r\n output = \"output/sogou_news/\"\r\n elif dataset == \"yahoo_answers\":\r\n input = \"data/yahoo_answers_csv\"\r\n output = \"output/yahoo_answers/\"\r\n return input, output\r\n\r\n\r\ndef my_collate(batch_tensor, key):\r\n if key == 'text':\r\n batch_tensor = pad_sequence(batch_tensor, batch_first=True, padding_value=0)\r\n else:\r\n batch_tensor = torch.stack(batch_tensor)\r\n return batch_tensor\r\n\r\n\r\ndef my_collate_fn(batch):\r\n return {key: my_collate([d[key] for d in batch], key) for key in batch[0]}\r\n\r\n\r\nclass Vocabulary(object):\r\n def __init__(self, filename):\r\n self._id_to_word = []\r\n self._word_to_id = {}\r\n self._pad = -1\r\n self._unk = -1\r\n self.index = 0\r\n\r\n self._id_to_word.append('<PAD>')\r\n self._word_to_id['<PAD>'] = self.index\r\n self._pad = self.index\r\n self.index += 1\r\n self._id_to_word.append('<UNK>')\r\n self._word_to_id['<UNK>'] = self.index\r\n self._unk = self.index\r\n self.index += 1\r\n\r\n word_num = dict()\r\n\r\n with open(filename, 'r', encoding='utf-8') as f1:\r\n reader = csv.reader(f1, quotechar='\"')\r\n for line in reader:\r\n text = \"\"\r\n for tx in line[1:]:\r\n text += tx\r\n text += \" \"\r\n\r\n text = nltk.word_tokenize(text)\r\n for word in text:\r\n if word not in word_num:\r\n word_num[word] = 0\r\n word_num[word] += 1\r\n\r\n for word, num in word_num.items():\r\n if num >= 3:\r\n self._id_to_word.append(word)\r\n self._word_to_id[word] = self.index\r\n self.index += 1\r\n\r\n def unk(self):\r\n return self._unk\r\n\r\n def pad(self):\r\n return self._pad\r\n\r\n def size(self):\r\n return 
len(self._id_to_word)\r\n\r\n def word_to_id(self, word):\r\n if word in self._word_to_id:\r\n return self._word_to_id[word]\r\n elif word.lower() in self._word_to_id:\r\n return self._word_to_id[word.lower()]\r\n return self.unk()\r\n\r\n def id_to_word(self, cur_id):\r\n return self._id_to_word[cur_id]\r\n\r\n def items(self):\r\n return self._word_to_id.items()\r\n```\r\n",
"when I use Google Cloud, it works. Before I used Google Colab. It is strange.",
"It's probably because your token embeddings size (vocab size) doesn't match with pre-trained model. Do `model.resize_token_embeddings(len(tokenizer))` before training. Please check #1848 and #1849 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I also faced the same issue while encoding 20_newspaper dataset, After further investigation, I found there are some sentences which are not in the English language, for example :\r\n\r\n`'subject romanbmp pa of from pwisemansalmonusdedu cliff replyto pwisemansalmonusdedu cliff distribution usa organization university of south dakota lines maxaxaxaxaxaxaxaxaxaxaxaxaxaxax maxaxaxax39f8z51 wwizbhjbhjbhjbhjgiz mgizgizbhjgizm 1tei0lv9f9f9fra 5z46q0475vo4 mu34u34u m34w 084oo aug 0y5180 mc p8v5555555555965hwgv 7uqgy gp452gvbdigiz maxaxaxaxaxaxaxaxaxaxaxaxaxaxax maxaxax34t2php2app12a34u34tm11 6um6 mum8 zbjf0kvlvc9vde5e9e5g9vg9v38vc o3o n5 mardi 24y2 g92li6e0q8axaxaxaxaxaxaxax maxaxaxaxaxaxaxas9ne1whjn 1tei4pmf9l3u3 mr hjpm75u4u34u34u nfyn 46uo m5ug 0y4518hr8y3m15556tdy65c8u 47y m7 hsxgjeuaxaxaxaxaxaxaxaxaxaxaxax maxaxaxaxaxaxaxax34t2pchp2a2a2a234u m34u3 zexf3w21fp4 wm3t f9d uzi0mf8axaxaxaxaxaxrxkrldajj mkjznbbwp0nvmkvmkvnrkmkvmhi axaxaxaxax maxaxaxaxaxaxaxks8vc9vfiue949h g9v38v6un5 mg83q3x w5 3t pi0wsr4c362l zkn2axaxaxaxaxaxaxaxaxaxax maxaxaxaxaxaxaxaxaxaxaxaxaxaxax maxax39f9fpli6e1t1'`\r\n\r\n**This was causing the model to produce this error If you are facing this issue follow these steps:** \r\n\r\n1. Catch Exception and store the sentences in the new list to see if the sentence is having weird characters \r\n2. Reduce the batch size, If your batch size is too high (if you are using batch encoding)\r\n3. Check if your sentence length is too long for the model to encode, trim the sentence length ( try 200 first then 128 )",
"I am continuously getting the runtime error: CUDA error: device-side assert triggered, I am new to transformer library.\r\n\r\nI am creating the longformer classifier in the below format:\r\n\r\nmodel = LongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096', gradient_checkpointing = True, num_labels = 5)\r\n\r\nand tokenizer as :\r\n\r\ntokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')\r\n\r\nencoded_data_train = tokenizer.batch_encode_plus(\r\ndf[df.data_type=='train'].content.values,\r\nadd_special_tokens = True,\r\nreturn_attention_mask=True,\r\npadding = True,\r\nmax_length = 3800,\r\nreturn_tensors='pt'\r\n)\r\n\r\nCan anyone please help? I am using latest version of transformer library and using google colab GPU for building my classifier.",
"> I also faced the same issue while encoding 20_newspaper dataset, After further investigation, I found there are some sentences which are not in the English language, for example :\r\n> \r\n> `'subject romanbmp pa of from pwisemansalmonusdedu cliff replyto pwisemansalmonusdedu cliff distribution usa organization university of south dakota lines maxaxaxaxaxaxaxaxaxaxaxaxaxaxax maxaxaxax39f8z51 wwizbhjbhjbhjbhjgiz mgizgizbhjgizm 1tei0lv9f9f9fra 5z46q0475vo4 mu34u34u m34w 084oo aug 0y5180 mc p8v5555555555965hwgv 7uqgy gp452gvbdigiz maxaxaxaxaxaxaxaxaxaxaxaxaxaxax maxaxax34t2php2app12a34u34tm11 6um6 mum8 zbjf0kvlvc9vde5e9e5g9vg9v38vc o3o n5 mardi 24y2 g92li6e0q8axaxaxaxaxaxaxax maxaxaxaxaxaxaxas9ne1whjn 1tei4pmf9l3u3 mr hjpm75u4u34u34u nfyn 46uo m5ug 0y4518hr8y3m15556tdy65c8u 47y m7 hsxgjeuaxaxaxaxaxaxaxaxaxaxaxax maxaxaxaxaxaxaxax34t2pchp2a2a2a234u m34u3 zexf3w21fp4 wm3t f9d uzi0mf8axaxaxaxaxaxrxkrldajj mkjznbbwp0nvmkvmkvnrkmkvmhi axaxaxaxax maxaxaxaxaxaxaxks8vc9vfiue949h g9v38v6un5 mg83q3x w5 3t pi0wsr4c362l zkn2axaxaxaxaxaxaxaxaxaxax maxaxaxaxaxaxaxaxaxaxaxaxaxaxax maxax39f9fpli6e1t1'`\r\n> \r\n> **This was causing the model to produce this error If you are facing this issue follow these steps:**\r\n> \r\n> 1. Catch Exception and store the sentences in the new list to see if the sentence is having weird characters\r\n> \r\n> 2. Reduce the batch size, If your batch size is too high (if you are using batch encoding)\r\n> \r\n> 3. Check if your sentence length is too long for the model to encode, trim the sentence length ( try 200 first then 128 )\r\n\r\nWhat a bizzare dataset. \r\n\r\nSo, this error does occur if you use a an english tokenizer on some random garbage input! \r\n\r\n```python3\r\nfrom langdetect import detect\r\ndef detect_robust(x):\r\n try:\r\n out = detect(x) # 20 news managed to break this\r\n except Exception:\r\n out = 'not-en'\r\n return out\r\n\r\nlang_ = train_df['message'].map(detect_robust)\r\ntrain_df = train_df[lang_=='en']\r\n```\r\n^ this is ridiculously slow, but after the cleaning the error is gone!\r\n"
] | 1,573 | 1,617 | 1,579 | NONE | null | ## ❓ Questions & Help
. "RuntimeError: CUDA error: device-side assert triggered" occurs. My model is as follows:
```
class TextClassify(nn.Module):
def __init__(self, bert, kernel_size, word_dim, out_dim, num_class, dropout=0.5):
super(TextClassify, self).__init__()
self.bert = bert
self.cnn = nn.Sequential(nn.Conv1d(word_dim, out_dim, kernel_size=kernel_size, padding=1), nn.ReLU(inplace=True))
self.drop = nn.Dropout(dropout, inplace=True)
self.fc = nn.Linear(out_dim, num_class)
def forward(self, word_input):
batch_size = word_input.size(0)
represent = self.xlnet(word_input)[0]
represent = self.drop(represent)
represent = represent.transpose(1, 2)
contexts = self.cnn(represent)
feature = F.max_pool1d(contexts, contexts.size(2)).contiguous().view(batch_size, -1)
feature = self.drop(feature)
feature = self.fc(feature)
return feature
```
How can I solve it?
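Two generic checks for this class of error (hedged suggestions echoing the thread, not a confirmed fix for this exact model):
```
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased')

# 1) Keep the embedding matrix in sync with the tokenizer's vocabulary; an out-of-range
#    token id (or label id in the loss) is a common cause of this device-side assert.
bert.resize_token_embeddings(len(tokenizer))

# 2) Sanity-check a batch on CPU first: out-of-range indices raise a readable IndexError
#    there instead of an opaque CUDA assert.
ids = torch.tensor(tokenizer.encode("a quick sanity check", add_special_tokens=True)).unsqueeze(0)
assert ids.max().item() < bert.config.vocab_size
with torch.no_grad():
    _ = bert(ids)[0]
```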
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1805/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1805/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1804/comments | https://api.github.com/repos/huggingface/transformers/issues/1804/events | https://github.com/huggingface/transformers/pull/1804 | 521,481,220 | MDExOlB1bGxSZXF1ZXN0MzM5ODUxNDg5 | 1,804 | fix multi-gpu eval in torch examples | {
"login": "ronakice",
"id": 19197923,
"node_id": "MDQ6VXNlcjE5MTk3OTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19197923?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ronakice",
"html_url": "https://github.com/ronakice",
"followers_url": "https://api.github.com/users/ronakice/followers",
"following_url": "https://api.github.com/users/ronakice/following{/other_user}",
"gists_url": "https://api.github.com/users/ronakice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ronakice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ronakice/subscriptions",
"organizations_url": "https://api.github.com/users/ronakice/orgs",
"repos_url": "https://api.github.com/users/ronakice/repos",
"events_url": "https://api.github.com/users/ronakice/events{/privacy}",
"received_events_url": "https://api.github.com/users/ronakice/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, good catch, thanks @ronakice "
] | 1,573 | 1,573 | 1,573 | CONTRIBUTOR | null | Although the eval batch size is scaled up for multiple GPUs, the model is never wrapped in DataParallel, so evaluation does not actually use multiple GPUs. This PR wraps the model in DataParallel (multi-GPU) during eval. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1804/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1804",
"html_url": "https://github.com/huggingface/transformers/pull/1804",
"diff_url": "https://github.com/huggingface/transformers/pull/1804.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1804.patch",
"merged_at": 1573766645000
} |
https://api.github.com/repos/huggingface/transformers/issues/1803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1803/comments | https://api.github.com/repos/huggingface/transformers/issues/1803/events | https://github.com/huggingface/transformers/pull/1803 | 521,400,213 | MDExOlB1bGxSZXF1ZXN0MzM5Nzg1Njc2 | 1,803 | fix run_squad.py during fine-tuning xlnet on squad2.0 | {
"login": "importpandas",
"id": 30891974,
"node_id": "MDQ6VXNlcjMwODkxOTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/30891974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/importpandas",
"html_url": "https://github.com/importpandas",
"followers_url": "https://api.github.com/users/importpandas/followers",
"following_url": "https://api.github.com/users/importpandas/following{/other_user}",
"gists_url": "https://api.github.com/users/importpandas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/importpandas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/importpandas/subscriptions",
"organizations_url": "https://api.github.com/users/importpandas/orgs",
"repos_url": "https://api.github.com/users/importpandas/repos",
"events_url": "https://api.github.com/users/importpandas/events{/privacy}",
"received_events_url": "https://api.github.com/users/importpandas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This looks good, do you want to add your command and the results you mention in the README of the examples in `examples/README.md`?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1803?src=pr&el=h1) Report\n> Merging [#1803](https://codecov.io/gh/huggingface/transformers/pull/1803?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8618bf15d6edc8774cedc0aae021d259d89c91fc?src=pr&el=desc) will **increase** coverage by `1.08%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1803?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1803 +/- ##\n==========================================\n+ Coverage 78.91% 79.99% +1.08% \n==========================================\n Files 131 131 \n Lines 19450 19450 \n==========================================\n+ Hits 15348 15559 +211 \n+ Misses 4102 3891 -211\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1803?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/utils.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3V0aWxzLnB5) | `91.17% <0%> (-2.95%)` | :arrow_down: |\n| [transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3BpcGVsaW5lcy5weQ==) | `67.94% <0%> (+0.58%)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `94.1% <0%> (+1.07%)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.51% <0%> (+1.32%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.47% <0%> (+2.2%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `74.54% <0%> (+2.32%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `72.57% <0%> (+12%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `94.39% <0%> (+17.24%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1803/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `90.06% <0%> (+80.79%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1803?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1803?src=pr&el=footer). Last update [8618bf1...8a2be93](https://codecov.io/gh/huggingface/transformers/pull/1803?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"ok, I have updated the readme.",
"@importpandas thanks for contributing this fix, works great! My results running this mod:\r\n```\r\n{\r\n \"exact\": 82.07698138633876,\r\n \"f1\": 85.898874470488,\r\n \"total\": 11873,\r\n \"HasAns_exact\": 79.60526315789474,\r\n \"HasAns_f1\": 87.26000954590184,\r\n \"HasAns_total\": 5928,\r\n \"NoAns_exact\": 84.54163162321278,\r\n \"NoAns_f1\": 84.54163162321278,\r\n \"NoAns_total\": 5945,\r\n \"best_exact\": 83.22243746315169,\r\n \"best_exact_thresh\": -11.112004280090332,\r\n \"best_f1\": 86.88541353813282,\r\n \"best_f1_thresh\": -11.112004280090332\r\n}\r\n```\r\nDistributed processing for batch size 48 using gradient accumulation consumes 10970 MiB on each of 2x NVIDIA 1080Ti.\r\n```",
"@ahotrod As for your first results, I will suggest you watch my changes on file `run_squad.py` in this pr. It just added a another input to the model and only a few lines. Without this changing, I got the same results with you, which is nearly to 0 on unanswerable questions. I think Distributed processing isn't responsible for that. When it comes to the running script, it doesn't matter since I had hardly changed it.",
"@importpandas re-run just completed (36 hours) and results are posted above.\r\nWorks great, thanks again!",
"> @importpandas re-run just completed (36 hours) and results are posted above.\r\n> Works great, thanks again!\r\n\r\nokay, pleasure",
"LGTM, merging cc @LysandreJik (the `run_squad` master)"
] | 1,573 | 1,576 | 1,576 | CONTRIBUTOR | null | The following is a piece of code in the forward function of the XLNet model, which is the key point for training the model on unanswerable questions using the CLS token representation. But the default value of the tensor `is_impossible` (used to indicate whether an example is answerable) is `None`, and we never passed this tensor into the forward function. That's the problem.
```
if cls_index is not None and is_impossible is not None:
    # Predict answerability from the representation of CLS and START
    cls_logits = self.answer_class(hidden_states, start_positions=start_positions, cls_index=cls_index)
    loss_fct_cls = nn.BCEWithLogitsLoss()
    cls_loss = loss_fct_cls(cls_logits, is_impossible)
    total_loss += cls_loss * 0.5
```
I added the `is_impossible` tensor to the TensorDataset and the model inputs, and got a reasonable result:
{
"exact": 80.4177545691906,
"f1": 84.07154997729623,
"total": 11873,
"HasAns_exact": 77.59784075573549,
"HasAns_f1": 84.83993323200234,
"HasAns_total": 5928,
"NoAns_exact": 84.0874684608915,
"NoAns_f1": 84.0874684608915,
"NoAns_total": 5945
}
My running command:
```
python run_squad.py \
--model_type xlnet \
--model_name_or_path xlnet-large-cased \
--do_train \
--do_eval \
--version_2_with_negative \
--train_file data/train-v2.0.json \
--predict_file data/dev-v2.0.json \
--learning_rate 3e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./wwm_cased_finetuned_squad/ \
--per_gpu_eval_batch_size=2 \
--per_gpu_train_batch_size=2 \
--save_steps 5000
```
I ran my code with 4 NVIDIA GTX 1080 Ti GPUs and pytorch==1.2.0.
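For reference, a self-contained sketch (toy inputs, not the run_squad.py diff) of what handing `is_impossible` to the XLNet QA head looks like, so the answerability (cls) loss shown above is actually computed:
```
import torch
from transformers import XLNetForQuestionAnswering, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetForQuestionAnswering.from_pretrained('xlnet-base-cased')

input_ids = torch.tensor(tokenizer.encode("an unanswerable toy question", add_special_tokens=True)).unsqueeze(0)
start_positions = torch.tensor([0])
end_positions = torch.tensor([0])
cls_index = torch.tensor([input_ids.size(1) - 1])  # XLNet puts <cls> at the end of the sequence
is_impossible = torch.tensor([1.0])                # the flag this PR threads through run_squad.py

outputs = model(input_ids,
                start_positions=start_positions,
                end_positions=end_positions,
                cls_index=cls_index,
                is_impossible=is_impossible)
total_loss = outputs[0]  # now includes the 0.5 * cls_loss term from the snippet above
```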
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1803/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1803/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1803",
"html_url": "https://github.com/huggingface/transformers/pull/1803",
"diff_url": "https://github.com/huggingface/transformers/pull/1803.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1803.patch",
"merged_at": 1576931928000
} |
https://api.github.com/repos/huggingface/transformers/issues/1802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1802/comments | https://api.github.com/repos/huggingface/transformers/issues/1802/events | https://github.com/huggingface/transformers/issues/1802 | 521,397,577 | MDU6SXNzdWU1MjEzOTc1Nzc= | 1,802 | pip cannot install transformers with python version 3.8.0 | {
"login": "Lyther",
"id": 29906124,
"node_id": "MDQ6VXNlcjI5OTA2MTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/29906124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lyther",
"html_url": "https://github.com/Lyther",
"followers_url": "https://api.github.com/users/Lyther/followers",
"following_url": "https://api.github.com/users/Lyther/following{/other_user}",
"gists_url": "https://api.github.com/users/Lyther/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lyther/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lyther/subscriptions",
"organizations_url": "https://api.github.com/users/Lyther/orgs",
"repos_url": "https://api.github.com/users/Lyther/repos",
"events_url": "https://api.github.com/users/Lyther/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lyther/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This looks like an error related to Google SentencePiece and in particular this issue: https://github.com/google/sentencepiece/issues/411",
"https://github.com/google/sentencepiece/issues/411#issuecomment-557596691\r\n\r\n```\r\npip install https://github.com/google/sentencepiece/releases/download/v0.1.84/sentencepiece-0.1.84-cp38-cp38-manylinux1_x86_64.whl\r\n```",
"I've got problem installing matplotlib in python. While installing these error massage is shown. What to do now?\r\n\r\n\r\nCommand \"C:\\Users\\tawfiq\\PycharmProjects\\untitled4\\venv\\Scripts\\python.exe -u -c \"import setuptools, tokenize;__file__='C:\\\\Users\\\\tawfiq\\\\AppData\\\\Local\\\\Temp\\\\pip-install-yfe_bqlr\\\\matplotlib\\\\setup.py';f=getatt\r\nr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" install --record C:\\Users\\tawfiq\\AppData\\Local\\Temp\\pip-record-vmiaih4n\\install-record.txt -\r\n-single-version-externally-managed --compile --install-headers C:\\Users\\tawfiq\\PycharmProjects\\untitled4\\venv\\include\\site\\python3.8\\matplotlib\" failed with error code 1 in C:\\Users\\tawfiq\\AppData\\Local\\Temp\\pip-i\r\nnstall-yfe_bqlr\\matplotlib\\\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This is still an issue on MacOS: https://github.com/google/sentencepiece/issues/411#issuecomment-578509088",
"> google/sentencepiece#411 (comment)\r\n\r\nThis error comes up: ``` ERROR: sentencepiece-0.1.84-cp38-cp38-manylinux1_x86_64.whl is not a supported wheel on this platform.```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,589 | 1,589 | NONE | null | ## ❓ Questions & Help
The error message looks like this:
` ERROR: Command errored out with exit status 1:
command: 'c:\users\enderaoe\appdata\local\programs\python\python38-32\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Enderaoe\\AppData\\Local\\Temp\\pip-install-g7jzfokt\\sentencepiece\\setup.py'"'"'; __file__='"'"'C:\\Users\\Enderaoe\\AppData\\Local\\Temp\\pip-install-g7jzfokt\\sentencepiece\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\Enderaoe\AppData\Local\Temp\pip-install-g7jzfokt\sentencepiece\pip-egg-info'
cwd: C:\Users\Enderaoe\AppData\Local\Temp\pip-install-g7jzfokt\sentencepiece\
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Enderaoe\AppData\Local\Temp\pip-install-g7jzfokt\sentencepiece\setup.py", line 29, in <module>
with codecs.open(os.path.join('..', 'VERSION'), 'r', 'utf-8') as f:
File "c:\users\enderaoe\appdata\local\programs\python\python38-32\lib\codecs.py", line 905, in open
file = builtins.open(filename, mode, buffering)
FileNotFoundError: [Errno 2] No such file or directory: '..\\VERSION'
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.`
Should I lower my Python version, or is there another solution? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1802/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1802/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1801/comments | https://api.github.com/repos/huggingface/transformers/issues/1801/events | https://github.com/huggingface/transformers/issues/1801 | 521,335,428 | MDU6SXNzdWU1MjEzMzU0Mjg= | 1,801 | run_glue.py RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:3 | {
"login": "insublee",
"id": 39117829,
"node_id": "MDQ6VXNlcjM5MTE3ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/39117829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/insublee",
"html_url": "https://github.com/insublee",
"followers_url": "https://api.github.com/users/insublee/followers",
"following_url": "https://api.github.com/users/insublee/following{/other_user}",
"gists_url": "https://api.github.com/users/insublee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/insublee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/insublee/subscriptions",
"organizations_url": "https://api.github.com/users/insublee/orgs",
"repos_url": "https://api.github.com/users/insublee/repos",
"events_url": "https://api.github.com/users/insublee/events{/privacy}",
"received_events_url": "https://api.github.com/users/insublee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This problem comes out from multiple GPUs usage. The error you have reported says that you have parameters or the buffers of the model in **two different locations**. \r\nSaid this, it's probably related to #1504 issue. Reading comments in the #1504 issue, i saw that @h-sugi suggests 4 days ago to modify the source code in `run_**.py` like this:\r\n\r\nBEFORE: `device = torch.device(\"cuda\" if torch.cuda.is_available() and not args.no_cuda else \"cpu\")`\r\nAFTER: `device = torch.device(\"cuda:0\" if torch.cuda.is_available() and not args.no_cuda else \"cpu\").`\r\n\r\nMoreover, the person who opened the issue @ahotrod says this: _Have had many successful SQuAD fine-tuning runs on PyTorch 1.2.0 with Pytorch-Transformers 1.2.0, maybe even Transformers 2.0.0, and Apex 0.1. New environment built with the latest versions (Pytorch 1.3.0, Transformers 2.1.1) spawns data parallel related error above_\r\n\r\nPlease, keep us updated on this topic!\r\n\r\n> ## Bug\r\n> Model I am using (Bert, XLNet....): Bert\r\n> \r\n> Language I am using the model on (English, Chinese....): English\r\n> \r\n> The problem arise when using:\r\n> \r\n> * [ ] the official example scripts: (give details) : transformers/examples/run_glue.py\r\n> * [ ] my own modified scripts: (give details)\r\n> \r\n> The tasks I am working on is:\r\n> \r\n> * [ ] an official GLUE/SQUaD task: (give the name) : MRPC\r\n> * [ ] my own task or dataset: (give details)\r\n> \r\n> ## To Reproduce\r\n> Steps to reproduce the behavior:\r\n> \r\n> \r\n> I've tested using\r\n> python -m pytest -sv ./transformers/tests/\r\n> python -m pytest -sv ./examples/\r\n> and it works fine without couple of tesks.\r\n> \r\n> \r\n> after test, i downloaded glue datafile via\r\n> https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e\r\n> and tried run_glue.py\r\n> \r\n> pip install -r ./examples/requirements.txt\r\n> export GLUE_DIR=/path/to/glue\r\n> export TASK_NAME=MRPC\r\n> \r\n> \r\n> python ./examples/run_glue.py \r\n> --model_type bert \r\n> --model_name_or_path bert-base-uncased \r\n> --task_name $TASK_NAME \r\n> --do_train \r\n> --do_eval \r\n> --do_lower_case \r\n> --data_dir $GLUE_DIR/$TASK_NAME \r\n> --max_seq_length 128 \r\n> --per_gpu_eval_batch_size=8 \r\n> --per_gpu_train_batch_size=8 \r\n> --learning_rate 2e-5 \r\n> --num_train_epochs 3.0 \r\n> --output_dir /tmp/$TASK_NAME/\r\n> \r\n> and i got this error.\r\n> \r\n> `11/11/2019 21:10:50 - INFO - __main__ - Total optimization steps = 345 Epoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/115 [00:00<?, ?it/s] File \"./examples/run_glue.py\", line 552, in <module> main() File \"./examples/run_glue.py\", line 503, in main global_step, tr_loss = train(args, train_dataset, model, tokenizer) File \"./examples/run_glue.py\", line 146, in train outputs = model(**inputs) File \"/home/insublee/anaconda3/envs/py_torch4/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__ result = self.forward(*input, **kwargs) File \"/home/insublee/anaconda3/envs/py_torch4/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py\", line 146, in forward \"them on device: {}\".format(self.src_device_obj, t.device)) RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:3`\r\n> \r\n> ## Environment\r\n> * OS: ubuntu16.04LTS\r\n> * Python version: 3.7.5\r\n> * PyTorch version: 1.2.0\r\n> * PyTorch Transformers version (or branch): 2.1.1\r\n> * Using GPU ? 
4-way 2080ti\r\n> * Distributed of parallel setup ? cuda10.0 cudnn 7.6.4\r\n> * Any other relevant information:\r\n> \r\n> ## Additional context\r\n> thank you.",
"thanks a lot. it works!!!! :)",
"@TheEdoardo93 After the change of `cuda` to `cuda:0`, will we still have multiple GPU usage for the jobs?",
"As stated in the official [docs](https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html), if you use `torch.device('cuda:0')` you will use **only** a GPU. If you want to use **multiple** GPUs, you can easily run your operations on multiple GPUs by making your model run parallelly using DataParallel: `model = nn.DataParallel(model)`\r\n\r\nYou can read more information [here](https://discuss.pytorch.org/t/run-pytorch-on-multiple-gpus/20932/38) and [here](https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html).\r\n\r\n> @TheEdoardo93 After the change of `cuda` to `cuda:0`, will we still have multiple GPU usage for the jobs?",
"@TheEdoardo93 I tested `run_glue.py` on a multi-gpu machine. Even, after changing `\"cuda\"` to `\"cuda:0\"` in this line https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_glue.py#L425 , the training job will still use both GPUs relying on `torch.nn.DataParallel`. It means that `torch.nn.DataParallel` is smart enough even if you are defining `torch.device` to be `cuda:0`, if there are several gpus available, it will use all of them. ",
"> @TheEdoardo93 I tested `run_glue.py` on a multi-gpu machine. Even, after changing `\"cuda\"` to `\"cuda:0\"` in this line\r\n> \r\n> https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_glue.py#L425\r\n> \r\n> , the training job will still use both GPUs relying on `torch.nn.DataParallel`. It means that `torch.nn.DataParallel` is smart enough even if you are defining `torch.device` to be `cuda:0`, if there are several gpus available, it will use all of them.\r\n\r\nmanually set 'args.n_gpu = 1' works for me",
"Just a simple `os.environ['CUDA_VISIBLE_DEVICES'] = 'GPU_NUM'` at the beginning of the script should work.",
"> > @TheEdoardo93 I tested `run_glue.py` on a multi-gpu machine. Even, after changing `\"cuda\"` to `\"cuda:0\"` in this line\r\n> > https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_glue.py#L425\r\n> > \r\n> > , the training job will still use both GPUs relying on `torch.nn.DataParallel`. It means that `torch.nn.DataParallel` is smart enough even if you are defining `torch.device` to be `cuda:0`, if there are several gpus available, it will use all of them.\r\n> \r\n> manually set 'args.n_gpu = 1' works for me\r\n\r\nbut then you are not able to use more than 1 gpu, right?",
"> @TheEdoardo93 I tested `run_glue.py` on a multi-gpu machine. Even, after changing `\"cuda\"` to `\"cuda:0\"` in this line\r\n> \r\n> https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_glue.py#L425\r\n> \r\n> , the training job will still use both GPUs relying on `torch.nn.DataParallel`. It means that `torch.nn.DataParallel` is smart enough even if you are defining `torch.device` to be `cuda:0`, if there are several gpus available, it will use all of them.\r\n\r\nI have trid but it doesn't work for me.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,591 | 1,591 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details) : transformers/examples/run_glue.py
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) : MRPC
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1.
I've tested using
python -m pytest -sv ./transformers/tests/
python -m pytest -sv ./examples/
and it works fine except for a couple of tasks.
2.
After the tests, I downloaded the GLUE data files via
https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e
and tried run_glue.py
pip install -r ./examples/requirements.txt
export GLUE_DIR=/path/to/glue
export TASK_NAME=MRPC
3.
python ./examples/run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
and I got this error.
`11/11/2019 21:10:50 - INFO - __main__ - Total optimization steps = 345
Epoch: 0%| | 0/3 [00:00<?, ?it/sTraceback (most recent call last): | 0/115 [00:00<?, ?it/s]
File "./examples/run_glue.py", line 552, in <module>
main()
File "./examples/run_glue.py", line 503, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "./examples/run_glue.py", line 146, in train
outputs = model(**inputs)
File "/home/insublee/anaconda3/envs/py_torch4/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/insublee/anaconda3/envs/py_torch4/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 146, in forward
"them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:3`
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: ubuntu16.04LTS
* Python version: 3.7.5
* PyTorch version: 1.2.0
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ? 4-way 2080ti
* Distributed of parallel setup ? cuda10.0 cudnn 7.6.4
* Any other relevant information:
## Additional context
thank you. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1801/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1801/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1800/comments | https://api.github.com/repos/huggingface/transformers/issues/1800/events | https://github.com/huggingface/transformers/issues/1800 | 521,311,524 | MDU6SXNzdWU1MjEzMTE1MjQ= | 1,800 | Exact and F1 score do not increase when fine-tunes XLM on the SQuAD dataset | {
"login": "ZhengWeiH",
"id": 43492059,
"node_id": "MDQ6VXNlcjQzNDkyMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/43492059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhengWeiH",
"html_url": "https://github.com/ZhengWeiH",
"followers_url": "https://api.github.com/users/ZhengWeiH/followers",
"following_url": "https://api.github.com/users/ZhengWeiH/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhengWeiH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhengWeiH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhengWeiH/subscriptions",
"organizations_url": "https://api.github.com/users/ZhengWeiH/orgs",
"repos_url": "https://api.github.com/users/ZhengWeiH/repos",
"events_url": "https://api.github.com/users/ZhengWeiH/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhengWeiH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"duplicate #1799 "
] | 1,573 | 1,580 | 1,579 | NONE | null | ## ❓ Questions & Help
I am trying to fine-tune XLM on the SQuAD dataset.
The command is as follows:
[CUDA_VISIBLE_DEVICES=0 python run_squad.py --model_type xlm --model_name_or_path xlm-mlm-tlm-xnli15-1024 --do_train --do_eval --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 100 --max_seq_length 384 --doc_stride 128 --eval_all_checkpoints --save_steps 50 --evaluate_during_training --output_dir /home/weihua/Sqad/transformers/xlm_out/ ]

All the parameters I set are in accordance with the example in the pytorch-transformers document. But Exact and F1 scores have barely increased. And the loss of each epoch also drops very slowly. Each epoch drops by about 0.01.

Is there a problem with my parameter settings? Or do I need to adjust some parts of the model when I use XLM?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1800/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1800/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1799/comments | https://api.github.com/repos/huggingface/transformers/issues/1799/events | https://github.com/huggingface/transformers/issues/1799 | 521,310,381 | MDU6SXNzdWU1MjEzMTAzODE= | 1,799 | Exact and F1 score do not increase when fine-tunes XLM on the SQuAD dataset | {
"login": "ZhengWeiH",
"id": 43492059,
"node_id": "MDQ6VXNlcjQzNDkyMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/43492059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhengWeiH",
"html_url": "https://github.com/ZhengWeiH",
"followers_url": "https://api.github.com/users/ZhengWeiH/followers",
"following_url": "https://api.github.com/users/ZhengWeiH/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhengWeiH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhengWeiH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhengWeiH/subscriptions",
"organizations_url": "https://api.github.com/users/ZhengWeiH/orgs",
"repos_url": "https://api.github.com/users/ZhengWeiH/repos",
"events_url": "https://api.github.com/users/ZhengWeiH/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhengWeiH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I haven't yet tried XLM for SQUad but I did try to finetune it on a similar (private) dataset. I managed to make it converge however the F1 is much worse than that of BERT (~0.75 vs ~0.80). XLM training seems to be very learning rate sensitive so you may want to tinker with that a bit.",
"Thank you for your answer. \r\nDid you have any changes to the code released in pytorch-transformers when you test on your own dataset? Could you tell me if you did any changes? Or cloud you share your parameter settings when you test on your own data set?\r\nThank you.",
"> Thank you for your answer.\r\n> Did you have any changes to the code released in pytorch-transformers when you test on your own dataset? Could you tell me if you did any changes? Or cloud you share your parameter settings when you test on your own data set?\r\n> Thank you.\r\n\r\nI think the only changes I made was adding language embeddings since I was finetuning `xlm-mlm-tlm-xnli15-1024` which requires that.",
"> > Thank you for your answer.\r\n> > Did you have any changes to the code released in pytorch-transformers when you test on your own dataset? Could you tell me if you did any changes? Or cloud you share your parameter settings when you test on your own data set?\r\n> > Thank you.\r\n> \r\n> I think the only changes I made was adding language embeddings since I was finetuning `xlm-mlm-tlm-xnli15-1024` which requires that.\r\n\r\nHi, I've tried all the XLM pre-training models and got random results. loss does not converge. can you help me with this problem? thanks @suicao @ZhengWeiH ",
"> > Thank you for your answer.\r\n> > Did you have any changes to the code released in pytorch-transformers when you test on your own dataset? Could you tell me if you did any changes? Or cloud you share your parameter settings when you test on your own data set?\r\n> > Thank you.\r\n> \r\n> I think the only changes I made was adding language embeddings since I was finetuning `xlm-mlm-tlm-xnli15-1024` which requires that.\r\n\r\nThanks @suicao !\r\nWhere did you obtain language embeddings? Doesn't pre-trained `XLM` already come with a multilingual BPE embedding dictionary?",
"Ok, I think I may have found the issue in this particular model: it requires language IDs to be specified, but the default is `0`, which for `xlm-mlm-tlm-xnli15-1024` is Arabic, not English (which SQuAD is in). English is `4`, based on the `XLMConfig` class.\r\nI think this will require passing an additional input tensor to all training and evaluation batches, that's all `4` `int64`s.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,586 | 1,586 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using: XLM
Language I am using the model on: English
The problem arises when using:
[CUDA_VISIBLE_DEVICES=0 python run_squad.py --model_type xlm --model_name_or_path xlm-mlm-tlm-xnli15-1024 --do_train --do_eval --train_file $SQUAD_DIR/train-v1.1.json --predict_file $SQUAD_DIR/dev-v1.1.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 100 --max_seq_length 384 --doc_stride 128 --eval_all_checkpoints --save_steps 50 --evaluate_during_training --output_dir /home/weihua/Sqad/transformers/xlm_out/ ]

The task I am working on is an official SQuAD task.
All the parameters I set are in accordance with the example in the pytorch-transformers document. But Exact and F1 scores have barely increased. And the loss of each epoch also drops very slowly. Each epoch drops by about 0.01.

Is there a problem with my parameter settings? Or do I need to adjust some parts of the model when I use XLM?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1799/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1798/comments | https://api.github.com/repos/huggingface/transformers/issues/1798/events | https://github.com/huggingface/transformers/issues/1798 | 521,306,501 | MDU6SXNzdWU1MjEzMDY1MDE= | 1,798 | Add an LSTM and CNN layer on top of BERT embeddings for sentiment analysis task | {
"login": "johnahug",
"id": 57651296,
"node_id": "MDQ6VXNlcjU3NjUxMjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/57651296?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/johnahug",
"html_url": "https://github.com/johnahug",
"followers_url": "https://api.github.com/users/johnahug/followers",
"following_url": "https://api.github.com/users/johnahug/following{/other_user}",
"gists_url": "https://api.github.com/users/johnahug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/johnahug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/johnahug/subscriptions",
"organizations_url": "https://api.github.com/users/johnahug/orgs",
"repos_url": "https://api.github.com/users/johnahug/repos",
"events_url": "https://api.github.com/users/johnahug/events{/privacy}",
"received_events_url": "https://api.github.com/users/johnahug/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @johnahug, I would recommend checking out how the TFBertFor* models work and trying a similar method to add the desired layers via Keras, for example: https://github.com/huggingface/transformers/blob/155c782a2ccd103cf63ad48a2becd7c76a7d2115/transformers/modeling_tf_bert.py#L834\r\n\r\nI might be able to help out more at a later time, but I'm not sure exactly when just yet. So it might be worthwhile to go ahead and try it out :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @watkinsm any update on this?",
"hey @johnahug have you got what you were looking for? I am also trying to add CNN to my bert embedding but have no idea how to"
] | 1,573 | 1,580 | 1,579 | NONE | null | I am trying to add an LSTM and a convolutional layer on top of my BERT embeddings using the Transformers package in Tensorflow for a sentiment analysis task. Does someone know how I can go about that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1798/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1798/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1797/comments | https://api.github.com/repos/huggingface/transformers/issues/1797/events | https://github.com/huggingface/transformers/pull/1797 | 521,306,293 | MDExOlB1bGxSZXF1ZXN0MzM5NzA5NDQ3 | 1,797 | TF: model forwards can take an inputs_embeds param | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,573 | 1,573 | 1,573 | MEMBER | null | see https://github.com/huggingface/transformers/pull/1695 (non-TF) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1797/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1797",
"html_url": "https://github.com/huggingface/transformers/pull/1797",
"diff_url": "https://github.com/huggingface/transformers/pull/1797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1797.patch",
"merged_at": 1573576162000
} |
https://api.github.com/repos/huggingface/transformers/issues/1796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1796/comments | https://api.github.com/repos/huggingface/transformers/issues/1796/events | https://github.com/huggingface/transformers/pull/1796 | 521,277,736 | MDExOlB1bGxSZXF1ZXN0MzM5Njg3MTMw | 1,796 | Fix GPT2LMHeadModel.from_pretrained(from_tf=True) | {
"login": "leogao2",
"id": 54557097,
"node_id": "MDQ6VXNlcjU0NTU3MDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/54557097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leogao2",
"html_url": "https://github.com/leogao2",
"followers_url": "https://api.github.com/users/leogao2/followers",
"following_url": "https://api.github.com/users/leogao2/following{/other_user}",
"gists_url": "https://api.github.com/users/leogao2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leogao2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leogao2/subscriptions",
"organizations_url": "https://api.github.com/users/leogao2/orgs",
"repos_url": "https://api.github.com/users/leogao2/repos",
"events_url": "https://api.github.com/users/leogao2/events{/privacy}",
"received_events_url": "https://api.github.com/users/leogao2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1796?src=pr&el=h1) Report\n> Merging [#1796](https://codecov.io/gh/huggingface/transformers/pull/1796?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b5d330d11820f4ac2cc8c909b1a6a77e0cd961e0?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1796?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1796 +/- ##\n==========================================\n- Coverage 84.03% 84.02% -0.02% \n==========================================\n Files 94 94 \n Lines 14032 14034 +2 \n==========================================\n Hits 11792 11792 \n- Misses 2240 2242 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1796?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1796/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `83.91% <0%> (-0.54%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1796?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1796?src=pr&el=footer). Last update [b5d330d...60ebaa5](https://codecov.io/gh/huggingface/transformers/pull/1796?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good to me!",
"Are you loading from a TF 1.0 or a TF 2.0 model?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,583 | 1,583 | CONTRIBUTOR | null | GPT2LMHeadModel.from_pretrained(from_tf=True) doesn't work because pointer points to the GPT2LMHeadModel instance, not the GPT2Model instance.
This bug causes errors like:
'GPT2LMHeadModel' object has no attribute 'h' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1796/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1796",
"html_url": "https://github.com/huggingface/transformers/pull/1796",
"diff_url": "https://github.com/huggingface/transformers/pull/1796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1796.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1795/comments | https://api.github.com/repos/huggingface/transformers/issues/1795/events | https://github.com/huggingface/transformers/issues/1795 | 521,267,846 | MDU6SXNzdWU1MjEyNjc4NDY= | 1,795 | RuntimeError: Connection timed out in Single node Multi GPU training | {
"login": "kamalravi",
"id": 9251058,
"node_id": "MDQ6VXNlcjkyNTEwNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9251058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kamalravi",
"html_url": "https://github.com/kamalravi",
"followers_url": "https://api.github.com/users/kamalravi/followers",
"following_url": "https://api.github.com/users/kamalravi/following{/other_user}",
"gists_url": "https://api.github.com/users/kamalravi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kamalravi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kamalravi/subscriptions",
"organizations_url": "https://api.github.com/users/kamalravi/orgs",
"repos_url": "https://api.github.com/users/kamalravi/repos",
"events_url": "https://api.github.com/users/kamalravi/events{/privacy}",
"received_events_url": "https://api.github.com/users/kamalravi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"By any chance, did you open the port number you're using?",
"Yes, it is established. What is the master_addr and master_port when submitting a job with single node and multigpu config in GCP?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, @kamalravi were you able to solve this issue?? I am facing the same issue right now where my `init_process_group` freezes and doesn't move forward for training. Could you take a look at this and suggest any relevant solutions if you have found them??\r\n\r\nThanks!\r\n\r\nhttps://github.com/pytorch/pytorch/issues/53395#issuecomment-954103393"
] | 1,573 | 1,635 | 1,580 | NONE | null | I am trying to pre-train DistilBERT with single node multigpu as given here https://github.com/huggingface/transformers/tree/master/examples/distillation.
I have set the IP address of my GCP instance and the port number, but I am getting this error. Any solutions?
> File "/opt/conda/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 400, in init_process_group
store, rank, world_size = next(rendezvous(url))
File "/opt/conda/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 143, in _env_rendezvous_handler
store = TCPStore(master_addr, master_port, world_size, start_daemon)
RuntimeError: Connection timed out | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1795/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1794/comments | https://api.github.com/repos/huggingface/transformers/issues/1794/events | https://github.com/huggingface/transformers/issues/1794 | 521,193,988 | MDU6SXNzdWU1MjExOTM5ODg= | 1,794 | Confused by GPT2DoubleHeadsModel example | {
"login": "weiguowilliam",
"id": 31396452,
"node_id": "MDQ6VXNlcjMxMzk2NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiguowilliam",
"html_url": "https://github.com/weiguowilliam",
"followers_url": "https://api.github.com/users/weiguowilliam/followers",
"following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions",
"organizations_url": "https://api.github.com/users/weiguowilliam/orgs",
"repos_url": "https://api.github.com/users/weiguowilliam/repos",
"events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiguowilliam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For 2: It's next word prediction: https://openai.com/blog/better-language-models/",
"> For 2: It's next word prediction: https://openai.com/blog/better-language-models/\r\n\r\nThank you for your reply. From my perspective, I believe the next word prediction is what the language model does, which means it is the \"unsupervised\" part in the first head. \r\n\r\nThe evidence is that gpt-2 is based on gpt. Based on figure 1 in gpt paper, the left head is for \"text prediction\"(next word prediction), and the right head is for \"task prediction\".",
"Hi, the GPT2DoubleHeadsModel, as defined in [the documentation](https://huggingface.co/transformers/model_doc/gpt2.html#gpt2doubleheadsmodel), is: \"_The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings, the classification head takes as input the input of a specified classification token index in the input sequence)._\".\r\n\r\nYou use a double head model when you want to have two separate losses. In this case the two linear layers compute two different losses: one of the heads is a language modeling loss, with a linear layer of size `hidden_size` x `vocab_size` and the other is a classification loss with a linear layer of size `hidden size` x `number_of_choices`.\r\n\r\nYou can read [this blog post](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313) by @thomwolf which uses GPT2DoubleHeadsModel with a previous version of this library (pytorch-pretrained-BERT).",
"@LysandreJik \r\nHi, thank you for your help!"
] | 1,573 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Your library is really helpful, and I have two questions about the GPT2DoubleHeadsModel example.
1.
In the source code of the model,
> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
Here the comment said that "number of choices: 2". So is it a single sample or two samples? What are the two label classes?
2.
What task is the GPT2DoubleHeadsModel trained on? I can't find this information in the documentation, the issues, or the original paper.
Please help me. Thank you in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1794/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1793/comments | https://api.github.com/repos/huggingface/transformers/issues/1793/events | https://github.com/huggingface/transformers/issues/1793 | 521,192,076 | MDU6SXNzdWU1MjExOTIwNzY= | 1,793 | MNLI: BERT No Training Progress | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I'm assuming you did multiple runs with ≠ seeds?",
"Yes -- I have tried with multiple different seeds.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> Yes -- I have tried with multiple different seeds.\r\n\r\n@evanweissburg Have you found a solution? I'm also having issues with MNLI data. ",
"@anthonyfuller7 Nope. I think one resolution was to not use BERT large.",
"I too am facing the same issue with BERT and Albert as well. The model does not converge on fine-tuning and the loss does not decrease even over 5 epochs. Did someone manage to solve this issue?"
] | 1,573 | 1,609 | 1,579 | NONE | null | ## 🐛 Bug
I am using BERT and have successfully set up pipelines for 7/8 GLUE tasks, and I find comparably good accuracy on all of them. However, for the MNLI task, the training loss does not converge at all. I am correctly using 3 classes with `num_classes`. In fact, I have even tried reducing the scope of MNLI to a 2-class problem (entailment vs neutral/contradiction) for testing purposes to see if this is the issue, and the model fails to converge on this training task as well. However, the exact same code (with only the dataset switched) works for MRPC, for example.
Model I am using (Bert, XLNet....): **BERT**
Language I am using the model on (English, Chinese....): **English**
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: **see co-lab**
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: **MNLI**
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
See linked co-lab for minimum viable example
https://colab.research.google.com/drive/1Thdrpvp0uX2TCaCpqsLoUVn7DuRLlILQ
## Expected behavior
I expect to see good training progress; instead, training loss does not converge.
## Environment
* OS: **Linux**
* Python version: **3.6**
* PyTorch version: **1.3.0**
* PyTorch Transformers version (or branch): **2.1.1**
* Using GPU ? **Yes**
* Distributed of parallel setup ? **No, single GPU**
* Any other relevant information: **None**
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1793/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1792/comments | https://api.github.com/repos/huggingface/transformers/issues/1792/events | https://github.com/huggingface/transformers/pull/1792 | 521,064,883 | MDExOlB1bGxSZXF1ZXN0MzM5NTE4MTQ2 | 1,792 | DistilBERT for token classification | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is great, thanks @stefan-it!",
"This is great, thanks a lot @stefan-it.\r\nI've added your quick benchmark in the readme.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1792?src=pr&el=h1) Report\n> Merging [#1792](https://codecov.io/gh/huggingface/transformers/pull/1792?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b5d330d11820f4ac2cc8c909b1a6a77e0cd961e0?src=pr&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `97.29%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1792?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1792 +/- ##\n==========================================\n+ Coverage 84.03% 84.07% +0.03% \n==========================================\n Files 94 94 \n Lines 14032 14069 +37 \n==========================================\n+ Hits 11792 11828 +36 \n- Misses 2240 2241 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1792?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/modeling\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1792/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2Rpc3RpbGJlcnRfdGVzdC5weQ==) | `99.18% <100%> (+0.08%)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1792/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.87% <96.15%> (+0.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1792?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1792?src=pr&el=footer). Last update [b5d330d...05db5bc](https://codecov.io/gh/huggingface/transformers/pull/1792?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"I just wanted to say thanks for this PR, it was just what I was looking for at the time."
] | 1,573 | 1,574 | 1,573 | COLLABORATOR | null | Hi,
this PR adds a `DistilBertForTokenClassification` implementation (mainly inspired by the BERT implementation) that makes it possible to perform sequence labeling tasks such as NER or PoS tagging.
Additionally, the `run_ner.py` example script was modified to fully support DistilBERT for NER tasks.
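For anyone wanting to try it out, a minimal usage sketch could look like this (the dummy labels and `num_labels=9` are only placeholders for a CoNLL-style tag set):
```python
import torch
from transformers import DistilBertTokenizer, DistilBertForTokenClassification

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertForTokenClassification.from_pretrained("distilbert-base-uncased", num_labels=9)

input_ids = torch.tensor([tokenizer.encode("Hugging Face is based in New York City")])
labels = torch.zeros_like(input_ids)                  # dummy tag ids, one per token
loss, scores = model(input_ids, labels=labels)[:2]    # scores: (batch, seq_len, num_labels)
```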
I did a small comparison between BERT (large, cased), RoBERTa (large, cased) and DistilBERT (base, uncased) with the same hyperparameters as specified in the [example documentation](https://huggingface.co/transformers/examples.html#named-entity-recognition) (one run):
| Model | F-Score Dev | F-Score Test
| --------------------------------- | ------- | --------
| `bert-large-cased` | 95.59 | 91.70
| `roberta-large` | 95.96 | 91.87
| `distilbert-base-uncased` | 94.34 | 90.32
A unit test for the `DistilBertForTokenClassification` implementation is also added. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1792/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1792",
"html_url": "https://github.com/huggingface/transformers/pull/1792",
"diff_url": "https://github.com/huggingface/transformers/pull/1792.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1792.patch",
"merged_at": 1573768074000
} |
https://api.github.com/repos/huggingface/transformers/issues/1791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1791/comments | https://api.github.com/repos/huggingface/transformers/issues/1791/events | https://github.com/huggingface/transformers/issues/1791 | 520,982,329 | MDU6SXNzdWU1MjA5ODIzMjk= | 1,791 | token indices sequence length is longer than the specified maximum sequence length | {
"login": "cswangjiawei",
"id": 33107884,
"node_id": "MDQ6VXNlcjMzMTA3ODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/33107884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cswangjiawei",
"html_url": "https://github.com/cswangjiawei",
"followers_url": "https://api.github.com/users/cswangjiawei/followers",
"following_url": "https://api.github.com/users/cswangjiawei/following{/other_user}",
"gists_url": "https://api.github.com/users/cswangjiawei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cswangjiawei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cswangjiawei/subscriptions",
"organizations_url": "https://api.github.com/users/cswangjiawei/orgs",
"repos_url": "https://api.github.com/users/cswangjiawei/repos",
"events_url": "https://api.github.com/users/cswangjiawei/events{/privacy}",
"received_events_url": "https://api.github.com/users/cswangjiawei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This means you're encoding a sequence that is larger than the max sequence the model can handle (which is 512 tokens). This is not an error but a warning; if you pass that sequence to the model it will crash as it cannot handle such a long sequence.\r\n\r\nYou can truncate the sequence: `seq = seq[:512]` or use the `max_length` tokenizer parameter so that it handles it on its own.",
"Thank you. I truncate the sequence and it worked. But I use the parameter `max_length` of the method \"encode\" of the class of Tokenizer , it do not works.",
"Hi, could you show me how you're using the `max_length` parameter?\r\n\r\nEdit:\r\n\r\nThe recommended way is to call the tokenizer directly instead of using the `encode` method, so the following is the recommended way of handling it:\r\n\r\n```py\r\nfrom transformers import GPT2Tokenizer\r\n\r\ntext = \"This is a sequence\"\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nx = tokenizer(text, truncation=True, max_length=2)\r\n\r\nprint(len(x)) # 2\r\n```\r\n\r\n--- \r\n\r\nPrevious answer:\r\n\r\nIf you use it as such it should truncate your sequences:\r\n\r\n```py\r\nfrom transformers import GPT2Tokenizer\r\n\r\ntext = \"This is a sequence\"\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\nx = tokenizer.encode(text, max_length=2)\r\n\r\nprint(len(x)) # 2\r\n```",
"I use `max_length` is as follows:\r\n```\r\nmodel_class, tokenizer_class, pretrained_weights = BertModel, BertTokenizer, 'bert-base-uncased'\r\ntokenizer = tokenizer_class.from_pretrained(pretrained_weights)\r\nmodel = model_class.from_pretrained(pretrained_weights)\r\n\r\ntext = \"After a morning of Thrift Store hunting, a friend and I were thinking of lunch, and he ... \" #this sentence is very long, Its length is more than 512. In order to save space, not all of them are shown here\r\n# text = tokenizer.tokenize(text)\r\n# if len(text) > 512:\r\n# text = text[:512]\r\n#text = \"After a morning of Thrift Store hunting, a friend and I were thinking of lunch\"\r\n\r\ntext = tokenizer.encode(text, add_special_tokens=True, max_length=10)\r\nprint(text)\r\nprint(len(text))\r\n```\r\nIt works. I previously set `max_length` to 512, just output the encoded list, so I didn't notice that the length has changed. But the warning still occurs:\r\n\r\n\r\n",
"Glad it works! Indeed, we should do something about this warning, it shouldn't appear when a max length is specified.",
"Thank you very much!",
"What if I need the sequence length to be longer than 512 (e.g., to retrieve the answer in a QA model)?",
"Hi @LukasMut this question might be better suited to Stack Overflow.",
"I have the same doubt as @LukasMut . Did you open a Stack Overflow question?",
"did you got the solution @LukasMut @paulogaspar ",
"Not really. All solutions point to using only the 512 tokens, and choosing what to place in those tokens (for example, picking which part of the text)",
"Having the same issue @paulogaspar any update on this? I'm having sequences with more than 512 tokens.",
"> Having the same issue @paulogaspar any update on this? I'm having sequences with more than 512 tokens.\r\n\r\nTake a look at my last answer, that's the point I'm at.",
"Also dealing with this issue and thought I'd post what's going through my head, correct me if I'm wrong but I think the maximum sequence length is determined when the model is first trained? In which case training a model with a larger sequence length is the solution? And I'm wondering if fine-tuning can be used to increase the sequence length.",
"Same question. What to do if text is long?",
"That's a research questions guys",
"This might help people looking for further details https://github.com/pytorch/fairseq/issues/1685 & https://github.com/google-research/bert/issues/27",
"Hi,\r\nThe question i have is almost the same.\r\nBert has some configuration options. As far as i know about transformers, it's not constrain by sequence length at all. \r\nCan I change the config to have more than 512 tokens ?",
"Most transformers are unfortunately completely constrained, which is the case for BERT (512 tokens max).\r\n\r\nIf you want to use transformers without being limited to a sequence length, you should take a look at Transformer-XL or XLNet.",
"@LysandreJik \r\n\r\nI thought [XLNet](https://arxiv.org/pdf/1906.08237.pdf) has a max length of 512 as well. \r\n\r\n[Transformer-XL](https://arxiv.org/pdf/1906.08237.pdf) is still is a mystery to me because it seems like the length is still 512 for downstream tasks, unlike language modeling (pre-training).\r\n\r\nPlease let me know if my understanding is incorrect.\r\n\r\nThanks!",
"XLNet was pre-trained/fine-tuned with a maximum length of 512, indeed. However, the model is not limited to such a length:\r\n\r\n```py\r\nfrom transformers import XLNetLMHeadModel, XLNetTokenizer\r\n\r\ntokenizer = XLNetTokenizer.from_pretrained(\"xlnet-base-cased\")\r\nmodel = XLNetLMHeadModel.from_pretrained(\"xlnet-base-cased\")\r\n\r\nencoded = tokenizer.encode_plus(\"Alright, let's do this\" * 500, return_tensors=\"pt\")\r\nprint(encoded[\"input_ids\"].shape) # torch.Size([1, 3503])\r\nprint(model(**encoded)[0].shape) # torch.Size([1, 3503, 32000])\r\n```\r\n\r\nThe model is not limited to a specific length because it doesn't leverage absolute positional embeddings, instead leveraging the same relative positional embeddings that Transformer-XL used. Please note that since the model isn't trained on larger sequences thant 512, no results are guaranteed on larger sequences, even if the model can still handle them.",
"I was going to try this out, but after reading this out few times now, I still have no idea how I'm supposed to truncate the token stream for the pipeline.\r\n\r\nI got some results by combining @cswangjiawei 's advice of running the tokenizer, but it returns a truncated sequence that is slightly longer than the limit I set.\r\n\r\nOtherwise the results are good, although they come out slow and I may have to figure how to activate cuda on py torch.\r\n\r\nUpdate: There is an article that shows how to run the summarizer on large texts, I got it to work with this one: https://www.thepythoncode.com/article/text-summarization-using-huggingface-transformers-python",
"> What if I need the sequence length to be longer than 512 (e.g., to retrieve the answer in a QA model)?\r\n\r\nYou can go for BigBird as it takes a input token size of 4096 tokens(but can take upto 16K size)",
"Let me help with what I have understood. Correct me if I am wrong. The reason you can't use sequence length more than max_length is because of the positional encoding. Let's have a look at the positional encoding in the original Transformer paper\r\n\r\n\r\nSo the _pos_ in the formula is the index of the words, and they have set 10000 as the scale to cover the usual length of most of the sentences. Now, if you look at the visualization of these functions, you will notice until the _pos_ value is less than 10000 we will get a unique temporal representation of each word. But once it's length is more than 10000 representation won't be unique for each word (e.g. 1st and 10001 will have the same representation). So if max_length = scale (512 as discussed here) and sequence_length > max_length positional encoding will not work.\r\nI didn't check what scale value (you can check it by yourself) BERT uses, but probably this may be the reason.\r\n\r\n.",
"> > What if I need the sequence length to be longer than 512 (e.g., to retrieve the answer in a QA model)?\r\n> \r\n> You can go for BigBird as it takes a input token size of 4096 tokens(but can take upto 16K size)\r\n\r\nThe code and weights for BigBird haven't been published yet, am I right?",
"> > > What if I need the sequence length to be longer than 512 (e.g., to retrieve the answer in a QA model)?\r\n> > \r\n> > \r\n> > You can go for BigBird as it takes a input token size of 4096 tokens(but can take upto 16K size)\r\n> \r\n> The code and weights for BigBird haven't been published yet, am I right?\r\n\r\nYes and in that case you have Longformers, Reformers which can handle the long sequences.",
"My model was pretrained with max_seq_len of 128 and max_posi_embeddings of 512 using the original BERT code release.\r\nI am having the same problem here. I have tried a couple of fixes, but none of them is working for me.\r\n\r\n```\r\nexport MAX_LENGTH=120\r\nexport MODEL=\"./bert-1.5M\"\r\n\r\npython3 preprocess.py ./data/train.txt $MODEL $MAX_LENGTH > train.txt\r\npython3 preprocess.py ./data/dev.txt $MODEL $MAX_LENGTH > dev.txt\r\npython3 preprocess.py ./data/test.txt $MODEL $MAX_LENGTH > test.txt\r\n```\r\n\r\n```export MAX_LENGTH=512 # I have tried 128, 256 \r\n\r\nI am running run_ner_old.py file.\r\n\r\nCan anyone help.",
"> I use `max_length` is as follows:\r\n> \r\n> ```\r\n> model_class, tokenizer_class, pretrained_weights = BertModel, BertTokenizer, 'bert-base-uncased'\r\n> tokenizer = tokenizer_class.from_pretrained(pretrained_weights)\r\n> model = model_class.from_pretrained(pretrained_weights)\r\n> \r\n> text = \"After a morning of Thrift Store hunting, a friend and I were thinking of lunch, and he ... \" #this sentence is very long, Its length is more than 512. In order to save space, not all of them are shown here\r\n> # text = tokenizer.tokenize(text)\r\n> # if len(text) > 512:\r\n> # text = text[:512]\r\n> #text = \"After a morning of Thrift Store hunting, a friend and I were thinking of lunch\"\r\n> \r\n> text = tokenizer.encode(text, add_special_tokens=True, max_length=10)\r\n> print(text)\r\n> print(len(text))\r\n> ```\r\n> \r\n> It works. I previously set `max_length` to 512, just output the encoded list, so I didn't notice that the length has changed. But the warning still occurs:\r\n> \r\n\r\nHow to apply this method in csv file i have csv file \"data.csv\" in 2nd column it contains news that to be pass in bert of 512 length ",
"I am trying to create an arbitrary length text summarizer using Huggingface; should I just partition the input text to the max model length, summarize each part to, say, half its original length, and repeat this procedure as long as necessary to reach the target length for the whole sequence? \r\n\r\nIt feels to me that this is quite a general problem. Shouldn't this be supported as part of the `pipeline` API itself? (I can do a PR if it's a good fit for the API.)",
"Not sure if this is the best approach, but I did something like this and it solves the problem ^\r\n\r\n```python\r\nsummarizer = pipeline(\"summarization\", model=\"facebook/bart-large-cnn\")\r\ndef summarize_text(text: str, max_len: int) -> str:\r\n try:\r\n summary = summarizer(text, max_length=max_len, min_length=10, do_sample=False)\r\n return summary[0][\"summary_text\"]\r\n except IndexError as ex:\r\n logging.warning(\"Sequence length too large for model, cutting text in half and calling again\")\r\n return summarize_text(text=text[:(len(text) // 2)], max_len=max_len//2) + summarize_text(text=text[(len(text) // 2):], max_len=max_len//2)\r\n```"
] | 1,573 | 1,707 | 1,575 | NONE | null | ## ❓ Questions & Help
When I use BERT, the warning "token indices sequence length is longer than the specified maximum sequence length for this model (1017 > 512)" appears. How can I resolve it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1791/reactions",
"total_count": 54,
"+1": 37,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 17
} | https://api.github.com/repos/huggingface/transformers/issues/1791/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1790/comments | https://api.github.com/repos/huggingface/transformers/issues/1790/events | https://github.com/huggingface/transformers/issues/1790 | 520,891,030 | MDU6SXNzdWU1MjA4OTEwMzA= | 1,790 | transformers vs pytorch_pretrained_bert giving different scores for BertForNextSentencePrediction | {
"login": "AjitAntony",
"id": 46282348,
"node_id": "MDQ6VXNlcjQ2MjgyMzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/46282348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AjitAntony",
"html_url": "https://github.com/AjitAntony",
"followers_url": "https://api.github.com/users/AjitAntony/followers",
"following_url": "https://api.github.com/users/AjitAntony/following{/other_user}",
"gists_url": "https://api.github.com/users/AjitAntony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AjitAntony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjitAntony/subscriptions",
"organizations_url": "https://api.github.com/users/AjitAntony/orgs",
"repos_url": "https://api.github.com/users/AjitAntony/repos",
"events_url": "https://api.github.com/users/AjitAntony/events{/privacy}",
"received_events_url": "https://api.github.com/users/AjitAntony/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, since `pytorch_pretrained_BERT`, many breaking changes have happened, two of which are causing confusion in your snippet:\r\n\r\n- The order of arguments in the forward call has been slightly changed [(v2.0.0)](https://github.com/huggingface/transformers/releases/tag/v2.0.0)\r\n- The models now always return tuples [(v1.0.0)](https://github.com/huggingface/transformers/releases/tag/v1.0.0).\r\n\r\nThe attention mask and token type ids order has been changed for the forward method to better respect the order of importance, which is important when compiling a model with torchscript.\r\n\r\nUpdate the forward call of your model as such to obtain identical results on both:\r\n\r\n```py\r\nprediction = model(tokens_tensor, token_type_ids=segments_tensors)\r\n```\r\n\r\nTo prevent further breaking changes from affecting your workflow, we recommend using named arguments when calling different methods, like it is done in the aforementioned snippet.",
"@LysandreJik thanks for the information .Dose this apply for all the bert models or only for the next sentence prediction alone ?",
"I applies to all models."
] | 1,573 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
```
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
BertNSP = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

text1 = "How old are you?"
text2 = "The Eiffel Tower is in Paris"
text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"]
text2_toks = tokenizer.tokenize(text2) + ["[SEP]"]
text = text1_toks + text2_toks

indexed_tokens = tokenizer.convert_tokens_to_ids(text)
segments_ids = [0] * len(text1_toks) + [1] * len(text2_toks)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])

BertNSP.eval()
prediction = BertNSP(tokens_tensor, segments_tensors)
prediction = prediction[0]  # the model returns a tuple; take the NSP logits
print(prediction)

softmax = torch.nn.Softmax(dim=1)
prediction_sm = softmax(prediction)
print(prediction_sm)
```

output:

```
tensor([[ 2.1772, -0.8097]], grad_fn=<AddmmBackward>)
tensor([[0.9923, 0.0077]], grad_fn=<SoftmaxBackward>)
```
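For reference, here is the same transformers forward call written with an explicit keyword argument. This is only a sketch on my part — the idea that the second positional argument might be interpreted differently from what I intend is my assumption and is not something I have verified:

```
# Hypothetical variant: pass the segment ids by name so they cannot be
# mistaken for a different argument (e.g. an attention mask).
prediction = BertNSP(tokens_tensor, token_type_ids=segments_tensors)
prediction = prediction[0]  # transformers returns a tuple; take the NSP logits
print(torch.nn.Softmax(dim=1)(prediction))
```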
Now, trying the same with pytorch_pretrained_bert:

```
!pip install pytorch_pretrained_bert
import torch
import pytorch_pretrained_bert
from pytorch_pretrained_bert import BertTokenizer, BertAdam, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

text1 = "How old are you?"
text2 = "The Eiffel Tower is in Paris"
text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"]
text2_toks = tokenizer.tokenize(text2) + ["[SEP]"]
text = text1_toks + text2_toks
print(text)

indexed_tokens = tokenizer.convert_tokens_to_ids(text)
segments_ids = [0] * len(text1_toks) + [1] * len(text2_toks)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
print(indexed_tokens)
print(segments_ids)

model.eval()
prediction = model(tokens_tensor, segments_tensors)
print(prediction)

softmax = torch.nn.Softmax(dim=1)
prediction_sm = softmax(prediction)
print(prediction_sm)
```

output:

```
tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)
tensor([[4.1673e-04, 9.9958e-01]], grad_fn=<SoftmaxBackward>)
```
1. Which is the correct score here? Is the softmax score from the transformers model correct, or the one from the pytorch_pretrained_bert model?
2. Also, the output of the model in pytorch_pretrained_bert is a tensor, but the output of the model in transformers is a tuple. Why is it like this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1790/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1790/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1789/comments | https://api.github.com/repos/huggingface/transformers/issues/1789/events | https://github.com/huggingface/transformers/issues/1789 | 520,872,507 | MDU6SXNzdWU1MjA4NzI1MDc= | 1,789 | BertForMultipleChoice QuickTour issue with weights? | {
"login": "ChrisPalmerNZ",
"id": 11279395,
"node_id": "MDQ6VXNlcjExMjc5Mzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/11279395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisPalmerNZ",
"html_url": "https://github.com/ChrisPalmerNZ",
"followers_url": "https://api.github.com/users/ChrisPalmerNZ/followers",
"following_url": "https://api.github.com/users/ChrisPalmerNZ/following{/other_user}",
"gists_url": "https://api.github.com/users/ChrisPalmerNZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChrisPalmerNZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChrisPalmerNZ/subscriptions",
"organizations_url": "https://api.github.com/users/ChrisPalmerNZ/orgs",
"repos_url": "https://api.github.com/users/ChrisPalmerNZ/repos",
"events_url": "https://api.github.com/users/ChrisPalmerNZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChrisPalmerNZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The same error occurs to me too. I write a small check code for BertForMultipleChoice and it works as expected (taken from the [documentation](https://github.com/huggingface/transformers/blob/albert/transformers/modeling_bert.py) - rows 945-951). Here the code I wrote.\r\n```\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertForMultipleChoice.from_pretrained('bert-base-uncased', output_hidden_states=True, output_attentions=True)\r\nchoices = [\"Hello, my dog is cute\", \"Hello, my cat is amazing\"]\r\ninput_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0) # Batch size 1, 2 choices\r\nlabels = torch.tensor(1).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids, labels=labels)\r\nloss, classification_scores, hidden_states, attentions = outputs\r\n```\r\n\r\n> ## Bug\r\n> Model I am using (BertForMultipleChoice):\r\n> \r\n> Language I am using the model on (English.):\r\n> \r\n> The problem arise when using:\r\n> \r\n> * [x] the official example scripts:\r\n> Arises when running through the last piece of example code found here:\r\n> https://github.com/huggingface/transformers#quick-tour\r\n> * [ ] my own modified scripts: (give details)\r\n> \r\n> The tasks I am working on is:\r\n> \r\n> * [ ] an official GLUE/SQUaD task: (give the name)\r\n> * [ ] my own task or dataset: (give details)\r\n> \r\n> ## To Reproduce\r\n> Steps to reproduce the behavior:\r\n> \r\n> 1. Run the sample code from the last loop of the Quick Tour\r\n> 2. Need to supply directory names, see my code (below) for how I did that...\r\n> 3. When BertForMultipleChoice runs the line `reshaped_logits = logits.view(-1, num_choices)` in modeling_bert.py we get a runtime error `RuntimeError: shape '[-1, 16]' is invalid for input of size 1`\r\n> \r\n> ```\r\n> # Each architecture is provided with several class for fine-tuning on down-stream tasks, e.g.\r\n> BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction,\r\n> BertForSequenceClassification, BertForMultipleChoice, BertForTokenClassification,\r\n> BertForQuestionAnswering]\r\n> ```\r\n> \r\n> ```\r\n> # All the classes for an architecture can be initiated from pretrained weights for this architecture\r\n> # Note that additional weights added for fine-tuning are only initialized\r\n> # and need to be trained on the down-stream task\r\n> pretrained_weights = 'bert-base-uncased'\r\n> tokenizer = BertTokenizer.from_pretrained(pretrained_weights)\r\n> for model_class in BERT_MODEL_CLASSES:\r\n> \r\n> print(\"Processing\", model_class.__name__, \"...\")\r\n> \r\n> # Store class name as target directory\r\n> model_dir_name = model_class.__name__+\"/\"\r\n> \r\n> # Load pretrained model/tokenizer\r\n> model = model_class.from_pretrained(pretrained_weights)\r\n> \r\n> # Models can return full list of hidden-states & attentions weights at each layer\r\n> model = model_class.from_pretrained(pretrained_weights,\r\n> output_hidden_states=True,\r\n> output_attentions=True)\r\n> input_ids = torch.tensor([tokenizer.encode(\"Let's see all hidden-states and attentions on this text\")])\r\n> all_hidden_states, all_attentions = model(input_ids)[-2:]\r\n> \r\n> # Models are compatible with Torchscript\r\n> model = model_class.from_pretrained(pretrained_weights, torchscript=True)\r\n> traced_model = torch.jit.trace(model, (input_ids,))\r\n> \r\n> save_directory = 'BERT_test/'+ model_dir_name\r\n> if not os.path.isdir(save_directory):\r\n> os.mkdir(save_directory)\r\n> \r\n> # Simple 
serialization for models and tokenizers\r\n> model.save_pretrained(save_directory) # save\r\n> model = model_class.from_pretrained(save_directory) # re-load\r\n> tokenizer.save_pretrained(save_directory) # save\r\n> tokenizer = BertTokenizer.from_pretrained(save_directory) # re-load\r\n> \r\n> # SOTA examples for GLUE, SQUAD, text generation...\r\n> ```\r\n> \r\n> The error:\r\n> \r\n> ```\r\n> I1111 20:47:30.383128 21676 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin from cache at C:\\Users\\User\\.cache\\torch\\transformers\\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157\r\n> I1111 20:47:33.363116 21676 modeling_utils.py:453] Weights of BertForMultipleChoice not initialized from pretrained model: ['classifier.weight', 'classifier.bias']\r\n> I1111 20:47:33.365122 21676 modeling_utils.py:456] Weights from pretrained model not used in BertForMultipleChoice: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']\r\n> ---------------------------------------------------------------------------\r\n> RuntimeError Traceback (most recent call last)\r\n> <ipython-input-10-da6597bd484d> in <module>()\r\n> 19 output_attentions=True)\r\n> 20 input_ids = torch.tensor([tokenizer.encode(\"Let's see all hidden-states and attentions on this text\")])\r\n> ---> 21 all_hidden_states, all_attentions = model(input_ids)[-2:]\r\n> 22 \r\n> 23 # Models are compatible with Torchscript\r\n> \r\n> G:\\Anaconda3\\envs\\pytorch1\\lib\\site-packages\\torch\\nn\\modules\\module.py in __call__(self, *input, **kwargs)\r\n> 539 result = self._slow_forward(*input, **kwargs)\r\n> 540 else:\r\n> --> 541 result = self.forward(*input, **kwargs)\r\n> 542 for hook in self._forward_hooks.values():\r\n> 543 hook_result = hook(self, input, result)\r\n> \r\n> g:\\deeplearning\\huggingface\\transformers\\transformers\\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels)\r\n> 1096 pooled_output = self.dropout(pooled_output)\r\n> 1097 logits = self.classifier(pooled_output)\r\n> -> 1098 reshaped_logits = logits.view(-1, num_choices)\r\n> 1099 \r\n> 1100 outputs = (reshaped_logits,) + outputs[2:] # add hidden states and attention if they are here\r\n> \r\n> RuntimeError: shape '[-1, 16]' is invalid for input of size 1\r\n> ```\r\n> \r\n> ## Expected behavior\r\n> Should run without error!\r\n> \r\n> ## Environment\r\n> * OS: Windows 10\r\n> * Python version: 3.6.6\r\n> * PyTorch version: 1.3.0\r\n> * PyTorch Transformers version (or branch): 2.1.1\r\n> * Using GPU ? Yes\r\n> * Distributed of parallel setup ? No\r\n> * Any other relevant information:\r\n> \r\n> ## Additional context",
"Thanks! This works for me too. But since the Quick Tour is an example on the github repo I believe it would be a good idea to fix it there... Is there anything you can see wrong with the code?",
"Re-opened as I believe we should have the sample code fixed - *and* like [this issue](https://github.com/huggingface/transformers/issues/1787) it should work with Pytorch >= 1.0.0 or at least Pytorch >= 1.2.0 not just with >= 1.3.0",
"Indeed thanks, it's fixed now on master",
"Thanks - I note that it's been fixed by removing the use of BertForMultipleChoice in the evaluation section!"
] | 1,573 | 1,573 | 1,573 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (BertForMultipleChoice):
Language I am using the model on (English.):
The problem arise when using:
* [x] the official example scripts:
Arises when running through the last piece of example code found here:
https://github.com/huggingface/transformers#quick-tour
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Run the sample code from the last loop of the Quick Tour
2. Need to supply directory names, see my code (below) for how I did that...
3. When BertForMultipleChoice runs the line `reshaped_logits = logits.view(-1, num_choices)` in modeling_bert.py we get a runtime error `RuntimeError: shape '[-1, 16]' is invalid for input of size 1`
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
```
# Each architecture is provided with several class for fine-tuning on down-stream tasks, e.g.
BERT_MODEL_CLASSES = [BertModel, BertForPreTraining, BertForMaskedLM, BertForNextSentencePrediction,
BertForSequenceClassification, BertForMultipleChoice, BertForTokenClassification,
BertForQuestionAnswering]
```
```
# All the classes for an architecture can be initiated from pretrained weights for this architecture
# Note that additional weights added for fine-tuning are only initialized
# and need to be trained on the down-stream task
pretrained_weights = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
for model_class in BERT_MODEL_CLASSES:
print("Processing", model_class.__name__, "...")
# Store class name as target directory
model_dir_name = model_class.__name__+"/"
# Load pretrained model/tokenizer
model = model_class.from_pretrained(pretrained_weights)
# Models can return full list of hidden-states & attentions weights at each layer
model = model_class.from_pretrained(pretrained_weights,
output_hidden_states=True,
output_attentions=True)
input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")])
all_hidden_states, all_attentions = model(input_ids)[-2:]
# Models are compatible with Torchscript
model = model_class.from_pretrained(pretrained_weights, torchscript=True)
traced_model = torch.jit.trace(model, (input_ids,))
save_directory = 'BERT_test/'+ model_dir_name
if not os.path.isdir(save_directory):
os.mkdir(save_directory)
# Simple serialization for models and tokenizers
model.save_pretrained(save_directory) # save
model = model_class.from_pretrained(save_directory) # re-load
tokenizer.save_pretrained(save_directory) # save
tokenizer = BertTokenizer.from_pretrained(save_directory) # re-load
# SOTA examples for GLUE, SQUAD, text generation...
```
The error:
```
I1111 20:47:30.383128 21676 modeling_utils.py:383] loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin from cache at C:\Users\User\.cache\torch\transformers\aa1ef1aede4482d0dbcd4d52baad8ae300e60902e88fcb0bebdec09afd232066.36ca03ab34a1a5d5fa7bc3d03d55c4fa650fed07220e2eeebc06ce58d0e9a157
I1111 20:47:33.363116 21676 modeling_utils.py:453] Weights of BertForMultipleChoice not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
I1111 20:47:33.365122 21676 modeling_utils.py:456] Weights from pretrained model not used in BertForMultipleChoice: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-da6597bd484d> in <module>()
19 output_attentions=True)
20 input_ids = torch.tensor([tokenizer.encode("Let's see all hidden-states and attentions on this text")])
---> 21 all_hidden_states, all_attentions = model(input_ids)[-2:]
22
23 # Models are compatible with Torchscript
G:\Anaconda3\envs\pytorch1\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
g:\deeplearning\huggingface\transformers\transformers\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels)
1096 pooled_output = self.dropout(pooled_output)
1097 logits = self.classifier(pooled_output)
-> 1098 reshaped_logits = logits.view(-1, num_choices)
1099
1100 outputs = (reshaped_logits,) + outputs[2:] # add hidden states and attention if they are here
RuntimeError: shape '[-1, 16]' is invalid for input of size 1
```
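For comparison, here is a minimal sketch of the input shape I believe `BertForMultipleChoice` expects (adapted from its docstring example; the sentences and label are placeholders, and I have not verified this against the loop above):

```
import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMultipleChoice.from_pretrained('bert-base-uncased')

choices = ["Hello, my dog is cute", "Hello, my cat is amazing"]
# input_ids must be (batch_size, num_choices, sequence_length), not (batch_size, sequence_length)
input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0)  # batch size 1, 2 choices
labels = torch.tensor(1).unsqueeze(0)  # batch size 1

loss, classification_scores = model(input_ids, labels=labels)[:2]
```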
## Expected behavior
Should run without error!
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Windows 10
* Python version: 3.6.6
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1789/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1789/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1788/comments | https://api.github.com/repos/huggingface/transformers/issues/1788/events | https://github.com/huggingface/transformers/issues/1788 | 520,871,682 | MDU6SXNzdWU1MjA4NzE2ODI= | 1,788 | BertForNextSentencePrediction is giving high score for non similar sentences . | {
"login": "AjitAntony",
"id": 46282348,
"node_id": "MDQ6VXNlcjQ2MjgyMzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/46282348?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AjitAntony",
"html_url": "https://github.com/AjitAntony",
"followers_url": "https://api.github.com/users/AjitAntony/followers",
"following_url": "https://api.github.com/users/AjitAntony/following{/other_user}",
"gists_url": "https://api.github.com/users/AjitAntony/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AjitAntony/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjitAntony/subscriptions",
"organizations_url": "https://api.github.com/users/AjitAntony/orgs",
"repos_url": "https://api.github.com/users/AjitAntony/repos",
"events_url": "https://api.github.com/users/AjitAntony/events{/privacy}",
"received_events_url": "https://api.github.com/users/AjitAntony/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As explained in #1790, you're passing the `token_type_ids` as the attention mask. Change the model forward pass as such:\r\n\r\n```py\r\nprediction = BertNSP(tokens_tensor, token_type_ids=segments_tensors)\r\n```\r\nYour results will be more accurate:\r\n```py\r\ntensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)\r\ntensor([[4.1673e-04, 9.9958e-01]], grad_fn=<SoftmaxBackward>)\r\n```",
"@LysandreJik thanks for the information .Dose this apply for all the bert models or only for the next sentence prediction alone ?",
"It would be better to use named arguments in all the models. They are bound to change with breaking changes as new versions come up.\r\n\r\nI recommend specifying the arguments' names when possible.",
"> As explained in #1790, you're passing the `token_type_ids` as the attention mask. Change the model forward pass as such:\r\n> \r\n> ```python\r\n> prediction = BertNSP(tokens_tensor, token_type_ids=segments_tensors)\r\n> ```\r\n> \r\n> Your results will be more accurate:\r\n> \r\n> ```python\r\n> tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)\r\n> tensor([[4.1673e-04, 9.9958e-01]], grad_fn=<SoftmaxBackward>)\r\n> ```\r\n\r\nHi,\r\n@LysandreJik Can you explain me what are the scores for? \r\nWhat is the 4.1673e-04 used for and what is the 9.9958e-01 used for?\r\nWhich one of them says that: X% sure that A sentence is followed by B sentence ? And what is the other for?\r\n\r\nThanks!",
"I'm having the same problem, and I think I've followed all the directions mentioned above.\r\n\r\ndef line_continues(model, tokenizer, line1, line2):\r\n line1 = [tokenizer.cls_token] + tokenizer.tokenize(line1) + [tokenizer.sep_token]\r\n line2 = tokenizer.tokenize(line2) + [tokenizer.sep_token]\r\n input_idx = tokenizer.convert_tokens_to_ids(line1 + line2)\r\n segment_idx = [0]*len(line1) + [1]*len(line2)\r\n tokens_tensor = torch.tensor([input_idx])\r\n segment_tensor = torch.tensor([segment_idx])\r\n predictions = model(tokens_tensor, token_type_ids=segment_tensor)\r\n probs = F.softmax(predictions[0], dim=1)\r\n return probs[0][0]\r\n\r\nmodel = BertForNextSentencePrediction.from_pretrained('bert-base-cased')\r\nmodel.eval()\r\n\r\n\r\nrandom sentences:\r\n\r\nline1='these articles tell us about where leadership communication is going and where it'\r\nline2='issues gave us the chance to engage with many well-established and emerging experts'\r\nprob = line_continues(model, tokenizer, line1, line2)\r\n0.9993\r\n\r\n\r\ncontiguous sentences:\r\n\r\nline1='these articles tell us about where leadership communication is going and where it'\r\nline2='needs to go in addition to using the model.'\r\nprob = line_continues(model, tokenizer, line1, line2)\r\n0.9991\r\n\r\n\r\nThanks!",
"I am also experiencing this kind of issue. After experimenting with some sequence pairs, I think the relation between two sequence should be zero (and also non-sensical). For example:\r\n```\r\nSent1: Paris is the capital of France.\r\nSent2: Cow is a domestic animal. \r\n\r\n```\r\nThis level of stupid sequence.\r\nif there is any slightest connection on word level, it outputs that it is 90% sure that two sequence is coherent. \r\nIn your example, may be `leadership` and `experts` are two near words in semantic space. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This is still a problem and should be re-opened. From the documentation: https://huggingface.co/transformers/model_doc/bert.html#bertfornextsentenceprediction\r\n\r\n```\r\n# documentation example - good\r\nIn Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced.\r\nThe sky is blue due to the shorter wavelength of blue light.\r\nlogits[0, 0]: -3.072946548461914, logits[0, 1]: 5.905644416809082, is random: True\r\n\r\n# my own example - ???\r\nI took my money to the bank on 23rd street\r\nMy monkey was cake and cockroaches have radiation\r\nlogits[0, 0]: 3.0128183364868164, logits[0, 1]: -1.984398365020752, is random: False\r\n```\r\n\r\nI'm not sure how to interpret this.",
"I think that it would also depend on train data on which the BERT model was trained on (from the paper \" For the pre-training corpus we use the BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words).\"[1]. \r\n\r\n[1] - https://arxiv.org/pdf/1810.04805.pdf "
] | 1,573 | 1,643 | 1,588 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
```
import torch
from transformers import BertTokenizer, BertModel, BertForMaskedLM, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
BertNSP = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')

text1 = "How old are you?"
text2 = "The Eiffel Tower is in Paris"
text1_toks = ["[CLS]"] + tokenizer.tokenize(text1) + ["[SEP]"]
text2_toks = tokenizer.tokenize(text2) + ["[SEP]"]
text = text1_toks + text2_toks
print(text)

indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks)
segments_ids = [0] * len(text1_toks) + [1] * len(text2_toks)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
print(indexed_tokens)
print(segments_ids)

BertNSP.eval()
prediction = BertNSP(tokens_tensor, segments_tensors)
prediction = prediction[0]  # the model returns a tuple; take the NSP logits
print(prediction)

softmax = torch.nn.Softmax(dim=1)
prediction_sm = softmax(prediction)
print(prediction_sm)
```

Output of `prediction`:

```
tensor([[ 2.1772, -0.8097]], grad_fn=<AddmmBackward>)
```

Output of `prediction_sm`:

```
tensor([[0.9923, 0.0077]], grad_fn=<SoftmaxBackward>)
```
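For reference, this is how I am reading the two columns. The label mapping — index 0 meaning "text2 follows text1" and index 1 meaning "text2 is a random sentence" — is my assumption taken from the NSP head's documentation, so please correct me if it is wrong:

```
probs = prediction_sm[0]
print("P(text2 follows text1):", float(probs[0]))  # assumed meaning of index 0
print("P(text2 is random):", float(probs[1]))      # assumed meaning of index 1
```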
Why is the score still as high as 0.9923 even after applying softmax? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1788/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1788/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1787/comments | https://api.github.com/repos/huggingface/transformers/issues/1787/events | https://github.com/huggingface/transformers/issues/1787 | 520,805,250 | MDU6SXNzdWU1MjA4MDUyNTA= | 1,787 | Invalid argument with CTRLModel | {
"login": "ChrisPalmerNZ",
"id": 11279395,
"node_id": "MDQ6VXNlcjExMjc5Mzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/11279395?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisPalmerNZ",
"html_url": "https://github.com/ChrisPalmerNZ",
"followers_url": "https://api.github.com/users/ChrisPalmerNZ/followers",
"following_url": "https://api.github.com/users/ChrisPalmerNZ/following{/other_user}",
"gists_url": "https://api.github.com/users/ChrisPalmerNZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChrisPalmerNZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChrisPalmerNZ/subscriptions",
"organizations_url": "https://api.github.com/users/ChrisPalmerNZ/orgs",
"repos_url": "https://api.github.com/users/ChrisPalmerNZ/repos",
"events_url": "https://api.github.com/users/ChrisPalmerNZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChrisPalmerNZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> ## Bug\r\n> Model I am using (CTRLModel):\r\n> \r\n> Language I am using the model on (English):\r\n> \r\n> The problem arise when using:\r\n> \r\n> * [X ] the official example scripts: (give details)\r\n> \r\n> The tasks I am working on is:\r\n> \r\n> * [X ] the Quick Tour for transformers\r\n> \r\n> ## To Reproduce\r\n> Steps to reproduce the behavior:\r\n> \r\n> 1. run the quick tour as found here https://github.com/huggingface/transformers#quick-tour\r\n> 2. in the loop `for model_class, tokenizer_class, pretrained_weights in MODELS` it will crash out when processing the CTRLModel\r\n> \r\n> ```\r\n> ---------------------------------------------------------------------------\r\n> OSError Traceback (most recent call last)\r\n> <ipython-input-5-25349833af91> in <module>\r\n> 5 \r\n> 6 tokenizer = tokenizer_class.from_pretrained(pretrained_weights)\r\n> ----> 7 model = model_class.from_pretrained(pretrained_weights)\r\n> 8 \r\n> 9 # Encode text\r\n> \r\n> g:\\deeplearning\\huggingface\\transformers\\transformers\\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n> 389 \r\n> 390 if state_dict is None and not from_tf:\r\n> --> 391 state_dict = torch.load(resolved_archive_file, map_location='cpu')\r\n> 392 \r\n> 393 missing_keys = []\r\n> \r\n> G:\\Anaconda3\\envs\\fastai\\lib\\site-packages\\torch\\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)\r\n> 384 f = f.open('rb')\r\n> 385 try:\r\n> --> 386 return _load(f, map_location, pickle_module, **pickle_load_args)\r\n> 387 finally:\r\n> 388 if new_fd:\r\n> \r\n> G:\\Anaconda3\\envs\\fastai\\lib\\site-packages\\torch\\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)\r\n> 578 for key in deserialized_storage_keys:\r\n> 579 assert key in deserialized_objects\r\n> --> 580 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)\r\n> 581 if offset is not None:\r\n> 582 offset = f.tell()\r\n> \r\n> OSError: [Errno 22] Invalid argument\r\n> ```\r\n> \r\n> ## What should have happened\r\n> Should have run to completion...\r\n> \r\n> ## Environment\r\n> * OS:\r\n> * Python version: 3.3.9\r\n> * PyTorch version: 1.2.0+cu92\r\n> * PyTorch Transformers version (or branch): 2.1.1\r\n> * Using GPU ? Yes\r\n> * Distributed of parallel setup ? no\r\n> * Any other relevant information:\r\n> \r\n> ## Additional context\r\n\r\nCan you test the same code with Python version >= 3.5 and with PyTorch version 1.3.0?",
"Sure, l'll get back to you... but I realize that I had a typo with my Python, its 3.6.9, which I've corrected. \r\nAdditionally, I had compiled this with `pip install -e .` from a fresh clone of the repo yesterday",
"OK, under Pytorch 1.3.0 it seems good - I got the following output (which I wasn't getting from 1.2.0):\r\n```\r\nI1111 20:26:06.461704 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-vocab.json from cache at C:\\Users\\User\\.cache\\torch\\transformers\\a858ad854d3847b02da3aac63555142de6a05f2a26d928bb49e881970514e186.285c96a541cf6719677cfb634929022b56b76a0c9a540186ba3d8bbdf02bca42\r\nI1111 20:26:06.464703 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-merges.txt from cache at C:\\Users\\User\\.cache\\torch\\transformers\\aa2c569e6648690484ade28535a8157aa415f15202e84a62e82cc36ea0c20fa9.26153bf569b71aaf15ae54be4c1b9254dbeff58ca6fc3e29468c4eed078ac142\r\nI1111 20:26:07.624670 21676 configuration_utils.py:152] loading configuration file https://storage.googleapis.com/sf-ctrl/pytorch/ctrl-config.json from cache at C:\\Users\\User\\.cache\\torch\\transformers\\d6492ca334c2a4e079f43df30956acf935134081b2b3844dc97457be69b623d0.1ebc47eb44e70492e0c20494a084f108332d20fea7fe5ad408ef5e7a8f2baef4\r\nI1111 20:26:07.628657 21676 configuration_utils.py:169] Model config {\r\n \"attn_pdrop\": 0.1,\r\n \"dff\": 8192,\r\n \"embd_pdrop\": 0.1,\r\n \"finetuning_task\": null,\r\n \"from_tf\": false,\r\n \"initializer_range\": 0.02,\r\n \"is_decoder\": false,\r\n \"layer_norm_epsilon\": 1e-06,\r\n \"n_ctx\": 512,\r\n \"n_embd\": 1280,\r\n \"n_head\": 16,\r\n \"n_layer\": 48,\r\n \"n_positions\": 50000,\r\n \"num_labels\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pruned_heads\": {},\r\n \"resid_pdrop\": 0.1,\r\n \"summary_activation\": null,\r\n \"summary_first_dropout\": 0.1,\r\n \"summary_proj_to_labels\": true,\r\n \"summary_type\": \"cls_index\",\r\n \"summary_use_proj\": true,\r\n \"torchscript\": false,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 246534\r\n}\r\n\r\nI1111 20:26:08.047282 21676 modeling_utils.py:383] loading weights file https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin from cache at C:\\Users\\User\\.cache\\torch\\transformers\\c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0\r\nW1111 20:31:14.310047 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification.\r\nW1111 20:31:14.650039 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification.\r\nW1111 20:31:14.653039 21676 tokenization_utils.py:923] This tokenizer does not make use of special tokens.\r\n```",
"[Here](https://colab.research.google.com/drive/1d23JwkEkS_s6XUY2_Me_MHnOrTHmNrqR#scrollTo=K4-yYInLt6YP) you can view a Google Colab written by me and **it works as expected**. I think you can close this issue.\r\n\r\n> OK, under Pytorch 1.3.0 it seems good - I got the following output (which I wasn't getting from 1.2.0):\r\n> \r\n> ```\r\n> I1111 20:26:06.461704 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-vocab.json from cache at C:\\Users\\User\\.cache\\torch\\transformers\\a858ad854d3847b02da3aac63555142de6a05f2a26d928bb49e881970514e186.285c96a541cf6719677cfb634929022b56b76a0c9a540186ba3d8bbdf02bca42\r\n> I1111 20:26:06.464703 21676 tokenization_utils.py:374] loading file https://raw.githubusercontent.com/salesforce/ctrl/master/ctrl-merges.txt from cache at C:\\Users\\User\\.cache\\torch\\transformers\\aa2c569e6648690484ade28535a8157aa415f15202e84a62e82cc36ea0c20fa9.26153bf569b71aaf15ae54be4c1b9254dbeff58ca6fc3e29468c4eed078ac142\r\n> I1111 20:26:07.624670 21676 configuration_utils.py:152] loading configuration file https://storage.googleapis.com/sf-ctrl/pytorch/ctrl-config.json from cache at C:\\Users\\User\\.cache\\torch\\transformers\\d6492ca334c2a4e079f43df30956acf935134081b2b3844dc97457be69b623d0.1ebc47eb44e70492e0c20494a084f108332d20fea7fe5ad408ef5e7a8f2baef4\r\n> I1111 20:26:07.628657 21676 configuration_utils.py:169] Model config {\r\n> \"attn_pdrop\": 0.1,\r\n> \"dff\": 8192,\r\n> \"embd_pdrop\": 0.1,\r\n> \"finetuning_task\": null,\r\n> \"from_tf\": false,\r\n> \"initializer_range\": 0.02,\r\n> \"is_decoder\": false,\r\n> \"layer_norm_epsilon\": 1e-06,\r\n> \"n_ctx\": 512,\r\n> \"n_embd\": 1280,\r\n> \"n_head\": 16,\r\n> \"n_layer\": 48,\r\n> \"n_positions\": 50000,\r\n> \"num_labels\": 1,\r\n> \"output_attentions\": false,\r\n> \"output_hidden_states\": false,\r\n> \"output_past\": true,\r\n> \"pruned_heads\": {},\r\n> \"resid_pdrop\": 0.1,\r\n> \"summary_activation\": null,\r\n> \"summary_first_dropout\": 0.1,\r\n> \"summary_proj_to_labels\": true,\r\n> \"summary_type\": \"cls_index\",\r\n> \"summary_use_proj\": true,\r\n> \"torchscript\": false,\r\n> \"use_bfloat16\": false,\r\n> \"vocab_size\": 246534\r\n> }\r\n> \r\n> I1111 20:26:08.047282 21676 modeling_utils.py:383] loading weights file https://storage.googleapis.com/sf-ctrl/pytorch/seqlen256_v1.bin from cache at C:\\Users\\User\\.cache\\torch\\transformers\\c146cc96724f27295a0c3ada1fbb3632074adf87e9aef8269e44c9208787f8c8.b986347cbab65fa276683efbb9c2f7ee22552277bcf6e1f1166557ed0852fdf0\r\n> W1111 20:31:14.310047 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification.\r\n> W1111 20:31:14.650039 21676 tokenization_utils.py:936] This tokenizer does not make use of special tokens. Input is returned with no modification.\r\n> W1111 20:31:14.653039 21676 tokenization_utils.py:923] This tokenizer does not make use of special tokens.\r\n> ```",
"Is there any way to get this working correctly under Pytorch 1.2? I have some work I need to continue with in that version. It doesn't matter too much as I wasn't intending to use that model, but I guess it might matter to someone else...",
"Thanks very much for the Colab notebook - yes it works as expected there. I am now having an issue with running the last part of the Quick Tour, but I'll post that separately, and close this one!",
"I'm very honest with you: I don't have the exact answer to your notable question. I'm aware of the fact that PyTorch v1.3.0 has got many changes and improvements from PyTorch v1.2, as stated [here](url). Some changes have been made on the `torch.load()` method too.\r\n\r\n> Is there any way to get this working correctly under Pytorch 1.2? I have some work I need to continue with in that version. It doesn't matter too much as I wasn't intending to use that model, but I guess it might matter to someone else...",
"@ChrisPalmerNZ Would you care to re-open this issue?\r\nThere really should not be a bug with the intro sample code.",
"OK, re-opened - interesting to see what fixes this for Pytorch 1.2 - but I felt that @TheEdoardo93 was indicating that this will only work under Pytorch >= 1.3.0 ",
"Hi, you seem to be having problems when loading the CTRLModel in memory. Usually problems like this can occur if behind a firewall or if the file was corrupted. Could you try to load the model outside of the script, for example in a python console? The following lines should work:\r\n\r\n```py\r\nfrom transformers import CTRLModel\r\n\r\nmodel = CTRLModel.from_pretrained(\"ctrl\", force_download=True)\r\n```\r\n\r\nThe `force_download` option will re-download the file, this will confirm that the file being corrupted is not an issue.\r\n\r\n",
"Here the results:\r\n```\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import platform\r\n>>> platform.platform()\r\n'Linux-4.15.0-66-generic-x86_64-with-debian-buster-sid'\r\n>>> platform.python_version()\r\n'3.6.9'\r\n>>> import transformers\r\n>>> transformers.__version__\r\n'2.1.1'\r\n>>> import torch\r\n>>> torch.__version__\r\n'1.2.0'\r\n>>> from transformers import CTRLModel\r\n>>> model = CTRLModel.from_pretrained('ctrl', force_download=True)\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 611/611 [00:00<00:00, 259358.34B/s]\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6552025106/6552025106 [09:45<00:00, 11197418.01B/s]\r\n>>> CTRLModel\r\n<class 'transformers.modeling_ctrl.CTRLModel'>\r\n>>> \r\n```\r\n\r\n> Hi, you seem to be having problems when loading the CTRLModel in memory. Usually problems like this can occur if behind a firewall or if the file was corrupted. Could you try to load the model outside of the script, for example in a python console? The following lines should work:\r\n> \r\n> ```python\r\n> from transformers import CTRLModel\r\n> \r\n> model = CTRLModel.from_pretrained(\"ctrl\", force_download=True)\r\n> ```\r\n> \r\n> The `force_download` option will re-download the file, this will confirm that the file being corrupted is not an issue."
] | 1,573 | 1,575 | 1,575 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (CTRLModel):
Language I am using the model on (English):
The problem arise when using:
* [X ] the official example scripts: (give details)
The tasks I am working on is:
* [X ] the Quick Tour for transformers
## To Reproduce
Steps to reproduce the behavior:
1. run the quick tour as found here https://github.com/huggingface/transformers#quick-tour
2. in the loop `for model_class, tokenizer_class, pretrained_weights in MODELS` it will crash out when processing the CTRLModel
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-5-25349833af91> in <module>
5
6 tokenizer = tokenizer_class.from_pretrained(pretrained_weights)
----> 7 model = model_class.from_pretrained(pretrained_weights)
8
9 # Encode text
g:\deeplearning\huggingface\transformers\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
389
390 if state_dict is None and not from_tf:
--> 391 state_dict = torch.load(resolved_archive_file, map_location='cpu')
392
393 missing_keys = []
G:\Anaconda3\envs\fastai\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
384 f = f.open('rb')
385 try:
--> 386 return _load(f, map_location, pickle_module, **pickle_load_args)
387 finally:
388 if new_fd:
G:\Anaconda3\envs\fastai\lib\site-packages\torch\serialization.py in _load(f, map_location, pickle_module, **pickle_load_args)
578 for key in deserialized_storage_keys:
579 assert key in deserialized_objects
--> 580 deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
581 if offset is not None:
582 offset = f.tell()
OSError: [Errno 22] Invalid argument
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## What should have happened
Should have run to completion...
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:
* Python version: 3.3.9
[EDIT] - typo in the Python version above; it's 3.6.9.
* PyTorch version: 1.2.0+cu92
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ? Yes
* Distributed or parallel setup ? no
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1787/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1786/comments | https://api.github.com/repos/huggingface/transformers/issues/1786/events | https://github.com/huggingface/transformers/issues/1786 | 520,708,585 | MDU6SXNzdWU1MjA3MDg1ODU= | 1,786 | a BertForMaskedLM.from_pretrained error | {
"login": "zhujun5164",
"id": 49580602,
"node_id": "MDQ6VXNlcjQ5NTgwNjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/49580602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhujun5164",
"html_url": "https://github.com/zhujun5164",
"followers_url": "https://api.github.com/users/zhujun5164/followers",
"following_url": "https://api.github.com/users/zhujun5164/following{/other_user}",
"gists_url": "https://api.github.com/users/zhujun5164/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhujun5164/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhujun5164/subscriptions",
"organizations_url": "https://api.github.com/users/zhujun5164/orgs",
"repos_url": "https://api.github.com/users/zhujun5164/repos",
"events_url": "https://api.github.com/users/zhujun5164/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhujun5164/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Masked language modeling is an example of autoencoding language modeling , typically mask one or more of words in a sentence and have the model predict those masked words given the other words in sentence. When you changed the vocab size, the number of word to predic from the model also changed.\r\nThis unofficial methods leaved the output embedding and prediction bias unchanged while changed the vocab size, and caused these errors . \r\nI think the output embeddings should be copied from input embeddings, but the bias needs you to tune-up manually.\r\n\r\n------ \r\nFor now, these two lines may make you avoid the error\r\n```\r\nmodel_dict['cls.predictions.bias'] = model_dict['cls.predictions.bias'][:2]\r\nmodel_dict['cls.predictions.decoder.weight'] = model_dict['cls.predictions.decoder.weight'][:2]\r\n\r\n\r\n```",
"thank!"
] | 1,573 | 1,575 | 1,575 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (BertForMaskedLM):
Language I am using the model on (Chinese):
When I want to fine-tune BertForMaskedLM with a limited vocabulary, I change the config, the vocab, and the word-embedding weights in model.bin accordingly.
For example:
```
config = BertConfig.from_pretrained('bert-base-chinese')
config.vocab_size = 2
model_dict = torch.load(old_model_path)
model_dict['bert.embeddings.word_embeddings.weight'] = model_dict['bert.embeddings.word_embeddings.weight'][:2]
torch.save(model_dict, new_model_path)
model = BertForMaskedLM.from_pretrained(new_model_path, config=config)
```
But when I load the model with from_pretrained, an error appears:
```
Error(s) in loading state_dict for BertForMaskedLM:
	size mismatch for cls.predictions.bias: copying a param with shape torch.Size([21128]) from checkpoint, the shape in current model is torch.Size([2]).
	size mismatch for cls.predictions.decoder.weight: copying a param with shape torch.Size([21128, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
```
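From the parameter names in the error, I assume the MLM head's own tensors need the same truncation as the word embeddings. A sketch of that (not verified to be sufficient):

```
# The output decoder weight and prediction bias also have vocab_size as their first dimension
model_dict['cls.predictions.bias'] = model_dict['cls.predictions.bias'][:2]
model_dict['cls.predictions.decoder.weight'] = model_dict['cls.predictions.decoder.weight'][:2]
torch.save(model_dict, new_model_path)
```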
And i have try to use the BertModel.from_pretrained, the error does not appear, and use the code to check the data shape.
`torch.load(new_model_path)['bert.embeddings.word_embeddings.weight'].shape `
It shows torch.Size([2, 768]). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1786/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1785/comments | https://api.github.com/repos/huggingface/transformers/issues/1785/events | https://github.com/huggingface/transformers/issues/1785 | 520,678,327 | MDU6SXNzdWU1MjA2NzgzMjc= | 1,785 | "Write with Transformer" source code? | {
"login": "AIsysxd",
"id": 54706002,
"node_id": "MDQ6VXNlcjU0NzA2MDAy",
"avatar_url": "https://avatars.githubusercontent.com/u/54706002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AIsysxd",
"html_url": "https://github.com/AIsysxd",
"followers_url": "https://api.github.com/users/AIsysxd/followers",
"following_url": "https://api.github.com/users/AIsysxd/following{/other_user}",
"gists_url": "https://api.github.com/users/AIsysxd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AIsysxd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AIsysxd/subscriptions",
"organizations_url": "https://api.github.com/users/AIsysxd/orgs",
"repos_url": "https://api.github.com/users/AIsysxd/repos",
"events_url": "https://api.github.com/users/AIsysxd/events{/privacy}",
"received_events_url": "https://api.github.com/users/AIsysxd/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1565794707,
"node_id": "MDU6TGFiZWwxNTY1Nzk0NzA3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Write%20With%20Transformer",
"name": "Write With Transformer",
"color": "a84bf4",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hello,\r\n\r\nSorry, the web app is not open source right now.",
"Thanks @julien-c . Can you comment on how elastic inference with pytorch models is done?",
"Is there any chance this could be reconsidered? I'd love to use it for experimenting with custom fine-tuned models.",
"Would also love for this for experimenting with custom models.\r\n",
"It'd be very interesting to open source this.\r\n\r\nIs there any intention of doing so in the future?",
"Is there any possibility of this becoming reconsidered? I've been trying to look for this as well... :/",
"Any chance of this being public? @thomwolf ",
"> Would also love for this for experimenting with custom models.\r\n\r\nSame!",
"the microsoft/fastformers repo says \"Write With Transformer, built by the Hugging Face team at transformer.huggingface.co, is the official demo of this repo’s text generation capabilities\" according to this [readme](https://github.com/microsoft/fastformers/blob/main/README_transformers.md#online-demo). Last commit October 2020\r\n\r\n"
] | 1,573 | 1,630 | 1,573 | NONE | null | ## ❓ Questions & Help
Hello, I can't find its source code. Could you help, please?
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1785/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1785/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1784/comments | https://api.github.com/repos/huggingface/transformers/issues/1784/events | https://github.com/huggingface/transformers/issues/1784 | 520,667,819 | MDU6SXNzdWU1MjA2Njc4MTk= | 1,784 | Unclear documentation for special_tokens_mask | {
"login": "Evpok",
"id": 1656541,
"node_id": "MDQ6VXNlcjE2NTY1NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1656541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Evpok",
"html_url": "https://github.com/Evpok",
"followers_url": "https://api.github.com/users/Evpok/followers",
"following_url": "https://api.github.com/users/Evpok/following{/other_user}",
"gists_url": "https://api.github.com/users/Evpok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Evpok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Evpok/subscriptions",
"organizations_url": "https://api.github.com/users/Evpok/orgs",
"repos_url": "https://api.github.com/users/Evpok/repos",
"events_url": "https://api.github.com/users/Evpok/events{/privacy}",
"received_events_url": "https://api.github.com/users/Evpok/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, this is a documentation error! Thank you for letting us know!",
"@LysandreJik This was reintroduced in https://github.com/huggingface/transformers/pull/2989 apparently",
"Thanks @Evpok for letting me know, this flew under my radar.",
"My pleasure 😄"
] | 1,573 | 1,589 | 1,573 | CONTRIBUTOR | null | According to [the docs](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.get_special_tokens_mask), the `special_tokens_mask` returned by e.g. `encode_plus` should have
> 0 for a special token, 1 for a sequence token
Yet when I try
```python
tokenizer = transformers.BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer.encode_plus("This is a test sentence", add_special_tokens=True)
```
I get
```python
{'special_tokens_mask': [1, 0, 0, 0, 0, 0, 1],
'input_ids': [101, 2023, 2003, 1037, 3231, 6251, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0]}
```
Which — to me — would suggest that it is in fact 1 for special tokens and 0 for sequence tokens.
The implementations for [BERT](https://github.com/huggingface/transformers/blob/700331b5ece63381ad1b775fc8661cf3ae4493fd/transformers/tokenization_bert.py#L210) and [RoBERTa](https://github.com/huggingface/transformers/blob/1c542df7e554a2014051dd09becf60f157fed524/transformers/tokenization_roberta.py#L110) support this, so should the documentation be made clearer or is it just me who understood it backward? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1784/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1783/comments | https://api.github.com/repos/huggingface/transformers/issues/1783/events | https://github.com/huggingface/transformers/issues/1783 | 520,582,301 | MDU6SXNzdWU1MjA1ODIzMDE= | 1,783 | How to measure similarity of words? | {
"login": "RakshaAg",
"id": 43982672,
"node_id": "MDQ6VXNlcjQzOTgyNjcy",
"avatar_url": "https://avatars.githubusercontent.com/u/43982672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RakshaAg",
"html_url": "https://github.com/RakshaAg",
"followers_url": "https://api.github.com/users/RakshaAg/followers",
"following_url": "https://api.github.com/users/RakshaAg/following{/other_user}",
"gists_url": "https://api.github.com/users/RakshaAg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RakshaAg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RakshaAg/subscriptions",
"organizations_url": "https://api.github.com/users/RakshaAg/orgs",
"repos_url": "https://api.github.com/users/RakshaAg/repos",
"events_url": "https://api.github.com/users/RakshaAg/events{/privacy}",
"received_events_url": "https://api.github.com/users/RakshaAg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You should check out BLEU and ROUGE scores. These are often referred to as the precision and recall of NLP.\r\n\r\nYou may be able to craft a pseudo-f1 score out of these as well \r\n`f1 = 2*(bleu*rouge)/(bleu + rouge)`\r\n\r\nI haven't tried this myself yet, but hopefully this helps!\r\n\r\n[https://en.wikipedia.org/wiki/BLEU](https://en.wikipedia.org/wiki/BLEU)\r\n[https://en.wikipedia.org/wiki/ROUGE_(metric)](https://en.wikipedia.org/wiki/ROUGE_(metric))",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,579 | 1,579 | NONE | null | I want to know whether BERT-based contextual embeddings can be used to measure the similarity of identical words in different contexts.
And can I estimate a threshold for that?
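A minimal sketch of one way to probe this, assuming the example sentences, the use of the last hidden layer, and cosine similarity as the metric (any threshold would have to be calibrated empirically):
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

def word_vector(sentence, word):
    # contextual embedding of the first sub-token of `word` in `sentence`
    tokens = tokenizer.tokenize(sentence)
    input_ids = torch.tensor([tokenizer.encode(sentence, add_special_tokens=True)])
    with torch.no_grad():
        last_hidden_state = model(input_ids)[0]   # (1, seq_len, hidden_size)
    position = tokens.index(word) + 1             # +1 for the leading [CLS]
    return last_hidden_state[0, position]

v1 = word_vector("he sat on the river bank", "bank")
v2 = word_vector("she deposited cash at the bank", "bank")
print(torch.nn.functional.cosine_similarity(v1, v2, dim=0).item())
```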
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1783/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1783/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1782/comments | https://api.github.com/repos/huggingface/transformers/issues/1782/events | https://github.com/huggingface/transformers/issues/1782 | 520,569,684 | MDU6SXNzdWU1MjA1Njk2ODQ= | 1,782 | model = GPT2LMHeadModel.from_pretrained(args.model_path) tries to load in json format | {
"login": "lucasjinreal",
"id": 21303438,
"node_id": "MDQ6VXNlcjIxMzAzNDM4",
"avatar_url": "https://avatars.githubusercontent.com/u/21303438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucasjinreal",
"html_url": "https://github.com/lucasjinreal",
"followers_url": "https://api.github.com/users/lucasjinreal/followers",
"following_url": "https://api.github.com/users/lucasjinreal/following{/other_user}",
"gists_url": "https://api.github.com/users/lucasjinreal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucasjinreal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucasjinreal/subscriptions",
"organizations_url": "https://api.github.com/users/lucasjinreal/orgs",
"repos_url": "https://api.github.com/users/lucasjinreal/repos",
"events_url": "https://api.github.com/users/lucasjinreal/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucasjinreal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What string was passed under `args.model_path`? Is it a directory or a file name? (It should be a directory)",
"model_path is a model trained with xxx.pth format.\r\n\r\nIs that a directory? contains what?",
"The `model_path` should link to a directory holding the pytorch model and the configuration file associated with it, as if it was saved with `model.save_pretrained(directory)`.\r\n\r\nWhat is your `xxx.pth` format, how did you obtain such a file?",
"Oh, no, I just got others trained xx.pth model, no config file.\r\n\r\nWhy a pytorch model need additional config.json? And people really don't know what's the format inside that file at all..... no convinient",
"The Pytorch models require configuration files because the architectures in the library can have very different sizes. For example GPT-2 small, medium, large and XL all share the same model but have different sizes. Configuration files are essential for this purpose.\r\n\r\nThe format is defined by the `save_pretrained` method on the models. Models acquired from other sources than our library may not be loadable, or may need to be converted using one of our conversion scripts.\r\n\r\nPlease read the [quickstart portion of the documentation](https://huggingface.co/transformers/quickstart.html) to get a deeper understanding of the way the library works."
] | 1,573 | 1,573 | 1,573 | NONE | null | ## 🐛 Bug
Initializing a model with this code:
```
logging.info('loading model from: {}'.format(args.model_path))
model = GPT2LMHeadModel.from_pretrained(args.model_path)
```
It seems that transformers tries to read our model file as a JSON config, which causes an error:
```
INFO 11-10 16:18:49 generate.py:167 - loading model from: ./weights/ProseInChinese/pytorch_model.bin
I1110 16:18:49.226168 139982371047232 configuration_utils.py:148] loading configuration file ./weights/ProseInChinese/pytorch_model.bin
Traceback (most recent call last):
File "generate.py", line 225, in <module>
main()
File "generate.py", line 168, in main
model = GPT2LMHeadModel.from_pretrained(args.model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py", line 287, in from_pretrained
**kwargs
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 154, in from_pretrained
config = cls.from_json_file(resolved_config_file)
File "/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py", line 186, in from_json_file
text = reader.read()
File "/usr/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1782/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1781/comments | https://api.github.com/repos/huggingface/transformers/issues/1781/events | https://github.com/huggingface/transformers/issues/1781 | 520,566,004 | MDU6SXNzdWU1MjA1NjYwMDQ= | 1,781 | Does the file /examples/run_lm_finetuning.py provide a demo to pre-train a BERT | {
"login": "bytekongfrombupt",
"id": 33115565,
"node_id": "MDQ6VXNlcjMzMTE1NTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/33115565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bytekongfrombupt",
"html_url": "https://github.com/bytekongfrombupt",
"followers_url": "https://api.github.com/users/bytekongfrombupt/followers",
"following_url": "https://api.github.com/users/bytekongfrombupt/following{/other_user}",
"gists_url": "https://api.github.com/users/bytekongfrombupt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bytekongfrombupt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bytekongfrombupt/subscriptions",
"organizations_url": "https://api.github.com/users/bytekongfrombupt/orgs",
"repos_url": "https://api.github.com/users/bytekongfrombupt/repos",
"events_url": "https://api.github.com/users/bytekongfrombupt/events{/privacy}",
"received_events_url": "https://api.github.com/users/bytekongfrombupt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, BERT uses the NSP task as well as the MLM task during its pre-training phase. The `run_lm_finetuning.py` script only does MLM so you would need to modify it to pre-train BERT the same way it was done in the paper."
] | 1,573 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi, I'm curious whether the file run_lm_finetuning.py can pre-train a BERT model. If so, why is the file named finetuning instead of pre-training? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1781/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1780/comments | https://api.github.com/repos/huggingface/transformers/issues/1780/events | https://github.com/huggingface/transformers/issues/1780 | 520,556,363 | MDU6SXNzdWU1MjA1NTYzNjM= | 1,780 | Problems when restoring the pretrain weights for TFbert | {
"login": "BraceLau",
"id": 43031180,
"node_id": "MDQ6VXNlcjQzMDMxMTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/43031180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BraceLau",
"html_url": "https://github.com/BraceLau",
"followers_url": "https://api.github.com/users/BraceLau/followers",
"following_url": "https://api.github.com/users/BraceLau/following{/other_user}",
"gists_url": "https://api.github.com/users/BraceLau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BraceLau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BraceLau/subscriptions",
"organizations_url": "https://api.github.com/users/BraceLau/orgs",
"repos_url": "https://api.github.com/users/BraceLau/repos",
"events_url": "https://api.github.com/users/BraceLau/events{/privacy}",
"received_events_url": "https://api.github.com/users/BraceLau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you provide information about your setup? Which TensorFlow version are you using, which python version, which Transformers version? Thank you.",
"> Hi, could you provide information about your setup? Which TensorFlow version are you using, which python version, which Transformers version? Thank you.\r\n\r\nHi, the information about my setup:\r\nPython version: 3.6.9\r\nTensorflow Version: 2.0.0b1\r\nPyTorch Transformers version (or branch):2.1.1\r\n\r\nThank you",
"I can't reproduce this on my side while on the same versions. Would you happen to have a folder called \"bert-base-uncased\" in the same directory from which you're calling the python script?",
"Hi,\r\n\r\nI have exactly the same error while doing the same task...\r\n\r\nPython version: Python 3.5.3\r\nTensorflow Version: 2.0.0-beta1\r\nPyTorch Transformers version : transformers (2.1.1)\r\n",
"In my environment, **it works as expected**!\r\n@YumingLiu1996 @LeonardoGracioS \r\n\r\nN.B: It works without `force_download=True` parameter too.\r\n\r\n```\r\n>>> from transformers import TFBertForSequenceClassification\r\n2019-11-12 14:38:24.462663: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n2019-11-12 14:38:24.466451: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz\r\n2019-11-12 14:38:24.466942: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x56248b3c66f0 executing computations on platform Host. Devices:\r\n2019-11-12 14:38:24.466956: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\r\n>>> model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', force_download=True)\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 313/313 [00:00<00:00, 154303.85B/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 536063208/536063208 [00:48<00:00, 11140340.62B/s]\r\n>>> import platform\r\n>>> platform.platform()\r\n'Linux-4.15.0-66-generic-x86_64-with-debian-buster-sid'\r\n>>> platform.python_version()\r\n'3.6.9'\r\n>>> import torch\r\n>>> torch.__version__\r\n'1.0.1'\r\n>>> import tensorflow as tf\r\n>>> tf.__version__\r\n'2.0.0'\r\n>>> import transformers\r\n>>> transformers.__version__\r\n'2.1.1'\r\n```",
"> I can't reproduce this on my side while on the same versions. Would you happen to have a folder called \"bert-base-uncased\" in the same directory from which you're calling the python script?\r\n\r\n\r\n\r\n> I can't reproduce this on my side while on the same versions. Would you happen to have a folder called \"bert-base-uncased\" in the same directory from which you're calling the python script?\r\n\r\nNo, I don't have such as folder.. ",
"> I can't reproduce this on my side while on the same versions. Would you happen to have a folder called \"bert-base-uncased\" in the same directory from which you're calling the python script?\r\n\r\nI redownload tensorflow 2.0.0 and transformers. And now I encounter another problem:\r\n\r\n```\r\nh5py/_objects.pyx in h5py._objects.with_phil.wrapper()\r\n\r\nh5py/_objects.pyx in h5py._objects.with_phil.wrapper()\r\n\r\nh5py/h5f.pyx in h5py.h5f.open()\r\n\r\nOSError: Unable to open file (truncated file: eof = 485070229, sblock->base_addr = 0, stored_eof = 497933648)\r\n```\r\nDo you know the solution for this problem?\r\n",
"@YumingLiu1996 the error `OSError: Unable to open file` tells you that the file was not downloaded in its entirety. Please add the `force_download=True` parameter when downloading the pre-trained model:\r\n\r\n```py\r\nmodel = TFBertForSequenceClassification.from_pretrained('bert-base-uncased', force_download=True)\r\n```",
"I solved the issue by using a Google Cloud Platform virtual machine with a TF 2.0 environment.",
"> I solved the issue by using a Google Cloud Platform virtual machine with a TF 2.0 environment.\r\n\r\nI am using a Google Cloud Platform with an K80 and I have the TF2 installed but it doesn't works...\r\nthe same problem here guys. \r\n\r\nI don't know why, but it only works on Google Colab until this comment. ",
"> > I solved the issue by using a Google Cloud Platform virtual machine with a TF 2.0 environment.\r\n> \r\n> I am using a Google Cloud Platform with an K80 and I have the TF2 installed but it doesn't works...\r\n> the same problem here guys.\r\n> \r\n> I don't know why, but it only works on Google Colab until this comment.\r\n\r\nDo you have exactly the same error ? Did you start your virtual machine with a TF 2.0 environment and not just a \"pip install tensorflow=2.0.0\" ?",
"I did through the pip install, because I have this machine running some\nother things. So, the problem is the same but I think could be the\nTensorflow version.\n\nI will try to build TF 2 from source.\n\n\n\nEm sex, 15 de nov de 2019 06:07, Samuel Leonardo Gracio <\[email protected]> escreveu:\n\n> I solved the issue by using a Google Cloud Platform virtual machine with a\n> TF 2.0 environment.\n>\n> I am using a Google Cloud Platform with an K80 and I have the TF2\n> installed but it doesn't works...\n> the same problem here guys.\n>\n> I don't know why, but it only works on Google Colab until this comment.\n>\n> Do you have exactly the same error ? Did you start your virtual machine\n> with a TF 2.0 environment and not just a \"pip install tensorflow=2.0.0\" ?\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/1780?email_source=notifications&email_token=AB4RK3ZWJWXUQDPECFSVTPLQTZRGXA5CNFSM4JLK64N2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEEEZC4Q#issuecomment-554275186>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AB4RK32QNPOWOL3OVKAP2FTQTZRGXANCNFSM4JLK64NQ>\n> .\n>\n",
"same problem here:\r\n\r\n OS:\r\n Python version: 3.6.8\r\n PyTorch version: 1.2.0\r\n TF version: 2.0.0b1\r\n PyTorch Transformers version (or branch): master (Nov 25, 2019)\r\n\r\nmodel = TFBertForSequenceClassification.from_pretrained('bert-base-cased')\r\n\r\nValueError: Expected floating point type, got <dtype: 'int32'>.\r\n\r\nlooks like a tensorflow problem\r\n",
"I've found your problem! It's a TensorFlow bug in version 2.0.0beta1. If you uninstall this version with `pip uninstall tensorflow` and after that you install the TensorFlow with version 2.0.0a0 with `pip install tensorflow==2.0.0,` **the code works as expected**!\r\n\r\nI've tried different settings and it works as expected!\r\n- PyTorch 1.2.0, TensorFlow(CPU-version) 2.0.0\r\n- PyTorch 1.2.0, TensorFlow(GPU-version) 2.0.0\r\n- PyTorch 1.3.1, TensorFlow(CPU-version) 2.0.0\r\n- PyTorch 1.3.1, TensorFlow(GPU-version) 2.0.0\r\n\r\n> same problem here:\r\n> \r\n> ```\r\n> OS:\r\n> Python version: 3.6.8\r\n> PyTorch version: 1.2.0\r\n> TF version: 2.0.0b1\r\n> PyTorch Transformers version (or branch): master (Nov 25, 2019)\r\n> ```\r\n> \r\n> model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')\r\n> \r\n> ValueError: Expected floating point type, got <dtype: 'int32'>.\r\n> \r\n> looks like a tensorflow problem",
"thanks. I tryied with a0 but it has even more problem, like \r\n\r\n'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization'\r\n\r\nwhich are said to be bugs in a0 and patched in b1. \r\n\r\nHowever b1 does not work because\r\nValueError: Expected floating point type, got <dtype: 'int32'>.\r\n\r\nLooks like this is a deadlock.\r\n\r\n> I've found your problem! It's a TensorFlow bug in version 2.0.0beta1. If you uninstall this version with `pip uninstall tensorflow` and after that you install the TensorFlow with version 2.0.0a0 with `pip install tensorflow==2.0.0,` **the code works as expected**!\r\n> \r\n> I've tried different settings and it works as expected!\r\n> \r\n> * PyTorch 1.2.0, TensorFlow(CPU-version) 2.0.0\r\n> \r\n> * PyTorch 1.2.0, TensorFlow(GPU-version) 2.0.0\r\n> \r\n> * PyTorch 1.3.1, TensorFlow(CPU-version) 2.0.0\r\n> \r\n> * PyTorch 1.3.1, TensorFlow(GPU-version) 2.0.0\r\n> \r\n> \r\n> > same problem here:\r\n> > ```\r\n> > OS:\r\n> > Python version: 3.6.8\r\n> > PyTorch version: 1.2.0\r\n> > TF version: 2.0.0b1\r\n> > PyTorch Transformers version (or branch): master (Nov 25, 2019)\r\n> > ```\r\n> > \r\n> > \r\n> > model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')\r\n> > ValueError: Expected floating point type, got <dtype: 'int32'>.\r\n> > looks like a tensorflow problem\r\n\r\n",
"I don't know what to say. I've just tested with Transformers v-2.2.0 and it works as expected. Sorry, but in my environment it works as I've said some days ago. How can i help you? How can i replicate your problem? Which OS are you using?\r\n\r\n> thanks. I tryied with a0 but it has even more problem, like\r\n> \r\n> 'tensorflow.python.keras.api._v2.keras.layers' has no attribute 'LayerNormalization'\r\n> \r\n> which are said to be bugs in a0 and patched in b1.\r\n> \r\n> However b1 does not work because\r\n> ValueError: Expected floating point type, got <dtype: 'int32'>.\r\n> \r\n> Looks like this is a deadlock.\r\n> \r\n> > I've found your problem! It's a TensorFlow bug in version 2.0.0beta1. If you uninstall this version with `pip uninstall tensorflow` and after that you install the TensorFlow with version 2.0.0a0 with `pip install tensorflow==2.0.0,` **the code works as expected**!\r\n> > I've tried different settings and it works as expected!\r\n> > ```\r\n> > * PyTorch 1.2.0, TensorFlow(CPU-version) 2.0.0\r\n> > \r\n> > * PyTorch 1.2.0, TensorFlow(GPU-version) 2.0.0\r\n> > \r\n> > * PyTorch 1.3.1, TensorFlow(CPU-version) 2.0.0\r\n> > \r\n> > * PyTorch 1.3.1, TensorFlow(GPU-version) 2.0.0\r\n> > ```\r\n> > \r\n> > \r\n> > > same problem here:\r\n> > > ```\r\n> > > OS:\r\n> > > Python version: 3.6.8\r\n> > > PyTorch version: 1.2.0\r\n> > > TF version: 2.0.0b1\r\n> > > PyTorch Transformers version (or branch): master (Nov 25, 2019)\r\n> > > ```\r\n> > > \r\n> > > \r\n> > > model = TFBertForSequenceClassification.from_pretrained('bert-base-cased')\r\n> > > ValueError: Expected floating point type, got <dtype: 'int32'>.\r\n> > > looks like a tensorflow problem",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,580 | 1,580 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using Bert:
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [ ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. When I try to reload the weights for TFBert using:
model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')
2. I get this error:
ValueError: Expected floating point type, got <dtype: 'int32'>.
3. Does anybody know how to fix this?
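A minimal sketch of the workaround that surfaced in this thread (TensorFlow 2.0.0 stable rather than the 2.0.0 beta, plus re-downloading a possibly truncated checkpoint); whether this applies to every setup is an assumption:
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification

print(tf.__version__)  # should print 2.0.0 (stable), not 2.0.0b1
model = TFBertForSequenceClassification.from_pretrained(
    'bert-base-uncased', force_download=True
)
```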
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:
* Python version:
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed of parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1780/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1779/comments | https://api.github.com/repos/huggingface/transformers/issues/1779/events | https://github.com/huggingface/transformers/issues/1779 | 520,555,242 | MDU6SXNzdWU1MjA1NTUyNDI= | 1,779 | Multi GPU dataparallel crash | {
"login": "devroy73",
"id": 12408145,
"node_id": "MDQ6VXNlcjEyNDA4MTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/12408145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/devroy73",
"html_url": "https://github.com/devroy73",
"followers_url": "https://api.github.com/users/devroy73/followers",
"following_url": "https://api.github.com/users/devroy73/following{/other_user}",
"gists_url": "https://api.github.com/users/devroy73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/devroy73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/devroy73/subscriptions",
"organizations_url": "https://api.github.com/users/devroy73/orgs",
"repos_url": "https://api.github.com/users/devroy73/repos",
"events_url": "https://api.github.com/users/devroy73/events{/privacy}",
"received_events_url": "https://api.github.com/users/devroy73/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Even I'm facing this issue @LysandreJik @thomwolf can you throw some input on this?\r\nall the input is of the same length still this issue occurs.\r\n@devroy73 meanwhile in data loader you can set drop_last = True",
"Hey @anandhperumal thanks for that it solved my crashing issue. ",
"I tried setting drop_last=True, but it did not fix the issue for me.",
"maybe we should add parameter to Trainer. \r\nor it should added to the doc for others "
] | 1,573 | 1,589 | 1,573 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): GPT2
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: run_lm_finetuning
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: I am finetuning the GPT2 model with a dataset that I have used in the past to fine-tune the BERT model among others
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
Traceback (most recent call last):  (tqdm progress at the time of the crash: 7587/7588 [1:42:01<00:00, 1.26it/s])
File "run_lm_finetuning.py", line 551, in <module>
main()
File "run_lm_finetuning.py", line 503, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_lm_finetuning.py", line 228, in train
outputs = model(inputs, masked_lm_labels=labels) if args.mlm else model(inputs, labels=labels)
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
result = self.forward(*input, **kwargs)
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
return self.gather(outputs, self.output_device)
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather
return gather(outputs, output_device, dim=self.dim)
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
res = gather_map(outputs)
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 68, in forward
return comm.gather(inputs, ctx.dim, ctx.target_device)
File "/home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/cuda/comm.py", line 165, in gather
return torch._C._gather(tensors, dim, destination)
**RuntimeError: Gather got an input of invalid size: got [2, 2, 12, 1024, 64], but expected [2, 3, 12, 1024, 64]** (gather at /opt/conda/conda-bld/pytorch_1565272279342/work/torch/csrc/cuda/comm.cpp:226)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7f47a13d5e37 in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: torch::cuda::gather(c10::ArrayRef<at::Tensor>, long, c10::optional<int>) + 0x3c7 (0x7f4720c61327 in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #2: <unknown function> + 0x5fa742 (0x7f47a420f742 in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #3: <unknown function> + 0x1c8316 (0x7f47a3ddd316 in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #14: THPFunction_apply(_object*, _object*) + 0x98f (0x7f47a40024bf in /home/dev/deeplearning/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
## Expected behavior
It crashes at the last batch every time. I expect it to move on to the next epoch.
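A minimal sketch of the `drop_last=True` workaround mentioned in this thread, using the same variable names as run_lm_finetuning.py; dropping the final, smaller batch is what avoids the mismatched shard shapes that DataParallel fails to gather:
```python
from torch.utils.data import DataLoader, RandomSampler

# train_dataset and args come from run_lm_finetuning.py
train_sampler = RandomSampler(train_dataset)
train_dataloader = DataLoader(
    train_dataset,
    sampler=train_sampler,
    batch_size=args.train_batch_size,
    drop_last=True,  # skip the last, smaller batch that DataParallel cannot gather
)
```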
## Environment
* OS:
* Python version: 3.6
* PyTorch version: 1.2
* PyTorch Transformers version (or branch):
* Using GPU ? 4 Quadro 8000
* Distributed of parallel setup ? parallel
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1779/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1778/comments | https://api.github.com/repos/huggingface/transformers/issues/1778/events | https://github.com/huggingface/transformers/pull/1778 | 520,473,461 | MDExOlB1bGxSZXF1ZXN0MzM5MDU4NTkw | 1,778 | from_pretrained: convert DialoGPT format | {
"login": "yet-another-account",
"id": 10374151,
"node_id": "MDQ6VXNlcjEwMzc0MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/10374151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yet-another-account",
"html_url": "https://github.com/yet-another-account",
"followers_url": "https://api.github.com/users/yet-another-account/followers",
"following_url": "https://api.github.com/users/yet-another-account/following{/other_user}",
"gists_url": "https://api.github.com/users/yet-another-account/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yet-another-account/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yet-another-account/subscriptions",
"organizations_url": "https://api.github.com/users/yet-another-account/orgs",
"repos_url": "https://api.github.com/users/yet-another-account/repos",
"events_url": "https://api.github.com/users/yet-another-account/events{/privacy}",
"received_events_url": "https://api.github.com/users/yet-another-account/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=h1) Report\n> Merging [#1778](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7a9aae1044aa4699310a8004f631fc0a4bdf1b65?src=pr&el=desc) will **increase** coverage by `0.06%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1778 +/- ##\n==========================================\n+ Coverage 84.03% 84.09% +0.06% \n==========================================\n Files 94 94 \n Lines 14032 14036 +4 \n==========================================\n+ Hits 11792 11804 +12 \n+ Misses 2240 2232 -8\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1778/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.13% <100%> (+2.18%)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1778/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `91.05% <100%> (+1.41%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=footer). Last update [7a9aae1...90f6e73](https://codecov.io/gh/huggingface/transformers/pull/1778?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good to me, thanks @eukaryote31 !",
"also cc @sshleifer ",
"LGTM!",
"Ok great, merging!",
"This was incorrect: we have multiple models that contain a valid `lm_head.decoder` layer (RoBERTa and its variants, CamemBERT, XLM-R...) and this PR broke `from_pretrained` for those models in the `ForMaskedLM` case.\r\n\r\nThe way to go in the particular `DialoGPT` case would be to either:\r\n- create a conversion script and upload the GPT2 compatible weights somewhere.\r\n- or, add a special test in from_pretrained that the current instance is an instance of `GPT2...Head` (this is not pretty in terms of OOP encapsulation 💩)\r\n- or, expose some kind of overrideable list of key mapping (that can be implemented by the GPT2 subclass)\r\n\r\n(1. seems like the easier solution) \r\n\r\ncc @LysandreJik @sshleifer @thomwolf ",
"Also cc @patrickvonplaten 🤯"
] | 1,573 | 1,583 | 1,574 | CONTRIBUTOR | null | DialoGPT checkpoints have "lm_head.decoder.weight" instead of "lm_head.weight".
(see: https://www.reddit.com/r/MachineLearning/comments/dt5woy/p_dialogpt_state_of_the_art_conversational_model/f6vmwuy?utm_source=share&utm_medium=web2x) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1778/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1778",
"html_url": "https://github.com/huggingface/transformers/pull/1778",
"diff_url": "https://github.com/huggingface/transformers/pull/1778.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1778.patch",
"merged_at": 1574953718000
} |
https://api.github.com/repos/huggingface/transformers/issues/1777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1777/comments | https://api.github.com/repos/huggingface/transformers/issues/1777/events | https://github.com/huggingface/transformers/issues/1777 | 520,444,971 | MDU6SXNzdWU1MjA0NDQ5NzE= | 1,777 | Could you support albert? | {
"login": "zhu1090093659",
"id": 46916148,
"node_id": "MDQ6VXNlcjQ2OTE2MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/46916148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhu1090093659",
"html_url": "https://github.com/zhu1090093659",
"followers_url": "https://api.github.com/users/zhu1090093659/followers",
"following_url": "https://api.github.com/users/zhu1090093659/following{/other_user}",
"gists_url": "https://api.github.com/users/zhu1090093659/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhu1090093659/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhu1090093659/subscriptions",
"organizations_url": "https://api.github.com/users/zhu1090093659/orgs",
"repos_url": "https://api.github.com/users/zhu1090093659/repos",
"events_url": "https://api.github.com/users/zhu1090093659/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhu1090093659/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #1649 #1420 #1522 #1564 🤣🤣"
] | 1,573 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
Many students don't have large GPUs, so they will use a small model (e.g. ALBERT); hence I hope you will support loading ALBERT. Thanks so much!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1777/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1776/comments | https://api.github.com/repos/huggingface/transformers/issues/1776/events | https://github.com/huggingface/transformers/issues/1776 | 520,378,697 | MDU6SXNzdWU1MjAzNzg2OTc= | 1,776 | Extracting the output layer of HuggingFace GPT2DoubleHeadsModel | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please do not open duplicate issues (#1774)"
] | 1,573 | 1,573 | 1,573 | NONE | null | Hello,
Suppose that I have two GPT2DoubleHeadsModels (let's call them models A and B).
Is there any way that I can:
1. take the hidden state of a given input at the n-th layer of the model A and feed it directly into the output layer of model B to compute output
AND
2. Take the output obtained from 1. and calculate the loss based on an appropriate label
Some coding example would be a great help.
Thank you,
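A minimal sketch of one possible approach, assuming the LM head (rather than the multiple-choice head) is the output layer of interest, a layer index of 6, `gpt2` as a stand-in for the two custom-trained models, and plain next-token labels:
```python
import torch
from transformers import GPT2Config, GPT2Tokenizer, GPT2DoubleHeadsModel

config = GPT2Config.from_pretrained('gpt2', output_hidden_states=True)
model_a = GPT2DoubleHeadsModel.from_pretrained('gpt2', config=config)  # stand-in for A
model_b = GPT2DoubleHeadsModel.from_pretrained('gpt2')                 # stand-in for B
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
n = 6  # assumed layer index

all_hidden_states = model_a(input_ids)[-1]  # tuple with one entry per layer
hidden_n = all_hidden_states[n]             # hidden state around model A's n-th block

# apply model B's final layer norm and LM head, then a next-token loss
logits = model_b.lm_head(model_b.transformer.ln_f(hidden_n))
loss = torch.nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, logits.size(-1)),
    input_ids[:, 1:].reshape(-1),
)
```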
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1776/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1775/comments | https://api.github.com/repos/huggingface/transformers/issues/1775/events | https://github.com/huggingface/transformers/issues/1775 | 520,169,743 | MDU6SXNzdWU1MjAxNjk3NDM= | 1,775 | pip install transformers not downloading gpt2-xl | {
"login": "samer-noureddine",
"id": 32775563,
"node_id": "MDQ6VXNlcjMyNzc1NTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/32775563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samer-noureddine",
"html_url": "https://github.com/samer-noureddine",
"followers_url": "https://api.github.com/users/samer-noureddine/followers",
"following_url": "https://api.github.com/users/samer-noureddine/following{/other_user}",
"gists_url": "https://api.github.com/users/samer-noureddine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samer-noureddine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samer-noureddine/subscriptions",
"organizations_url": "https://api.github.com/users/samer-noureddine/orgs",
"repos_url": "https://api.github.com/users/samer-noureddine/repos",
"events_url": "https://api.github.com/users/samer-noureddine/events{/privacy}",
"received_events_url": "https://api.github.com/users/samer-noureddine/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We haven't updated the pip version yet, we'll do so in the following weeks. Please install it from source in the meantime:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"Traceback (most recent call last):\r\n File \"run_pplm.py\", line 936, in <module>\r\n run_pplm_example(**vars(args))\r\n File \"run_pplm.py\", line 738, in run_pplm_example\r\n tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model)\r\n File \"/home/xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 911, in from_pretrained\r\n return cls._from_pr\r\netrained(*inputs, **kwargs)\r\n File \"/home/xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 1014, in _from_pretraine\r\n list(cls.vocab_files_names.values()),\r\nOSError: Model name 'gpt2-medium' was not found in tokenizers model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We asswas a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find sucat this path or url.\r\n\r\n\r\n\r\n\r\nI also reported the same error....@LysandreJik\r\n",
"> 回溯(最近一次调用):\r\n> 文件“run_pplm.py”,第 936 行,在\r\n> run_pplm_example(**vars(args))\r\n> 文件“run_pplm.py”,第 738 行,在 run_pplm_example\r\n> tokenizer = GPT2Tokenizer.from_pretrained(pretrained_model)\r\n> 文件“ /home/xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py\", line 911, in from_pretrained\r\n> return\r\n> cls._from_pretrained(*inputs, **kwargs)\r\n> File \"/home /xps/anaconda3/envs/nlg_entity/lib/python3.7/site-packages/transformers/tokenization_utils.py”,第 1014 行,在 _from_pretraine\r\n> 列表(cls.vocab_files_names.values())中,\r\n> OSError:在标记器模型名称列表(gpt2、gpt2-medium、gpt2-large、gpt2-xl、distlgpt2)中找不到模型名称“gpt2-medium”。我们为包含名为 ['vocab.json', 'merges.txt'] 的词汇文件的目录添加了路径、模型标识符或 url,但找不到此路径或 url。\r\n> \r\n> 我也报了同样的错误.... @LysandreJik\r\n\r\nhi,you can run this model?"
] | 1,573 | 1,630 | 1,573 | NONE | null | Following the release of 1.5 B parameter model, I attempted to upgrade my version using the following command:
pip install transformers --upgrade
The installed library does not have gpt2-xl, and throws this error when I try calling it:
OSError: Model name 'gpt2-xl' was not found in model name list (gpt2, gpt2-medium, gpt2-large, distilgpt2). We assumed 'gpt2-xl' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.
Please update the pip installer to allow the easy installation of the latest model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1775/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1774/comments | https://api.github.com/repos/huggingface/transformers/issues/1774/events | https://github.com/huggingface/transformers/issues/1774 | 520,106,246 | MDU6SXNzdWU1MjAxMDYyNDY= | 1,774 | For HuggingFace GPT2DoubleHeadsModel, is there a way to directly provide a hidden state for an input? | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! You have access to the models' internals so you could definitely rewrite the main loop function to handle such a use-case!\r\n\r\nYou can access the hidden layers as follows:\r\n\r\n```py\r\nblocks = model.transformer.h # this is a list of \"block\"s\r\nblock[0] # contains the MLP/LayerNorm/attention layers\r\n```\r\n\r\nI'd recommend taking a look at the `forward` method of the [`GPT2DoubleLMHeadModel`](https://github.com/huggingface/transformers/blob/master/transformers/modeling_gpt2.py#L642) to \r\nunderstand how the `transformer` is managed, and to take a look at the [loop inside the `GPT2Model`](https://github.com/huggingface/transformers/blob/master/transformers/modeling_gpt2.py#L451) which you will probably have to replicate a little to achieve what you want to do.",
"Hello, \r\nThank you very much for your reply.\r\nSo if I am understanding this correctly, the part of the loop inside GPT2 model below is what I am supposed to use to generate output based on a given hidden state, am I correct?\r\n```\r\noutputs = block(hidden_states,\r\n layer_past=layer_past,\r\n attention_mask=attention_mask,\r\n head_mask=head_mask[i])\r\n```\r\nAlso, I would like to clarify what ‘block’ actually refers to in this code. Is ‘block’ same as GPT-2 ‘layer’? It’s not very clear to me what `block[1]`, `block[2]`, etc. represents.\r\n\r\nThank you again,",
"Yes.",
"Hello,\r\n\r\nWhat exactly is the `block`? Does `block[0]` refer to the first gtp-2 layer, and `block[1]` refers to the second gtp-2 layer and so on? How can I utilize `block` to have access to the uppermost output layer of a gtp-2?\r\n\r\nThank you,",
"Yes, `block` refers to the GPT-2 layers. `block[0]` is the first layer, ... `block[-1]` is the last layer.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,579 | 1,579 | NONE | null | Hello,
Say I have two custom-trained HuggingFace GPT2DoubleHeadsModels (models 1 and 2).
I want to take the hidden state at the m-th layer of model 1 and use that hidden state as the input for model 2.
Is this possible with HuggingFace GPT2DoubleHeadsModels?
Some coding example would be a great help!
Thank you,
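A minimal sketch of one possible way, following the pointer to `model.transformer.h` given in this thread: run model 1's embedding layer and first m blocks, then continue through model 2's remaining blocks and LM head. The layer index, the example input, and the use of `gpt2` as a stand-in for the two custom-trained models are assumptions.
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model_1 = GPT2DoubleHeadsModel.from_pretrained('gpt2')  # stand-in for model 1
model_2 = GPT2DoubleHeadsModel.from_pretrained('gpt2')  # stand-in for model 2
m = 6  # assumed layer index

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute")])
position_ids = torch.arange(input_ids.size(-1)).unsqueeze(0)

# embedding layer and first m blocks of model 1
hidden = model_1.transformer.wte(input_ids) + model_1.transformer.wpe(position_ids)
for block in model_1.transformer.h[:m]:
    hidden = block(hidden)[0]

# remaining blocks, final layer norm and LM head of model 2
for block in model_2.transformer.h[m:]:
    hidden = block(hidden)[0]
hidden = model_2.transformer.ln_f(hidden)
lm_logits = model_2.lm_head(hidden)
```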
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1774/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1773/comments | https://api.github.com/repos/huggingface/transformers/issues/1773/events | https://github.com/huggingface/transformers/pull/1773 | 520,059,736 | MDExOlB1bGxSZXF1ZXN0MzM4NzEzOTg1 | 1,773 | [WIP] BertAbs summarization | {
"login": "rlouf",
"id": 3885044,
"node_id": "MDQ6VXNlcjM4ODUwNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3885044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlouf",
"html_url": "https://github.com/rlouf",
"followers_url": "https://api.github.com/users/rlouf/followers",
"following_url": "https://api.github.com/users/rlouf/following{/other_user}",
"gists_url": "https://api.github.com/users/rlouf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlouf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlouf/subscriptions",
"organizations_url": "https://api.github.com/users/rlouf/orgs",
"repos_url": "https://api.github.com/users/rlouf/repos",
"events_url": "https://api.github.com/users/rlouf/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlouf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=h1) Report\n> Merging [#1773](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0cb163865a4c761c226b151283309eedb2b1ca4d?src=pr&el=desc) will **increase** coverage by `1.06%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1773 +/- ##\n==========================================\n+ Coverage 82.67% 83.74% +1.06% \n==========================================\n Files 111 108 -3 \n Lines 16162 15749 -413 \n==========================================\n- Hits 13362 13189 -173 \n+ Misses 2800 2560 -240\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.7% <100%> (+0.02%)` | :arrow_up: |\n| [transformers/hf\\_api.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2hmX2FwaS5weQ==) | `96.87% <0%> (-0.63%)` | :arrow_down: |\n| [transformers/tests/modeling\\_bert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2JlcnRfdGVzdC5weQ==) | `96.5% <0%> (-0.58%)` | :arrow_down: |\n| [transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `74.23% <0%> (-0.31%)` | :arrow_down: |\n| [transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9vcGVuYWkucHk=) | `81.81% <0%> (-0.3%)` | :arrow_down: |\n| [transformers/tests/modeling\\_openai\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX29wZW5haV90ZXN0LnB5) | `93.2% <0%> (-0.2%)` | :arrow_down: |\n| [transformers/tests/modeling\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3hsbmV0X3Rlc3QucHk=) | `94.68% <0%> (-0.2%)` | :arrow_down: |\n| [transformers/tests/modeling\\_albert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2FsYmVydF90ZXN0LnB5) | `95.08% <0%> (-0.16%)` | :arrow_down: |\n| [transformers/tests/modeling\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2dwdDJfdGVzdC5weQ==) | `94.01% <0%> (-0.15%)` | :arrow_down: |\n| [transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG0ucHk=) | `83.2% <0%> (-0.14%)` | :arrow_down: |\n| ... and [40 more](https://codecov.io/gh/huggingface/transformers/pull/1773/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=footer). Last update [0cb1638...6fb9900](https://codecov.io/gh/huggingface/transformers/pull/1773?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@rlouf Thanks for the the PR!\r\nI'm trying to run run_summarization.py with the following command, but got an error. Any idea why?\r\nI have unzipped the cnn and dailymail folders in my data directory. \r\n\r\n```\r\npython examples/run_summarization.py \\\r\n --data_dir /data/home/hlu/notebooks/summarization/data \\\r\n --output_dir ./summarization_output \\\r\n --do_train True \\\r\n --model_name_or_path bert-base-cased \\\r\n --num_train_epochs 1 \\\r\n```\r\n\r\nW1108 21:29:27.643532 139837444753152 run_summarization.py:488] Process rank: 0, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False\r\nI1108 21:29:27.643716 139837444753152 run_summarization.py:491] Training/evaluation parameters Namespace(data_dir='/data/home/hlu/notebooks/summarization/data', device=device(type='cuda'), do_evaluate=False, do_overwrite_output_dir=False, do_train=True, gradient_accumulation_steps=1, max_grad_norm=1.0, max_steps=-1, model_name_or_path='bert-base-cased', model_type='bert', n_gpu=1, num_train_epochs=1, output_dir='./summarization_output', per_gpu_eval_batch_size=4, per_gpu_train_batch_size=4, seed=42, to_cpu=False)\r\nI1108 21:30:40.805522 139837444753152 run_summarization.py:198] ***** Running training *****\r\nI1108 21:30:40.805693 139837444753152 run_summarization.py:199] Num examples = 312085\r\nI1108 21:30:40.806245 139837444753152 run_summarization.py:200] Num Epochs = 1\r\nI1108 21:30:40.806311 139837444753152 run_summarization.py:201] Instantaneous batch size per GPU = 4\r\nI1108 21:30:40.806385 139837444753152 run_summarization.py:204] Total train batch size (w. parallel, distributed & accumulation) = 4\r\nI1108 21:30:40.806446 139837444753152 run_summarization.py:207] Gradient Accumulation steps = 1\r\nI1108 21:30:40.806501 139837444753152 run_summarization.py:208] Total optimization steps = 78022\r\nTraceback (most recent call last):\r\n File \"examples/run_summarization.py\", line 545, in <module>\r\n main()\r\n File \"examples/run_summarization.py\", line 497, in main\r\n global_step, tr_loss = train(args, model, tokenizer)\r\n File \"examples/run_summarization.py\", line 236, in train\r\n decoder_lm_labels=lm_labels,\r\n File \"/data/anaconda/envs/nlp_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/hlu/notebooks/huggingface/transformers/transformers/modeling_encoder_decoder.py\", line 240, in forward\r\n decoder_outputs = self.decoder(decoder_input_ids, **kwargs_decoder)\r\n File \"/data/anaconda/envs/nlp_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/hlu/notebooks/huggingface/transformers/transformers/modeling_bert.py\", line 865, in forward\r\n encoder_attention_mask=encoder_attention_mask)\r\n File \"/data/anaconda/envs/nlp_gpu/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 547, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/hlu/notebooks/huggingface/transformers/transformers/modeling_bert.py\", line 677, in forward\r\n extended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]\r\nRuntimeError: expected device cuda:0 and dtype Long but got device cuda:0 and dtype Bool\r\n",
"Hi! Thank you for raising the issue. I think it comes from the following lines in `modeling_bert.py`:\r\n\r\n```python\r\ncausal_mask = seq_ids[None, None, :].repeat(batch_size, seq_length, 1) <= seq_ids[None, :, None]\r\nextended_attention_mask = causal_mask[:, None, :, :] * attention_mask[:, None, None, :]\r\n```\r\n\r\n`causal_mask` as defined is a boolean tensor, while the `attention_mask` being passed is of type `long`. I am very confused because I never got the error when I tried locally. I will fix the issue asap. In the meantime, would you mind executing the following code in your environment and pasting the result here?\r\n\r\n```python\r\nimport platform; print(\"Platform\", platform.platform())\r\nimport sys; print(\"Python\", sys.version)\r\nimport torch; print(\"PyTorch\", torch.__version__)\r\n```",
"@rlouf Thanks! Please see the environment info below.\r\n>>> import platform; print(\"Platform\", platform.platform())\r\nPlatform Linux-4.15.0-1061-azure-x86_64-with-debian-stretch-sid\r\n>>> import sys; print(\"Python\", sys.version)\r\nPython 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34) \r\n[GCC 7.3.0]\r\n>>> import torch; print(\"Pytorch\", torch.__version__)\r\nPytorch 1.2.0",
"Thank you! I’m working with PyTorch 1.3.0, and they introduced type promotion in this release (https://github.com/pytorch/pytorch/releases) which surely explains why it worked for me but not for you. I will work on a fix this week, in the meantime you can simply convert causal mask manually to long, or upgrade to pytorch 1.3 if that is possible.\r\n\r\nEdit: I pushed the changes. Let me know if there still is an issue.",
"Here is the latest on summarization:\r\n\r\n- I added utilities to load the CNN/DailyMail dataset, to separate stories from summaries. The utilities are tested;\r\n- I wrapped the BertAbs model along with its configuration to be able to use it with the library's usual API `BertAbs.from_pretrained`. I uploaded the finetuned weights and configuration on S3 and tested that model loads properly;\r\n- I also wrapped the sequence generation so we can generate summaries from any text using Beam Search;\r\n- I removed my previous attempt at Beam Search and saved it for later;\r\n- I added a small fix in `modeling_bert.py` for users that use PyTorch v < 1.30. We convert the causal mask explicitly from `bool` to `long` since type promotion was only introduce in PyTorch 1.3\r\n- I added ROUGE evaluation and am able to reproduce the authors' results on CNN/DailyMail\r\n\r\n- [ ] revert changes in `modeling_encoderdecoder.py`\r\n\r\n\r\n\r\nI squashed my commits and rebased on the repo's master branch. The docs are updated. Good to go on my side."
] | 1,573 | 1,575 | 1,575 | CONTRIBUTOR | null | This PR builds on the encoder-decoder mechanism to do abstractive summarization. Contributions:
- A BeamSearch class that takes any `PreTrainedEncoderDecoder` as an input;
- A script `run_summarization.py` that makes it possible to pre-train the model and generate summaries.
Note that to save the checkpoints I had to add a parameter to the `save_pretrained_model` method, but I am not sure this is the best solution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1773/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1773/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1773",
"html_url": "https://github.com/huggingface/transformers/pull/1773",
"diff_url": "https://github.com/huggingface/transformers/pull/1773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1773.patch",
"merged_at": 1575941876000
} |
https://api.github.com/repos/huggingface/transformers/issues/1772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1772/comments | https://api.github.com/repos/huggingface/transformers/issues/1772/events | https://github.com/huggingface/transformers/pull/1772 | 520,047,867 | MDExOlB1bGxSZXF1ZXN0MzM4NzA0MjA0 | 1,772 | Fix run_squad.py | {
"login": "tailorck",
"id": 7613002,
"node_id": "MDQ6VXNlcjc2MTMwMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tailorck",
"html_url": "https://github.com/tailorck",
"followers_url": "https://api.github.com/users/tailorck/followers",
"following_url": "https://api.github.com/users/tailorck/following{/other_user}",
"gists_url": "https://api.github.com/users/tailorck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tailorck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tailorck/subscriptions",
"organizations_url": "https://api.github.com/users/tailorck/orgs",
"repos_url": "https://api.github.com/users/tailorck/repos",
"events_url": "https://api.github.com/users/tailorck/events{/privacy}",
"received_events_url": "https://api.github.com/users/tailorck/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,583 | 1,583 | NONE | null | The `run_squad.py` script was looking for `config.json` in the output directory, when it actually sits one level lower, inside the checkpoint directories. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1772/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1772",
"html_url": "https://github.com/huggingface/transformers/pull/1772",
"diff_url": "https://github.com/huggingface/transformers/pull/1772.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1772.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1771/comments | https://api.github.com/repos/huggingface/transformers/issues/1771/events | https://github.com/huggingface/transformers/issues/1771 | 519,934,080 | MDU6SXNzdWU1MTk5MzQwODA= | 1,771 | Error when Fine-tuning XLM on SQuA | {
"login": "ZhengWeiH",
"id": 43492059,
"node_id": "MDQ6VXNlcjQzNDkyMDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/43492059?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhengWeiH",
"html_url": "https://github.com/ZhengWeiH",
"followers_url": "https://api.github.com/users/ZhengWeiH/followers",
"following_url": "https://api.github.com/users/ZhengWeiH/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhengWeiH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhengWeiH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhengWeiH/subscriptions",
"organizations_url": "https://api.github.com/users/ZhengWeiH/orgs",
"repos_url": "https://api.github.com/users/ZhengWeiH/repos",
"events_url": "https://api.github.com/users/ZhengWeiH/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhengWeiH/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> ## Bug\r\n> Model I am using (XLM):\r\n> \r\n> Language I am using the model on (English and Chinese):\r\n> \r\n> The problem arise when using:\r\n> \r\n> [CUDA_VISIBLE_DEVICES=2 python run_squad.py --model_type xlm --model_name_or_path xlm-mlm-tlm-xnli15-1024 --do_train --do_eval --train_file $SQUAD_DIR/Crosslingual_QZH_train.json --predict_file $SQUAD_DIR/Crosslingual_QZH_valid.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 100 --max_seq_length 384 --doc_stride 128 --eval_all_checkpoints --save_steps 50 --evaluate_during_training --output_dir /home/weihua/Sqad/transformers/xlm_out/ ]\r\n> \r\n> error:\r\n> Traceback (most recent call last):\r\n> File \"run_squad.py\", line 569, in \r\n> main()\r\n> File \"run_squad.py\", line 515, in main\r\n> global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n> File \"run_squad.py\", line 179, in train\r\n> results = evaluate(args, model, tokenizer)\r\n> File \"run_squad.py\", line 275, in evaluate\r\n> args.version_2_with_negative, tokenizer, args.verbose_logging)\r\n> File \"/home/weihua/Sqad/transformers/examples/utils_squad.py\", line 814, in write_predictions_extended\r\n> final_text = get_final_text(tok_text, orig_text, tokenizer.do_lower_case,\r\n> AttributeError: 'XLMTokenizer' object has no attribute 'do_lower_case'\r\n> \r\n> \r\n> \r\n> \r\n> How can I deal with this problem?\r\n\r\nLook [here](https://github.com/huggingface/transformers/blob/master/transformers/tokenization_xlm.py). This is the source code of the XLM tokenizer. In the ___init__()_ method, you can see that the _do_lower_case_ parameter doesn't exist, but it exists the **do_lowercase_and_remove_accent** parameter.\r\n\r\nSaid this, I tell you a solution to your problem (a wordaround).\r\nIf you see the source code of [run_squad.py](https://github.com/huggingface/transformers/blob/master/examples/run_squad.py) at rows 487-489, you see that here the tokenizer (different according to the model chosen by you - in your case XLM) is loaded, passing the _do_lower_case_ parameter. But as I said before, the tokenizer for XLM model accepts a different parameter. So, you can insert an if/else statement (in pseudo-code):\r\n```\r\nif args.model_type == 'XLM':\r\n tokenizer = tokenizer_class.from_pretrained(..., do_lower_case_and_remove_accent=args.do_lower_case, ...)\r\nelse:\r\n tokenizer = tokenizer_class.from_pretrained(..., do_lower_case=args.do_lower_case, ...)\r\n```\r\nN.B. if you want to be more comfortable of my solution to your problem, please see the source code of the `tokenization_xlm.py` in your HuggingFace version whether it contains the _do_lowercase_and_remove_accent_ parameter and not the _do_lowercase_ parameter. \r\n\r\nKeep us updated on your problem! Good luck",
"Actually, I just replace the line 814 in utils_squad.py with 'final_text = get_final_text(tok_text, orig_text,verbose_logging)'. It means I no longer pass ‘tokenizer.do_lower_case’ to this function. Because I find that in the tokenization_xlm.py, the parameter 'do_lower_case_and_remove_accent' is already set. Although this allows my program to run, it does not give good results. So I suspect that my change is wrong.\r\nI also tried to follow your suggestion above. But the results did not improve. \r\nThis is the result of the 700th step. \r\n\r\nThe exact and f1 is very low. I used bert-base-multilingual-cased to do SQuAD with the same data set before. The exact and f1 increased very fast. At around 200th step, the f1 can reach 40+, and exact is about 30.\r\nIs it wrong with my parameter settings that caused these two scores to rise slowly when I use XLM?\r\nOr is it necessary to take longer to finetune the SQuAD task in XLM than in BERT?\r\nThis is my first time to do the SQuAD. I sincerely hope that you can give me some advice.\r\nThank you.",
"I also tried to train and test on the official dataset,\r\nthat is \r\n\r\nin the examples document.\r\n\r\nbut the situation was the same as what I got on my own dataset.\r\nAt the same time, I find that when I use the XLM model with learning-rate= 3e-5, the loss decreases very slowly. \r\nIs this due to the inappropriateness of my parameter settings? Or is it because the loss function of the model needs to be changed?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,579 | 1,579 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (XLM):
Language I am using the model on (English and Chinese):
The problem arises when using:
[CUDA_VISIBLE_DEVICES=2 python run_squad.py --model_type xlm --model_name_or_path xlm-mlm-tlm-xnli15-1024 --do_train --do_eval --train_file $SQUAD_DIR/Crosslingual_QZH_train.json --predict_file $SQUAD_DIR/Crosslingual_QZH_valid.json --per_gpu_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 100 --max_seq_length 384 --doc_stride 128 --eval_all_checkpoints --save_steps 50 --evaluate_during_training --output_dir /home/weihua/Sqad/transformers/xlm_out/ ]
error:
Traceback (most recent call last):
File "run_squad.py", line 569, in <module>
main()
File "run_squad.py", line 515, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File "run_squad.py", line 179, in train
results = evaluate(args, model, tokenizer)
File "run_squad.py", line 275, in evaluate
args.version_2_with_negative, tokenizer, args.verbose_logging)
File "/home/weihua/Sqad/transformers/examples/utils_squad.py", line 814, in write_predictions_extended
final_text = get_final_text(tok_text, orig_text, tokenizer.do_lower_case,
AttributeError: 'XLMTokenizer' object has no attribute 'do_lower_case'


How can I deal with this problem?
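For reference, a minimal sketch of the workaround discussed in the comments above: fall back to the XLM tokenizer's own lower-casing attribute instead of assuming `do_lower_case` exists. This assumes `get_final_text` keeps the signature used in `utils_squad.py`:

```python
# Inside write_predictions_extended (utils_squad.py, around line 814):
do_lower_case = getattr(
    tokenizer,
    "do_lower_case",
    getattr(tokenizer, "do_lowercase_and_remove_accent", False),
)
final_text = get_final_text(tok_text, orig_text, do_lower_case, verbose_logging)
```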
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1771/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1770/comments | https://api.github.com/repos/huggingface/transformers/issues/1770/events | https://github.com/huggingface/transformers/pull/1770 | 519,932,514 | MDExOlB1bGxSZXF1ZXN0MzM4NjA4NzEx | 1,770 | Only init encoder_attention_mask if stack is decoder | {
"login": "rlouf",
"id": 3885044,
"node_id": "MDQ6VXNlcjM4ODUwNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3885044?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rlouf",
"html_url": "https://github.com/rlouf",
"followers_url": "https://api.github.com/users/rlouf/followers",
"following_url": "https://api.github.com/users/rlouf/following{/other_user}",
"gists_url": "https://api.github.com/users/rlouf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rlouf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rlouf/subscriptions",
"organizations_url": "https://api.github.com/users/rlouf/orgs",
"repos_url": "https://api.github.com/users/rlouf/repos",
"events_url": "https://api.github.com/users/rlouf/events{/privacy}",
"received_events_url": "https://api.github.com/users/rlouf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=h1) Report\n> Merging [#1770](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c542df7e554a2014051dd09becf60f157fed524?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `90%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1770 +/- ##\n==========================================\n+ Coverage 84.03% 84.03% +<.01% \n==========================================\n Files 94 94 \n Lines 14032 14034 +2 \n==========================================\n+ Hits 11792 11794 +2 \n Misses 2240 2240\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1770/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.82% <90%> (+0.04%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=footer). Last update [1c542df...cd286c2](https://codecov.io/gh/huggingface/transformers/pull/1770?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok great @rlouf!",
"I wonder why encoder_attention_mask use decoder inputs shape when it's not specified. \r\n`encoder_attention_mask = torch.ones(input_shape, device=device)`",
"Indeed, wanna have a look @rlouf?",
"That's a very good point, thanks @efeiefei ! See #2107"
] | 1,573 | 1,575 | 1,574 | CONTRIBUTOR | null | We currently initialize `encoder_attention_mask` when it is `None`,
whether the stack is that of an encoder or a decoder. Since this
may lead to bugs that are difficult to track down later, I added a condition
that assesses whether the current stack is a decoder. | {
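A standalone illustration of the guard (names mirror those used inside `BertModel`; this is a simplified sketch, not the literal diff):

```python
import torch

is_decoder = True
encoder_hidden_states = torch.zeros(2, 7, 768)  # (batch, source_len, hidden)
encoder_attention_mask = None

# Only decoder stacks get a default mask over the encoder states;
# plain encoders never needed one in the first place.
if is_decoder and encoder_attention_mask is None:
    encoder_attention_mask = torch.ones(encoder_hidden_states.shape[:2])

print(encoder_attention_mask.shape)  # torch.Size([2, 7])
```

As the comments note, which shape the default mask should take was revisited later in #2107.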
"url": "https://api.github.com/repos/huggingface/transformers/issues/1770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1770/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1770",
"html_url": "https://github.com/huggingface/transformers/pull/1770",
"diff_url": "https://github.com/huggingface/transformers/pull/1770.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1770.patch",
"merged_at": 1574953616000
} |
https://api.github.com/repos/huggingface/transformers/issues/1769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1769/comments | https://api.github.com/repos/huggingface/transformers/issues/1769/events | https://github.com/huggingface/transformers/issues/1769 | 519,870,890 | MDU6SXNzdWU1MTk4NzA4OTA= | 1,769 | [XLM-R] by Facebook AI Research | {
"login": "TheEdoardo93",
"id": 19664571,
"node_id": "MDQ6VXNlcjE5NjY0NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19664571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEdoardo93",
"html_url": "https://github.com/TheEdoardo93",
"followers_url": "https://api.github.com/users/TheEdoardo93/followers",
"following_url": "https://api.github.com/users/TheEdoardo93/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEdoardo93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEdoardo93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEdoardo93/subscriptions",
"organizations_url": "https://api.github.com/users/TheEdoardo93/orgs",
"repos_url": "https://api.github.com/users/TheEdoardo93/repos",
"events_url": "https://api.github.com/users/TheEdoardo93/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEdoardo93/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"cc @aconneau 😬",
"is there any update in the XLM-R model? ",
"Let me know if you need some-help in porting the xlm-r models to HF.",
"I think that's maybe not the correct way, but I adjusted the `convert_roberta_original_pytorch_checkpoint_to_pytorch.py` script to convert the `fairseq` model into a `transformers` compatible model file. I used the `sentencepiece` BPE loader and adjusted the vocab size.\r\n\r\nThen I used the `CamemBERT` model class to perform some evaluations on NER. But the result is not really good (I tried to replicate the CoNLL-2003 for English).\r\n\r\nSo I guess it is not as simple as this first attempt 😅\r\n\r\n---\r\n\r\nGist for the conversion script is [here](https://gist.github.com/stefan-it/1eaf537a853aee6ab8fe5a425bf09ed7).\r\n\r\nThe `CamemBERT` model configuration looks pretty much the same as XLM-R large?!",
"> I think that's maybe not the correct way, but I adjusted the `convert_roberta_original_pytorch_checkpoint_to_pytorch.py` script to convert the `fairseq` model into a `transformers` compatible model file. I used the `sentencepiece` BPE loader and adjusted the vocab size.\r\n> \r\n> Then I used the `CamemBERT` model class to perform some evaluations on NER. But the result is not really good (I tried to replicate the CoNLL-2003 for English).\r\n> \r\n> So I guess it is not as simple as this first attempt 😅\r\n> \r\n> Gist for the conversion script is [here](https://gist.github.com/stefan-it/1eaf537a853aee6ab8fe5a425bf09ed7).\r\n> \r\n> The `CamemBERT` model configuration looks pretty much the same as XLM-R large?!\r\n\r\nHi @stefan-it, do you have any update for your attempt?",
"The final models have been released today 😍\r\n\r\nhttps://github.com/pytorch/fairseq/tree/master/examples/xlmr\r\n\r\nSo I'm going to try the conversion with these models tomorrow/in the next days :)",
"I think the model conversion is done correctly. But: the `CamembertTokenizer` implementation can't be used, because it adds some special tokens. I had to modify the tokenizer to match the output of the `fairseq` tokenization/`.encode()` method :) I'll report back some results on NER later.\r\n\r\n**update**: I could achieve 90.41% on CoNLL-2003 (English), paper reports 92.74 (using Flair).\r\n**update 2**: Using the `run_ner.py` example (incl. some hours of tokenization debugging...): 96.22 (dev) and 91.91 (test).",
"Btw I was using the XLM-R v0 checkpoints in a project I'm working on and the v0 checkpoints worked slightly better than the checkpoints added today. Is it possible to also add the older checkpoints?",
"I think it's the best solution to offer both checkpoint versions! In my opinion, the _ideal case_ is that, as like to other models in Transformers, you can select which version of XLM-R checkpoints to use, e.g.\r\n```\r\n> from transformers import XLMRModel\r\n> base_model = XLMRModel.from_pretrained('xlmr-base') # 250M parameters\r\n> large_model = XLMRModel.from_pretrained('xlmr-large') # 560M parameters\r\n```\r\n\r\n> Btw I was using the XLM-R v0 checkpoints in a project I'm working on and the v0 checkpoints worked slightly better than the checkpoints added today. Is it possible to also add the older checkpoints?",
"Btw using XLM-R I encounter this issue: \r\nBatch size affecting output. #2401\r\n\r\nThis is really annoying and makes it hard to use the model.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@ricardorei Did you happen to successfully use the XLM-R model ?\r\n\r\nI'm trying to see how this model can be used as pretrained step for NMT tasks, I tried raw version from XLM facebook repo and ran into multiple OOM issues. \r\n\r\nThe best suggestion so far I got is to try smaller version of Fairseq xlmr (base) on p3dn.24xlarge instance or the Google TPU Pytorch way. \r\n\r\nThanks ! ",
"@mohammedayub44\r\n\r\nI am using the base model which runs well in a 12GB GPU with batch size of 8. Depending on your implementation and task you can run even bigger batches (16, 24 for example).\r\n\r\nAnd I am also using the version directly from Fairseq, because you can load the v0 checkpoint.\r\n\r\nThe variability in my prediction with different batch sizes I could never figure out. Probably some floating-point precision issues going on under the hood. It doesn't change overall performance but it is annoying...\r\n\r\n ",
"BTW, I am using the TF variant from https://huggingface.co/jplu/tf-xlm-roberta-base and https://huggingface.co/jplu/tf-xlm-roberta-large . I have successfully finetuned even the large model on a 16GB GPU and it was performing substantially better than the base model (on Czech Q&A).",
"@ricardorei \r\nThanks for the confirmation. I'm okay with v0 checkpoints, I just need to check if the model can be fine-tuned for NMT. I'm guessing you're fine tuning for Classification tasks. \r\n\r\nIf you could share the prepare and train commands you are using. It would be easier than digging deep into every fairseq hyperparamter. \r\n\r\nThanks ! ",
"@foxik Is TF variant more suitable for fine-tuning. Any particular preprocessing steps you carried out for fine-tuning. If you can share them, I can map the same for NMT task. \r\n\r\nThanks ! ",
"@mohammedayub44 Yes I was using it for classification/regression. In your case, you need the encoder and decoder part which would take a lot more space. I would suggest that you share parameters between you encoder and decoder. \r\n\r\nI know that, with the right hyperparameter, you can achieve good results by sharing the parameters between your encoder and decoder -> [A Simple and Effective Approach to Automatic Post-Editing\r\nwith Transfer Learning](https://arxiv.org/pdf/1906.06253.pdf)\r\n\r\nIn terms of hyperparameters that I am using, they are very simple. I freeze the encoder for 1 epoch while fine-tuning the classification head and then I fine-tune the entire model. My classification-head has a learning rate of 0.00003 while XLM-R has 0.00001. The optimizer is a standard Adam. This combination of gradual unfreezing with discriminative learning rates works well in my task.",
"@ricardorei Thanks for sharing the paper. Some interesting results there. \r\nAny hints on how I can setup both encoder and decoder of XLM-R and share the parameters using HuggingFace library. I could only find [LM fine-tuning](https://github.com/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb) examples and notebook file. Nothing on NMT based fine-tuning. "
] | 1,573 | 1,588 | 1,584 | NONE | null | # 🌟New model addition
## Model description
Yesterday, Facebook released as _open source_ its new language model called **XLM-R** (XLM-RoBERTa), described on [arXiv](https://arxiv.org/abs/1911.02116). This model uses self-supervised training techniques to achieve state-of-the-art performance in **cross-lingual understanding**, a task in which a model is trained in one language and then used with other languages without additional training data. The model improves upon previous multilingual approaches by incorporating **more training data and languages**, including so-called low-resource languages, which lack extensive labeled and unlabeled data sets.
## Open Source status
* [ ] the model implementation is available: [here](https://github.com/pytorch/fairseq/blob/master/fairseq/models/roberta/model.py) under the **XLMRModel** Python class (line 198)
* [ ] the model weights are available: **Yes**, more details [here](https://github.com/pytorch/fairseq/blob/master/examples/xlmr/README.md)
* [ ] who are the authors: **Facebook AI Research** (Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov)
## Additional context
Facebook says these two sentences about this new model in their [blog](https://ai.facebook.com/blog/-xlm-r-state-of-the-art-cross-lingual-understanding-through-self-supervision/?__xts__[0]=68.ARAEwOOYcfFnZxSeyb88a9qwrCTeTF8aO869NMP3lKL7Xp6aYnOGmS1j9jO6wYt89qDPAiHkpVboO3nuo_GhLR2GM8uVTx2q8m3HFehnIkQjn96QtyD33FxpHipTbT4hC1bohM6Y8tNQirCLKZ28VTRiHBbEpgEP1cr9NuwRUkyVfS0KcQpXv0GFLRFQt4MJvU-nnmrMbBN56MXQ7pi3IGHLVmanXhJbgQFJu7Utq80xa8EGVR3b5SqAZgN6yL7jVwKkcYpnHVdgdJJ6dLdXRm46xK03XbxxrL8ghktmSxXyzhykfPEo_akj0u06syf3HYBsZReDdF178xiEZIgn_2VXIA&__tn__=-UK-R):
> XLM-R represents an important step toward our vision of providing the best possible experience on our platforms for everyone, regardless of what language they speak
> We hope to improve the performance of multilingual models created by the research community, particularly systems that use self-supervised training methods to better understand low-resource languages.
> XLM-R has been trained on **2.5T of data** across **100 languages**, filtered from Common Crawl
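Since the weights above are distributed through fairseq, here is a minimal sketch of loading them via fairseq's `torch.hub` entry point (the hub model names follow the fairseq README linked above; this is not a `transformers` API yet):

```python
import torch

# Downloads the released XLM-R checkpoint through fairseq's hub interface.
xlmr = torch.hub.load("pytorch/fairseq", "xlmr.base")
xlmr.eval()

tokens = xlmr.encode("Hello world!")      # SentencePiece BPE ids
features = xlmr.extract_features(tokens)  # shape: (1, seq_len, hidden_size)
print(features.shape)
```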
"url": "https://api.github.com/repos/huggingface/transformers/issues/1769/reactions",
"total_count": 8,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1769/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1768/comments | https://api.github.com/repos/huggingface/transformers/issues/1768/events | https://github.com/huggingface/transformers/issues/1768 | 519,848,586 | MDU6SXNzdWU1MTk4NDg1ODY= | 1,768 | BERT: Uncased vocabulary has 30,552 tokens whereas cased has 28,996 tokens. Why this difference? | {
"login": "culcomp",
"id": 23293114,
"node_id": "MDQ6VXNlcjIzMjkzMTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/23293114?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/culcomp",
"html_url": "https://github.com/culcomp",
"followers_url": "https://api.github.com/users/culcomp/followers",
"following_url": "https://api.github.com/users/culcomp/following{/other_user}",
"gists_url": "https://api.github.com/users/culcomp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/culcomp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/culcomp/subscriptions",
"organizations_url": "https://api.github.com/users/culcomp/orgs",
"repos_url": "https://api.github.com/users/culcomp/repos",
"events_url": "https://api.github.com/users/culcomp/events{/privacy}",
"received_events_url": "https://api.github.com/users/culcomp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
I was expecting the cased vocabulary to be larger than the uncased one, since it needs to include tokens both with and without capitalization. On the contrary, I found the cased vocabulary to be smaller. May I know the reason for this?
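For reference, the two sizes can be read directly from the tokenizers (a quick sketch; the vocabulary files are downloaded on first use):

```python
from transformers import BertTokenizer

for name in ("bert-base-uncased", "bert-base-cased"):
    tokenizer = BertTokenizer.from_pretrained(name)
    print(name, tokenizer.vocab_size)  # prints the size of each WordPiece vocabulary
```

The two vocabularies were learned separately, on lower-cased and accent-stripped text versus case-preserving text, so their sizes are not directly comparable entry-for-entry.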
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1768/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/1768/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1767/comments | https://api.github.com/repos/huggingface/transformers/issues/1767/events | https://github.com/huggingface/transformers/issues/1767 | 519,755,053 | MDU6SXNzdWU1MTk3NTUwNTM= | 1,767 | GPT2 - XL | {
"login": "anandhperumal",
"id": 12907396,
"node_id": "MDQ6VXNlcjEyOTA3Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/12907396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anandhperumal",
"html_url": "https://github.com/anandhperumal",
"followers_url": "https://api.github.com/users/anandhperumal/followers",
"following_url": "https://api.github.com/users/anandhperumal/following{/other_user}",
"gists_url": "https://api.github.com/users/anandhperumal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anandhperumal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anandhperumal/subscriptions",
"organizations_url": "https://api.github.com/users/anandhperumal/orgs",
"repos_url": "https://api.github.com/users/anandhperumal/repos",
"events_url": "https://api.github.com/users/anandhperumal/events{/privacy}",
"received_events_url": "https://api.github.com/users/anandhperumal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> ## Bug\r\n> Model I am using (Bert, XLNet....):\r\n> \r\n> Language I am using the model on (English, Chinese....):\r\n> \r\n> The problem arise when using:\r\n> \r\n> * [X ] the official example scripts: (give details)\r\n> * [ ] my own modified scripts: (give details)\r\n> \r\n> The tasks I am working on is:\r\n> \r\n> * [ ] an official GLUE/SQUaD task: (give the name)\r\n> * [x] my own task or dataset: (give details)\r\n> \r\n> ## To Reproduce\r\n> Steps to reproduce the behavior:\r\n> \r\n> 1. In the documentation, it's given how to load GPT2-XL. But I'm getting an error when I try to load is GPT2 XL available as of now or it would take some time?\r\n> \r\n> Error:\r\n> \r\n> ```\r\n> >>> model = GPT2DoubleHeadsModel.from_pretrained('gpt2-xl')\r\n> Model name 'gpt2-xl' was not found in model name list (gpt2-large, gpt2, gpt2-medium). We assumed 'gpt2-xl' was a path or url but couldn't find any file associated to this path or url.\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py\", line 492, in from_pretrained\r\n> **kwargs\r\n> File \"/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py\", line 194, in from_pretrained\r\n> raise e\r\n> File \"/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py\", line 180, in from_pretrained\r\n> resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies) File \"/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/file_utils.py\", line 124, in cached_path\r\n> raise EnvironmentError(\"file {} not found\".format(url_or_filename))\r\n> OSError: file gpt2-xl not found\r\n> ```\r\n> \r\n> ## Expected behavior\r\n> ## Environment\r\n> * OS:Ubuntu\r\n> * Python version:3.7\r\n> * PyTorch version: 1.3.1\r\n> * PyTorch Transformers version (or branch):\r\n> * Using GPU ? yes\r\n> * Distributed of parallel setup ? no\r\n> * Any other relevant information:\r\n> \r\n> ## Additional context\r\n\r\nThis \"issue\" you have opened is the same as the #1747 . It's not a bug, but a **mismatch between version on Pypi and source GitHub**.\r\n\r\nAs stated by @LysandreJik 2 days ago , they haven't released the version with OpenAI GPT-2-xl yet on Pypi. Please, install Huggingface from source with the following command:\r\n`pip install git+https://github.com/huggingface/transformers`",
"@TheEdoardo93 sure thanks"
] | 1,573 | 1,573 | 1,573 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....):
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [X ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. In the documentation, it's given how to load GPT2-XL. But I'm getting an error when I try to load is GPT2 XL available as of now or it would take some time?
Error:
```
>>> model = GPT2DoubleHeadsModel.from_pretrained('gpt2-xl')
Model name 'gpt2-xl' was not found in model name list (gpt2-large, gpt2, gpt2-medium). We assumed 'gpt2-xl' was a path or url but couldn't find any file associated to this path or url.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 492, in from_pretrained
**kwargs
File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 194, in from_pretrained
raise e
File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/modeling_utils.py", line 180, in from_pretrained
resolved_config_file = cached_path(config_file, cache_dir=cache_dir, force_download=force_download, proxies=proxies) File "/cshome/anandhpe/.local/lib/python3.5/site-packages/pytorch_transformers/file_utils.py", line 124, in cached_path
raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file gpt2-xl not found
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:Ubuntu
* Python version:3.7
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch):
* Using GPU ? yes
* Distributed of parallel setup ? no
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
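For reference, once the source install mentioned in the comments above is in place, the XL checkpoint resolves like the other GPT-2 sizes (a minimal sketch; the weights are several GB):

```python
from transformers import GPT2DoubleHeadsModel, GPT2Tokenizer

# Works with the GitHub (source) version; the PyPI release at the time
# only knew about gpt2, gpt2-medium and gpt2-large.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2-xl")
```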
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1767/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1767/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1766/comments | https://api.github.com/repos/huggingface/transformers/issues/1766/events | https://github.com/huggingface/transformers/pull/1766 | 519,555,080 | MDExOlB1bGxSZXF1ZXN0MzM4Mjc2NDQw | 1,766 | Added additional training utils for run_squad.py | {
"login": "maxmatical",
"id": 8890262,
"node_id": "MDQ6VXNlcjg4OTAyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8890262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxmatical",
"html_url": "https://github.com/maxmatical",
"followers_url": "https://api.github.com/users/maxmatical/followers",
"following_url": "https://api.github.com/users/maxmatical/following{/other_user}",
"gists_url": "https://api.github.com/users/maxmatical/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxmatical/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxmatical/subscriptions",
"organizations_url": "https://api.github.com/users/maxmatical/orgs",
"repos_url": "https://api.github.com/users/maxmatical/repos",
"events_url": "https://api.github.com/users/maxmatical/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxmatical/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=h1) Report\n> Merging [#1766](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c542df7e554a2014051dd09becf60f157fed524?src=pr&el=desc) will **decrease** coverage by `1.38%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1766 +/- ##\n==========================================\n- Coverage 84.03% 82.65% -1.39% \n==========================================\n Files 94 94 \n Lines 14032 14032 \n==========================================\n- Hits 11792 11598 -194 \n- Misses 2240 2434 +194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `9.85% <0%> (-83.1%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `79.78% <0%> (-17.03%)` | :arrow_down: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `59.41% <0%> (-12.36%)` | :arrow_down: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `71.18% <0%> (-2.44%)` | :arrow_down: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `94.24% <0%> (-2.22%)` | :arrow_down: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1766/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `80.66% <0%> (-1.34%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=footer). Last update [1c542df...5f1eca9](https://codecov.io/gh/huggingface/transformers/pull/1766?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok with that. Merging.\r\ncc @LysandreJik and his refactoring of the `run_squad` example.",
"Actually seems like I can't push a merge commit on your branch to fix conflict.\r\nWill close the PR later unless you can fix it."
] | 1,573 | 1,576 | 1,576 | NONE | null | Added support for different LR schedules and for configuring AdamW's beta1 and beta2 hyperparameters. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1766/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1766",
"html_url": "https://github.com/huggingface/transformers/pull/1766",
"diff_url": "https://github.com/huggingface/transformers/pull/1766.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1766.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1765/comments | https://api.github.com/repos/huggingface/transformers/issues/1765/events | https://github.com/huggingface/transformers/pull/1765 | 519,549,538 | MDExOlB1bGxSZXF1ZXN0MzM4MjcxOTEx | 1,765 | Fix run_bertology.py | {
"login": "adrianbg",
"id": 66751,
"node_id": "MDQ6VXNlcjY2NzUx",
"avatar_url": "https://avatars.githubusercontent.com/u/66751?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adrianbg",
"html_url": "https://github.com/adrianbg",
"followers_url": "https://api.github.com/users/adrianbg/followers",
"following_url": "https://api.github.com/users/adrianbg/following{/other_user}",
"gists_url": "https://api.github.com/users/adrianbg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adrianbg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adrianbg/subscriptions",
"organizations_url": "https://api.github.com/users/adrianbg/orgs",
"repos_url": "https://api.github.com/users/adrianbg/repos",
"events_url": "https://api.github.com/users/adrianbg/events{/privacy}",
"received_events_url": "https://api.github.com/users/adrianbg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=h1) Report\n> Merging [#1765](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1c542df7e554a2014051dd09becf60f157fed524?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1765 +/- ##\n=======================================\n Coverage 84.03% 84.03% \n=======================================\n Files 94 94 \n Lines 14032 14032 \n=======================================\n Hits 11792 11792 \n Misses 2240 2240\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=footer). Last update [1c542df...c199d96](https://codecov.io/gh/huggingface/transformers/pull/1765?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"That looks great, thanks @adrianbg !"
] | 1,573 | 1,573 | 1,573 | CONTRIBUTOR | null | Make imports and `args.overwrite_cache` match `run_glue.py`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1765/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1765",
"html_url": "https://github.com/huggingface/transformers/pull/1765",
"diff_url": "https://github.com/huggingface/transformers/pull/1765.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1765.patch",
"merged_at": 1573248521000
} |
https://api.github.com/repos/huggingface/transformers/issues/1764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1764/comments | https://api.github.com/repos/huggingface/transformers/issues/1764/events | https://github.com/huggingface/transformers/pull/1764 | 519,426,650 | MDExOlB1bGxSZXF1ZXN0MzM4MTcxMDE1 | 1,764 | Bug-fix: Roberta Embeddings Not Masked | {
"login": "DomHudson",
"id": 10864294,
"node_id": "MDQ6VXNlcjEwODY0Mjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/10864294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DomHudson",
"html_url": "https://github.com/DomHudson",
"followers_url": "https://api.github.com/users/DomHudson/followers",
"following_url": "https://api.github.com/users/DomHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/DomHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DomHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DomHudson/subscriptions",
"organizations_url": "https://api.github.com/users/DomHudson/orgs",
"repos_url": "https://api.github.com/users/DomHudson/repos",
"events_url": "https://api.github.com/users/DomHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/DomHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=h1) Report\n> Merging [#1764](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a80778f40e4738071b5d01420a0328bb00cdb356?src=pr&el=desc) will **increase** coverage by `0.02%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1764 +/- ##\n==========================================\n+ Coverage 79.82% 79.84% +0.02% \n==========================================\n Files 131 131 \n Lines 19496 19519 +23 \n==========================================\n+ Hits 15562 15585 +23 \n Misses 3934 3934\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1764/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `72.57% <100%> (+0.8%)` | :arrow_up: |\n| [transformers/tests/modeling\\_roberta\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1764/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3JvYmVydGFfdGVzdC5weQ==) | `78.14% <100%> (+2.95%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=footer). Last update [a80778f...3e52915](https://codecov.io/gh/huggingface/transformers/pull/1764?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks! Any way you could do this PR with just the relevant changes (no whitespace changes, no extraneous commits)?",
"Hi,\r\n\r\nI have pushed just the relevant changes with a single commit.\r\n\r\nMany thanks,\r\nDom",
"Thanks! We need to update the TF model as well. (this might be the reason behind the failing unit test, btw)\r\n\r\nWill do it unless you have the bandwidth (and the inclination) to do it yourself @DomHudson ",
"@DomHudson Fixed a bug introduced by this PR's commit + rebased on top of current master.\r\n\r\nPlease note that for most models, we expect users to define `attention_mask`s themselves. \r\n\r\nIf I'm not mistaken, if a user defined an attention_mask that correctly excluded padding, he would not need the padding-aware position_ids from this PR.",
"LGTM merging"
] | 1,573 | 1,576 | 1,576 | NONE | null | ## Summary
I replace the code that makes the position ids with logic closer to the original fairseq `make_positions` function. It wasn't clear to me what to do in the event that the embeddings are passed in directly through `inputs_embeds`, so I resorted to the old methodology of just generating a positional id for all inputs.
## Closes
#1761 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1764/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1764",
"html_url": "https://github.com/huggingface/transformers/pull/1764",
"diff_url": "https://github.com/huggingface/transformers/pull/1764.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1764.patch",
"merged_at": 1576926808000
} |
https://api.github.com/repos/huggingface/transformers/issues/1763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1763/comments | https://api.github.com/repos/huggingface/transformers/issues/1763/events | https://github.com/huggingface/transformers/pull/1763 | 519,419,951 | MDExOlB1bGxSZXF1ZXN0MzM4MTY1NTQ3 | 1,763 | Fixed training for TF XLNet | {
"login": "tlkh",
"id": 5409617,
"node_id": "MDQ6VXNlcjU0MDk2MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5409617?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tlkh",
"html_url": "https://github.com/tlkh",
"followers_url": "https://api.github.com/users/tlkh/followers",
"following_url": "https://api.github.com/users/tlkh/following{/other_user}",
"gists_url": "https://api.github.com/users/tlkh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tlkh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tlkh/subscriptions",
"organizations_url": "https://api.github.com/users/tlkh/orgs",
"repos_url": "https://api.github.com/users/tlkh/repos",
"events_url": "https://api.github.com/users/tlkh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tlkh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=h1) Report\n> Merging [#1763](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d49c43ff789d309e688fb7b252511e9e618e46db?src=pr&el=desc) will **decrease** coverage by `0.08%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1763 +/- ##\n==========================================\n- Coverage 84.11% 84.03% -0.09% \n==========================================\n Files 105 94 -11 \n Lines 15545 14032 -1513 \n==========================================\n- Hits 13076 11792 -1284 \n+ Misses 2469 2240 -229\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <100%> (-0.37%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (-2.19%)` | :arrow_down: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `89.64% <0%> (-1.22%)` | :arrow_down: |\n| [transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl91dGlscy5weQ==) | `91.2% <0%> (-0.71%)` | :arrow_down: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `58.82% <0%> (-0.64%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3JvYmVydGEucHk=) | `89.9% <0%> (-0.53%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3RyYW5zZm9feGwucHk=) | `92.21% <0%> (-0.33%)` | :arrow_down: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.4% <0%> (-0.33%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (-0.28%)` | :arrow_down: |\n| ... and [37 more](https://codecov.io/gh/huggingface/transformers/pull/1763/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=footer). Last update [d49c43f...41e0859](https://codecov.io/gh/huggingface/transformers/pull/1763?src=pr&el=lastupdated). 
Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This looks good to me, thank you @tlkh !",
"I see that the changes were already incorporated, so closing this PR."
] | 1,573 | 1,575 | 1,575 | CONTRIBUTOR | null | This PR makes the following changes and fully fixes `model.fit()` training for XLNet. It now works, and was tested with a slightly modified `run_tf_glue.py`, and also tested with XLA, AMP and tf.distribute
- Fix dtype error with `input_mask` and `attention_mask`
- `rel_attn_core()` does not work properly in non-eager mode, resulting in shape errors. `tf.shape` is now used to obtain the shape of `ac` instead of calling `ac.shape`, and training now works in non-eager mode.
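A minimal illustration of the non-eager shape issue described above (a sketch for clarity, not the actual PR code — the function name and signature are made up):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([None, None, None, None], tf.float32)])
def scale_by_last_dim(ac):
    # Under this signature ac.shape[3] is None at trace time, so static-shape math fails;
    # tf.shape(ac)[3] is resolved at run time instead.
    return ac / tf.cast(tf.shape(ac)[3], ac.dtype)

print(scale_by_last_dim(tf.ones((1, 2, 3, 4))))  # divides by 4
```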
@thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1763/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1763",
"html_url": "https://github.com/huggingface/transformers/pull/1763",
"diff_url": "https://github.com/huggingface/transformers/pull/1763.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1763.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1762/comments | https://api.github.com/repos/huggingface/transformers/issues/1762/events | https://github.com/huggingface/transformers/issues/1762 | 519,412,002 | MDU6SXNzdWU1MTk0MTIwMDI= | 1,762 | Perplexity for (not-stateful) Transformer - Why is it still fair to compare to RNN? | {
"login": "hoangcuong2011",
"id": 8759715,
"node_id": "MDQ6VXNlcjg3NTk3MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8759715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hoangcuong2011",
"html_url": "https://github.com/hoangcuong2011",
"followers_url": "https://api.github.com/users/hoangcuong2011/followers",
"following_url": "https://api.github.com/users/hoangcuong2011/following{/other_user}",
"gists_url": "https://api.github.com/users/hoangcuong2011/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hoangcuong2011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hoangcuong2011/subscriptions",
"organizations_url": "https://api.github.com/users/hoangcuong2011/orgs",
"repos_url": "https://api.github.com/users/hoangcuong2011/repos",
"events_url": "https://api.github.com/users/hoangcuong2011/events{/privacy}",
"received_events_url": "https://api.github.com/users/hoangcuong2011/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,578 | 1,578 | NONE | null | Greetings,
It is clear to me how to compute perplexity for RNNs as RNNs are a stateful model.
In general, given a very long document, I believe we need to chunk the document into a sequence of chunks, compute the cross-entropy loss for each chunk, then take the average over all the chunks and exponentiate it.
Because RNNs are a stateful model, we can use the last RNN hidden state from the previous chunk to initialize the hidden state of the RNN that handles the next chunk. Because of this, I think an RNN computes the "right" perplexity for a long document.
Meanwhile, Transformers are not a stateful model. For this reason there is no such thing as "transferring" the hidden state from the previous chunk to the next chunk, I guess. So I think Transformers are not really computing the perplexity of a long document; they compute something different.
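To make this concrete, the chunked computation I have in mind looks roughly like the sketch below. The model interface here (taking a chunk plus an optional state and returning logits plus a new state) is my own assumption, not any particular library's API:

```python
import math
import torch
import torch.nn.functional as F

def chunked_perplexity(model, token_ids, chunk_size, stateful):
    """token_ids: 1-D LongTensor holding the whole document.
    `model(inputs, state)` is assumed to return (logits, new_state)."""
    losses, state = [], None
    for start in range(0, token_ids.size(0) - 1, chunk_size):
        chunk = token_ids[start:start + chunk_size + 1]
        inputs, targets = chunk[:-1].unsqueeze(0), chunk[1:].unsqueeze(0)
        logits, new_state = model(inputs, state)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
        losses.append(loss.item())
        # an RNN can carry the state into the next chunk; a Transformer starts fresh
        state = new_state if stateful else None
    return math.exp(sum(losses) / len(losses))
```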
So why is it still fair to compare the two models on perplexity for long documents?
Plus, if I create a non-stateful RNN (e.g. in Keras I can set stateful=False), will the perplexity that the non-stateful model computes without "transferring" the hidden state be the right perplexity we need? Is it comparable to the stateful RNN?
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1762/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1761/comments | https://api.github.com/repos/huggingface/transformers/issues/1761/events | https://github.com/huggingface/transformers/issues/1761 | 519,312,284 | MDU6SXNzdWU1MTkzMTIyODQ= | 1,761 | Roberta Positional Embeddings Not Masked | {
"login": "DomHudson",
"id": 10864294,
"node_id": "MDQ6VXNlcjEwODY0Mjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/10864294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DomHudson",
"html_url": "https://github.com/DomHudson",
"followers_url": "https://api.github.com/users/DomHudson/followers",
"following_url": "https://api.github.com/users/DomHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/DomHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DomHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DomHudson/subscriptions",
"organizations_url": "https://api.github.com/users/DomHudson/orgs",
"repos_url": "https://api.github.com/users/DomHudson/repos",
"events_url": "https://api.github.com/users/DomHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/DomHudson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"I think you are right. Awesome if you can fix this in a PR.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Closed in #1764"
] | 1,573 | 1,578 | 1,578 | NONE | null | ## Bug
Hi, I think for the RoBERTa model, the positional embeddings are created slightly wrong. In both this library and in `fairseq` the positional embeddings are sequential integers starting from `padding_idx + 1`. However, in `fairseq.utils.make_positions` all indices of the input that correspond to the padding index are masked as they go into the positional embedding layer.
## Example
```python
import torch
padding_idx = 0
tensor = torch.as_tensor([
[1, 13, 54, 0, 0],
[1, 55, 12, 24, 0]
])
```
### In Fairseq
```
from fairseq import utils
utils.make_positions(tensor, padding_idx = padding_idx)
>>> tensor([
[1, 2, 3, 0, 0],
[1, 2, 3, 4, 0]
])
```
The positional sequence begins at padding_idx + 1, but all occurrences of the padding index in the input tensor are left untouched. This means the position embeddings are only adjusted for tokens which are not padding.
### In Transformers
This code comes from [modeling_roberta.py#L65](https://github.com/huggingface/transformers/blob/master/transformers/modeling_roberta.py#L65)
```python
# Position numbers begin at padding_idx+1. Padding symbols are ignored.
# cf. fairseq's `utils.make_positions`
position_ids = torch.arange(self.padding_idx+1, seq_length+self.padding_idx+1, dtype=torch.long, device=input_ids.device)
position_ids = position_ids.unsqueeze(0).expand_as(input_shape)
```
The output tensor of these function calls do not retain the masked tokens:
```
>>> tensor([
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]
])
```
## Complete Example
```python
import torch
from fairseq import utils
tensor = torch.as_tensor([
[1, 13, 54, 0, 0],
[1, 55, 12, 24, 0]
])
def fairseq_make_positions(tensor, padding_idx):
mask = tensor.ne(padding_idx).int()
return (
torch.cumsum(mask, dim=1).type_as(mask) * mask
).long() + padding_idx
def transformers_make_positions(input_ids, padding_idx):
input_shape = input_ids.size()
seq_length = input_shape[1]
position_ids = torch.arange(padding_idx+1, seq_length+padding_idx+1, dtype=torch.long, device=input_ids.device)
position_ids = position_ids.unsqueeze(0).expand(input_shape)
return position_ids
print(fairseq_make_positions(tensor, padding_idx = 0))
print(transformers_make_positions(tensor, padding_idx = 0))
```
Output:
```
# Fairseq:
tensor([[1, 2, 3, 0, 0],
[1, 2, 3, 4, 0]])
# Transformers:
tensor([[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]])
```
## Expected behavior
I expect all padding tokens in the input tensor to be masked (assigned the padding index) as they go into the positional embeddings layer, just as they are for the token embeddings layer.
## Additional context
Happy to make a PR if this is agreed to be a bug, but this will mean the positional embeddings train slightly differently to the existing `RobertaEmbeddings` module.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1761/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1761/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1760/comments | https://api.github.com/repos/huggingface/transformers/issues/1760/events | https://github.com/huggingface/transformers/issues/1760 | 519,251,526 | MDU6SXNzdWU1MTkyNTE1MjY= | 1,760 | 'RuntimeError: CUDA error' is occured when encoding text with pre-trained model on cuda | {
"login": "jcw521",
"id": 52779032,
"node_id": "MDQ6VXNlcjUyNzc5MDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/52779032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jcw521",
"html_url": "https://github.com/jcw521",
"followers_url": "https://api.github.com/users/jcw521/followers",
"following_url": "https://api.github.com/users/jcw521/following{/other_user}",
"gists_url": "https://api.github.com/users/jcw521/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jcw521/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jcw521/subscriptions",
"organizations_url": "https://api.github.com/users/jcw521/orgs",
"repos_url": "https://api.github.com/users/jcw521/repos",
"events_url": "https://api.github.com/users/jcw521/events{/privacy}",
"received_events_url": "https://api.github.com/users/jcw521/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> ## Bug\r\n> Model I am using (Bert, XLNet....): I am trying to use Bert encoder\r\n> \r\n> Language I am using the model on (English, Chinese....): English\r\n> \r\n> There are two problems when I use BERT for encoding text to vectors\r\n> Here is Error log\r\n> \r\n> First,\r\n> \r\n> ```\r\n> File \"/home/jcw/VAIRLtext/Text2Vec.py\", line 192, in encode\r\n> embeddings = self.encoder(padded_tokens, attention_mask=masks)[0]\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 627, in forward\r\n> head_mask=head_mask)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 348, in forward\r\n> layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 326, in forward\r\n> attention_outputs = self.attention(hidden_states, attention_mask, head_mask)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 283, in forward\r\n> self_outputs = self.self(input_tensor, attention_mask, head_mask)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 202, in forward\r\n> mixed_query_layer = self.query(hidden_states)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/linear.py\", line 87, in forward\r\n> return F.linear(input, self.weight, self.bias)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/functional.py\", line 1374, in linear\r\n> output += bias\r\n> RuntimeError: CUDA error: device-side assert triggered\r\n> ```\r\n> \r\n> Second,\r\n> \r\n> ```\r\n> File \"/home/jcw/VAIRLtext/Text2Vec.py\", line 192, in encode\r\n> embeddings = self.encoder(padded_tokens, attention_mask=masks)[0]\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 627, in forward\r\n> head_mask=head_mask)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File 
\"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 348, in forward\r\n> layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 326, in forward\r\n> attention_outputs = self.attention(hidden_states, attention_mask, head_mask)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 283, in forward\r\n> self_outputs = self.self(input_tensor, attention_mask, head_mask)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py\", line 202, in forward\r\n> mixed_query_layer = self.query(hidden_states)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n> result = self.forward(*input, **kwargs)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/linear.py\", line 87, in forward\r\n> return F.linear(input, self.weight, self.bias)\r\n> File \"/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/functional.py\", line 1372, in linear\r\n> output = input.matmul(weight.t())\r\n> RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`\r\n> ```\r\n> \r\n> These two errors are coming from BERT encode :(\r\n> here is my code\r\n> \r\n> ```\r\n> class Text2Vec(object):\r\n> def __init__(self, args):\r\n> self.max_lens = args.max_lens\r\n> self.tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')\r\n> self.encoder = self.get_encoder('bert-base-multilingual-cased', args.num_layers)\r\n> self.encoder.to('cuda')\r\n> self.pad_token = self.tokenizer.convert_tokens_to_ids(self.tokenizer.pad_token)\r\n> \r\n> def get_encoder(self, encoder_type, num_layers):\r\n> model = AutoModel.from_pretrained(encoder_type)\r\n> model.eval()\r\n> model.encoder.layer = torch.nn.ModuleList([layer for layer in model.encoder.layer[:num_layers]])\r\n> return model\r\n> \r\n> def padding(self, arr, pad_token):\r\n> lens = torch.LongTensor([len(a) for a in arr])\r\n> padded = torch.ones(len(arr), self.max_lens) * pad_token\r\n> mask = torch.zeros(len(arr), self.max_lens)\r\n> for i, a in enumerate(arr):\r\n> padded[i, :lens[i]] = torch.tensor(a)\r\n> mask[i, :lens[i]] = 1\r\n> return padded, mask\r\n> \r\n> def encode(self, text):\r\n> text = sent_tokenize(text)\r\n> tokens = [self.tokenizer.encode(a, add_special_tokens=True) for a in text]\r\n> padded_tokens, masks = self.padding(tokens, self.pad_token)\r\n> \r\n> padded_tokens = padded_tokens.to('cuda')\r\n> masks = masks.to('cuda')\r\n> \r\n> self.encoder.eval()\r\n> with torch.no_grad():\r\n> embeddings = self.encoder(padded_tokens, attention_mask=masks)[0]\r\n> \r\n> embed = []\r\n> masks = 
list(torch.split(masks, 1, dim=0))\r\n> embeddings = list(torch.split(embeddings, 1, dim=0))\r\n> for embedding, mask in zip(embeddings, masks):\r\n> masked_embed = embedding * mask.squeeze()[:, None]\r\n> embed.append(masked_embed)\r\n> \r\n> return torch.cat(embed, dim=0)\r\n> ```\r\n> \r\n> What am I missing? please advise to me!!\r\n> \r\n> ## Environment\r\n> * OS: ubuntu 16.04\r\n> * Python version: Python 3.7.4\r\n> * PyTorch version: 1.3.0\r\n> * PyTorch Transformers version (or branch): how do I check PyTorch Transformers version?\r\n> * Using GPU ? titan-xp\r\n> * Distributed of parallel setup ? not yet\r\n> * Any other relevant information:\r\n\r\nIn order to check the Transformers version you can insert this line in your code:\r\n\r\n```\r\nimport transformers\r\nprint(transformers.__version__)\r\n```",
"Hi! There was a few things to change on my side to get your code to run.\r\n\r\n1 - when doing `torch.ones(...)`, it initializes a tensor of dtype `float` while it should be of dtype `long`, i had to change the following line in Text2Vec.padding():\r\n\r\n```py\r\npadded = torch.ones(len(arr), self.max_lens, dtype=torch.int64) * pad_token\r\n```\r\n\r\n2 - it crashed on my side at first because I was on the pypi release which has a bug related to the initialization of `token_type_ids` when they're not given to the model (they're not put on GPU). I had to install from master with the following command:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nOnce you've done that it should work fine!"
] | 1,573 | 1,573 | 1,573 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): I am trying to use Bert encoder
Language I am using the model on (English, Chinese....): English
There are two problems when I use BERT for encoding text to vectors
Here is Error log
First,
```
File "/home/jcw/VAIRLtext/Text2Vec.py", line 192, in encode
embeddings = self.encoder(padded_tokens, attention_mask=masks)[0]
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 627, in forward
head_mask=head_mask)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 348, in forward
layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 326, in forward
attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 283, in forward
self_outputs = self.self(input_tensor, attention_mask, head_mask)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 202, in forward
mixed_query_layer = self.query(hidden_states)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/functional.py", line 1374, in linear
output += bias
RuntimeError: CUDA error: device-side assert triggered
```
Second,
```
File "/home/jcw/VAIRLtext/Text2Vec.py", line 192, in encode
embeddings = self.encoder(padded_tokens, attention_mask=masks)[0]
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 627, in forward
head_mask=head_mask)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 348, in forward
layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 326, in forward
attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 283, in forward
self_outputs = self.self(input_tensor, attention_mask, head_mask)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/transformers/modeling_bert.py", line 202, in forward
mixed_query_layer = self.query(hidden_states)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 87, in forward
return F.linear(input, self.weight, self.bias)
File "/home/jcw/tmp/anaconda3/envs/VAIRLtext/lib/python3.7/site-packages/torch/nn/functional.py", line 1372, in linear
output = input.matmul(weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
```
These two errors are coming from BERT encode :(
here is my code
```
import torch
from nltk.tokenize import sent_tokenize  # assumption: NLTK's sentence splitter
from transformers import AutoTokenizer, AutoModel

class Text2Vec(object):
def __init__(self, args):
self.max_lens = args.max_lens
self.tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
self.encoder = self.get_encoder('bert-base-multilingual-cased', args.num_layers)
self.encoder.to('cuda')
self.pad_token = self.tokenizer.convert_tokens_to_ids(self.tokenizer.pad_token)
def get_encoder(self, encoder_type, num_layers):
model = AutoModel.from_pretrained(encoder_type)
model.eval()
model.encoder.layer = torch.nn.ModuleList([layer for layer in model.encoder.layer[:num_layers]])
return model
def padding(self, arr, pad_token):
lens = torch.LongTensor([len(a) for a in arr])
padded = torch.ones(len(arr), self.max_lens) * pad_token
mask = torch.zeros(len(arr), self.max_lens)
for i, a in enumerate(arr):
padded[i, :lens[i]] = torch.tensor(a)
mask[i, :lens[i]] = 1
return padded, mask
def encode(self, text):
text = sent_tokenize(text)
tokens = [self.tokenizer.encode(a, add_special_tokens=True) for a in text]
padded_tokens, masks = self.padding(tokens, self.pad_token)
padded_tokens = padded_tokens.to('cuda')
masks = masks.to('cuda')
self.encoder.eval()
with torch.no_grad():
embeddings = self.encoder(padded_tokens, attention_mask=masks)[0]
embed = []
masks = list(torch.split(masks, 1, dim=0))
embeddings = list(torch.split(embeddings, 1, dim=0))
for embedding, mask in zip(embeddings, masks):
masked_embed = embedding * mask.squeeze()[:, None]
embed.append(masked_embed)
return torch.cat(embed, dim=0)
```
What am I missing? please advise to me!!
## Environment
* OS: ubuntu 16.04
* Python version: Python 3.7.4
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU ? titan-xp
* Distributed of parallel setup ? not yet
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1760/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1759/comments | https://api.github.com/repos/huggingface/transformers/issues/1759/events | https://github.com/huggingface/transformers/issues/1759 | 519,165,735 | MDU6SXNzdWU1MTkxNjU3MzU= | 1,759 | NameError: name 'DUMMY_INPUTS' is not defined | {
"login": "techwithshadab",
"id": 10863620,
"node_id": "MDQ6VXNlcjEwODYzNjIw",
"avatar_url": "https://avatars.githubusercontent.com/u/10863620?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/techwithshadab",
"html_url": "https://github.com/techwithshadab",
"followers_url": "https://api.github.com/users/techwithshadab/followers",
"following_url": "https://api.github.com/users/techwithshadab/following{/other_user}",
"gists_url": "https://api.github.com/users/techwithshadab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/techwithshadab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/techwithshadab/subscriptions",
"organizations_url": "https://api.github.com/users/techwithshadab/orgs",
"repos_url": "https://api.github.com/users/techwithshadab/repos",
"events_url": "https://api.github.com/users/techwithshadab/events{/privacy}",
"received_events_url": "https://api.github.com/users/techwithshadab/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"> ## Can someone help me with resolving this error?\r\n> \r\n\r\nWithout any other information, it's very difficult to understand which is the problem with your Jupyter Notebook! **Which is the content of your directory?** Moreover, **provide the steps to reproduce this bug!**\r\nPlease, link your Jupyter Notebook and **provide more information** (e.g. OS, PyTorch version, TensorFlow version, Transformers version, etc.)",
"@TheEdoardo93 Below are the package version details which I'm using on Windows 10 64 bit machine:\r\njupyter 1.0.0 \r\ntensorflow 2.0.0\r\ntorch 1.3.0+cpu \r\ntorchvision 0.4.1+cpu\r\ntransformers 2.1.1\r\n\r\nI have trained the model on colab GPU version, now was trying to load the pre-trained model on my local machine.\r\nLink of the notebook: https://colab.research.google.com/drive/1URz5MkjZwH621zGG_89S2KahCCBK0Se8\r\nIt's working fine on colab, but when I'm trying to resume on my local machine from loading the trained model, I'm getting this error.\r\nLet me know if any more info needed.",
"Same problem here, using transformers in Tensorflow 2.0.0b0. Training is ok:\r\n\r\n```\r\nimport tensorflow as tf\r\nimport tensorflow_datasets\r\nfrom transformers import *\r\n\r\ntf.compat.v1.enable_eager_execution()\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\nmodel = TFBertForSequenceClassification.from_pretrained('bert-base-cased')\r\ndata = tensorflow_datasets.load('glue/mrpc')\r\n\r\ntrain_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')\r\nvalid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')\r\ntrain_dataset = train_dataset.shuffle(100).batch(32).repeat(2)\r\nvalid_dataset = valid_dataset.batch(64)\r\n\r\noptimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)\r\nloss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)\r\nmetric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')\r\nmodel.compile(optimizer=optimizer, loss=loss, metrics=[metric])\r\n\r\nhistory = model.fit(train_dataset, epochs=2, steps_per_epoch=115,\r\n validation_data=valid_dataset, validation_steps=7)\r\n\r\nmodel.save_pretrained('/home/rubens_gmail_com/tf')\r\n\r\nTrain for 115 steps, validate for 7 steps\r\nEpoch 1/2\r\n115/115 [==============================] - 57s 494ms/step - loss: 0.5496 - accuracy: 0.7331 - val_loss: 0.4226 - val_accuracy: 0.8235\r\nEpoch 2/2\r\n115/115 [==============================] - 34s 291ms/step - loss: 0.2939 - accuracy: 0.8817 - val_loss: 0.4039 - val_accuracy: 0.8456\r\n````\r\nBut this line of code give the following error:\r\n\r\n```\r\npytorch_model = BertForSequenceClassification.from_pretrained('/home/rubens_gmail_com/tf', from_tf=True,num_labels = 8)\r\n\r\nNameError Traceback (most recent call last)\r\n<ipython-input-19-ebb1b098c156> in <module>\r\n----> 1 pytorch_model = BertForSequenceClassification.from_pretrained('/home/rubensvectomobile_gmail_com/tf', from_tf=True,num_labels = 8)\r\n\r\n~/.local/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 357 try:\r\n 358 from transformers import load_tf2_checkpoint_in_pytorch_model\r\n--> 359 model = load_tf2_checkpoint_in_pytorch_model(model, resolved_archive_file, allow_missing_keys=True)\r\n 360 except ImportError as e:\r\n 361 logger.error(\"Loading a TensorFlow model in PyTorch, requires both PyTorch and TensorFlow to be installed. Please see \"\r\n\r\n~/.local/lib/python3.7/site-packages/transformers/modeling_tf_pytorch_utils.py in load_tf2_checkpoint_in_pytorch_model(pt_model, tf_checkpoint_path, tf_inputs, allow_missing_keys)\r\n 199 \r\n 200 if tf_inputs is None:\r\n--> 201 tf_inputs = tf.constant(DUMMY_INPUTS)\r\n 202 \r\n 203 if tf_inputs is not None:\r\n\r\nNameError: name 'DUMMY_INPUTS' is not defined\r\n```\r\nPyTorch, Tensorflow, transformers are up to date, as well as nvidia-ml-py3 and fast.ai. I'm on GCP using Debian, Anaconda environment w/ 8 x V100.\r\n\r\n",
"Have same issue following the examples. Everything most recent version. I am on a MacBook Pro. Virtualenv environment. Python3. ",
"I had the same issue. I think it was fixed with this commit - https://github.com/huggingface/transformers/pull/1509/commits/099358675899f759110ad8ccecc22c2fab9b1888.\r\n\r\nPerhaps try to re-install from source as @LysandreJik has mentioned in https://github.com/huggingface/transformers/issues/1532, or just follow the file path in the error and manually make the changes yourself. Works for me now. :)\r\n\r\n```\r\n201 - tf_inputs = tf.constant(DUMMY_INPUTS)\r\n + tf_inputs = tf_model.dummy_inputs\r\n```",
"As I mentioned here [issue](https://github.com/huggingface/transformers/issues/1810#issuecomment-553108898) I was able to make a workaround, as I'm using GCP:\r\n\r\nI installed `transformers` using:\r\n\r\n```\r\nconda install -c conda-forge transformers\r\n```\r\n\r\nIn Python 3.7.4, then I added `DUMMY_INPUTS = [[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]]` after variable `logger` in `/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py`, because it was missing.\r\n\r\nIt was interesting, because in GCP, transformers were not showing in `python3` , only in `sudo python3`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,579 | 1,579 | NONE | null | ## ❓ Can someone help me with resolving this error?
<!-- A clear and concise description of the question. -->

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1759/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1758/comments | https://api.github.com/repos/huggingface/transformers/issues/1758/events | https://github.com/huggingface/transformers/issues/1758 | 519,119,817 | MDU6SXNzdWU1MTkxMTk4MTc= | 1,758 | the BertModel have the class "BertForTokenClassification", why XLNetModel don't have the class | {
"login": "zyxdSTU",
"id": 26239665,
"node_id": "MDQ6VXNlcjI2MjM5NjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/26239665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyxdSTU",
"html_url": "https://github.com/zyxdSTU",
"followers_url": "https://api.github.com/users/zyxdSTU/followers",
"following_url": "https://api.github.com/users/zyxdSTU/following{/other_user}",
"gists_url": "https://api.github.com/users/zyxdSTU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyxdSTU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyxdSTU/subscriptions",
"organizations_url": "https://api.github.com/users/zyxdSTU/orgs",
"repos_url": "https://api.github.com/users/zyxdSTU/repos",
"events_url": "https://api.github.com/users/zyxdSTU/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyxdSTU/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"By reading the code, I find the BertModel have the class \"BertForTokenClassification\", but XLNetModel\r\ndon't have the class \"XLNetForTokenClassification\". Why? Can someone explain this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1758/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1757/comments | https://api.github.com/repos/huggingface/transformers/issues/1757/events | https://github.com/huggingface/transformers/issues/1757 | 519,101,993 | MDU6SXNzdWU1MTkxMDE5OTM= | 1,757 | Is it possible fine tune XLNet? | {
"login": "tyu0912",
"id": 24836159,
"node_id": "MDQ6VXNlcjI0ODM2MTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/24836159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyu0912",
"html_url": "https://github.com/tyu0912",
"followers_url": "https://api.github.com/users/tyu0912/followers",
"following_url": "https://api.github.com/users/tyu0912/following{/other_user}",
"gists_url": "https://api.github.com/users/tyu0912/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyu0912/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyu0912/subscriptions",
"organizations_url": "https://api.github.com/users/tyu0912/orgs",
"repos_url": "https://api.github.com/users/tyu0912/repos",
"events_url": "https://api.github.com/users/tyu0912/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyu0912/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have tried to fine-tune xlnet on squad1.0 and squad2.0, although it's a few points lower than original results, but it works fine. This #1803 is for fine-tuning on squad2.0, I'm not familiar with other datasets.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,579 | 1,579 | NONE | null | ## ❓ Questions & Help
Hello, is there currently a script or recommended process to fine-tune XLNet? Currently I am writing my own, using https://mccormickml.com/2019/09/19/XLNet-fine-tuning/ as an inspiration, but overall I am still pretty new at this. I managed to get it to run, but it seems to be taking forever, so I just thought I'd check.
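For context, the skeleton of what I'm running looks roughly like this — the toy dataset, the naive right-padding, and the hyperparameters are placeholders, not the tutorial's actual code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import XLNetTokenizer, XLNetForSequenceClassification, AdamW

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)
model.train()

# toy stand-in for the real data
texts, labels = ["a good example", "a bad example"], [1, 0]
ids = [tokenizer.encode(t, add_special_tokens=True)[:64] for t in texts]
ids = [seq + [tokenizer.pad_token_id] * (64 - len(seq)) for seq in ids]  # simplistic padding, no attention mask
loader = DataLoader(TensorDataset(torch.tensor(ids), torch.tensor(labels)), batch_size=2)

optimizer = AdamW(model.parameters(), lr=2e-5)
for input_ids, y in loader:
    loss = model(input_ids, labels=y)[0]  # loss is the first element when labels are given
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```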
Thanks! Any help would be great.
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1757/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1756/comments | https://api.github.com/repos/huggingface/transformers/issues/1756/events | https://github.com/huggingface/transformers/issues/1756 | 519,087,242 | MDU6SXNzdWU1MTkwODcyNDI= | 1,756 | Is it okay to define and use new model by using only a part of full GPT model blocks? Or anyone tried to do so? | {
"login": "duzani",
"id": 30222444,
"node_id": "MDQ6VXNlcjMwMjIyNDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/30222444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duzani",
"html_url": "https://github.com/duzani",
"followers_url": "https://api.github.com/users/duzani/followers",
"following_url": "https://api.github.com/users/duzani/following{/other_user}",
"gists_url": "https://api.github.com/users/duzani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duzani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duzani/subscriptions",
"organizations_url": "https://api.github.com/users/duzani/orgs",
"repos_url": "https://api.github.com/users/duzani/repos",
"events_url": "https://api.github.com/users/duzani/events{/privacy}",
"received_events_url": "https://api.github.com/users/duzani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"### Hi, \r\nAs far as I remember from my experiments with GPT2, the GPT2 blocks will work on its own. What I mean with that is that **the model will produce reasonable outputs, even if you leave out some layers from the original model.** So your idea should work. \r\nWhat I'm not quite sure about is the performance of an ONLY 1-2 layers model. I would guess the model will produce valid sentences, however the quality might be not optimal. \r\n\r\n### Another Tip: \r\nIf you want to build this model, load the original GPT2 model from the lib and delete the layers you don't want manually. This is a relatively ugly solution but its very time efficient ). Also you dont have to define you own forward function etc.",
"Did you take a look at `distilgpt2`? It has 6-layer, 768-hidden, 12-heads, 82M parameters.\r\n\r\ncc @VictorSanh @LysandreJik ",
"Hi! Indeed, if you're looking for a smaller GPT-2 model then DistilGPT-2 has half of the small GPT-2 layers. However, if you want to create your own GPT-2 model with an arbitrary number of layers, you can do so very easily with the configuration file:\r\n\r\n```py\r\nfrom transformers import GPT2Model, GPT2Config\r\n\r\nconfig = GPT2Config(n_layer=2)\r\nmodel = GPT2Model(config)\r\n```\r\n\r\nThis will define a model with only two layers. Please be aware that the weights will be randomly initialized if you go down this route, whereas the weights will already be trained if you use DistilGPT-2.",
"Thank you guys! Really helped me a lot. I changed the GPT2Config as @LysandreJik told me and loaded pretrained weight to those layers :)\r\n"
] | 1,573 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
As far as I know, the smallest GPT2 model that comes with pretrained weights has a hidden size of 768 and 12 transformer blocks.
However, I'm considering defining a 'smaller' GPT model with only 1 or 2 transformer blocks and loading the weights for those blocks from the corresponding blocks of the pretrained GPT weights.
This is basically because the original model size is too big to fit in my GPU environment,
so I'm wondering if it will work or not. I think it would at least be better than starting training from randomly initialized weights, but I'm not sure about it.
So I'd like to ask what you think.
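Roughly, what I have in mind is the sketch below (module names follow this library's GPT-2 implementation; whether a partial copy like this actually gives good quality is exactly my question):

```python
from transformers import GPT2LMHeadModel, GPT2Config

n_layer = 2  # how many blocks I can afford on my GPU
full = GPT2LMHeadModel.from_pretrained("gpt2")
small = GPT2LMHeadModel(GPT2Config(n_layer=n_layer))  # randomly initialized, same hidden size/vocab

# copy the embeddings, the first n_layer transformer blocks, and the final layer norm
small.transformer.wte.load_state_dict(full.transformer.wte.state_dict())
small.transformer.wpe.load_state_dict(full.transformer.wpe.state_dict())
for i in range(n_layer):
    small.transformer.h[i].load_state_dict(full.transformer.h[i].state_dict())
small.transformer.ln_f.load_state_dict(full.transformer.ln_f.state_dict())
```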
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1756/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1755/comments | https://api.github.com/repos/huggingface/transformers/issues/1755/events | https://github.com/huggingface/transformers/issues/1755 | 519,076,433 | MDU6SXNzdWU1MTkwNzY0MzM= | 1,755 | How to add weighted CrossEntropy loss in sequence classification task? | {
"login": "TinaChen95",
"id": 24587336,
"node_id": "MDQ6VXNlcjI0NTg3MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/24587336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TinaChen95",
"html_url": "https://github.com/TinaChen95",
"followers_url": "https://api.github.com/users/TinaChen95/followers",
"following_url": "https://api.github.com/users/TinaChen95/following{/other_user}",
"gists_url": "https://api.github.com/users/TinaChen95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TinaChen95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TinaChen95/subscriptions",
"organizations_url": "https://api.github.com/users/TinaChen95/orgs",
"repos_url": "https://api.github.com/users/TinaChen95/repos",
"events_url": "https://api.github.com/users/TinaChen95/events{/privacy}",
"received_events_url": "https://api.github.com/users/TinaChen95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, no specific advice other than computing the loss your-self outside of the model's forward method."
] | 1,573 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I am doing a sequence classification task based on run_ner.py. My dataset is imbalanced and, apparently, the 'O' label appears far more often than the other labels. I'm wondering if it would help to use a weighted CrossEntropy loss.
If the answer is 'yes', how can I add the weights to the loss function? Currently I hard-code the weights in the BertForSequenceClassification class (in modeling_bert.py). I feel this is not a smart way to do it. Do you have some advice on how to add the weight information?
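To make the question concrete, this is the kind of weighting I have in mind - a minimal sketch with invented class weights, computed from the logits outside the model rather than inside modeling_bert.py (I use BertForTokenClassification here because run_ner.py is token-level; the same pattern applies to BertForSequenceClassification):

```python
import torch
from torch import nn
from transformers import BertForTokenClassification

# Hypothetical setup: 9 tag types, with the dominant 'O' tag as label id 0.
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=9)

input_ids = torch.randint(0, 28996, (2, 16))      # dummy batch, only for illustration
attention_mask = torch.ones_like(input_ids)
labels = torch.randint(0, 9, (2, 16))

class_weights = torch.tensor([0.1] + [1.0] * 8)   # made-up values: down-weight 'O'
loss_fct = nn.CrossEntropyLoss(weight=class_weights)

logits = model(input_ids, attention_mask=attention_mask)[0]   # (batch, seq_len, num_labels)
loss = loss_fct(logits.view(-1, 9), labels.view(-1))
loss.backward()
```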
Many thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1755/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1755/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1754/comments | https://api.github.com/repos/huggingface/transformers/issues/1754/events | https://github.com/huggingface/transformers/issues/1754 | 518,884,300 | MDU6SXNzdWU1MTg4ODQzMDA= | 1,754 | Out of Memory Error (OOM) only during evaluation phase of run_lm_finetuning.py and run_glue.py | {
"login": "CMobley7",
"id": 10121829,
"node_id": "MDQ6VXNlcjEwMTIxODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/10121829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CMobley7",
"html_url": "https://github.com/CMobley7",
"followers_url": "https://api.github.com/users/CMobley7/followers",
"following_url": "https://api.github.com/users/CMobley7/following{/other_user}",
"gists_url": "https://api.github.com/users/CMobley7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CMobley7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CMobley7/subscriptions",
"organizations_url": "https://api.github.com/users/CMobley7/orgs",
"repos_url": "https://api.github.com/users/CMobley7/repos",
"events_url": "https://api.github.com/users/CMobley7/events{/privacy}",
"received_events_url": "https://api.github.com/users/CMobley7/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you tried the following lines after the call to the `evaluate` function (in the loop over checkpoints)?\r\n\r\n```python\r\ndel model\r\ntorch.cuda.empty_cache()\r\n```\r\n\r\nLet me know if this works.",
"@rlouf, I changed\r\n\r\n```\r\n# Evaluation\r\n results = {}\r\n if args.do_eval and args.local_rank in [-1, 0]:\r\n checkpoints = [args.output_dir]\r\n if args.eval_all_checkpoints:\r\n checkpoints = list(os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + '/**/' + WEIGHTS_NAME, recursive=True)))\r\n logging.getLogger(\"transformers.modeling_utils\").setLevel(logging.WARN) # Reduce logging\r\n logger.info(\"Evaluate the following checkpoints: %s\", checkpoints)\r\n for checkpoint in checkpoints:\r\n global_step = checkpoint.split('-')[-1] if len(checkpoints) > 1 else \"\"\r\n prefix = checkpoint.split('/')[-1] if checkpoint.find('checkpoint') != -1 else \"\"\r\n \r\n model = model_class.from_pretrained(checkpoint)\r\n model.to(args.device)\r\n result = evaluate(args, model, tokenizer, prefix=prefix)\r\n result = dict((k + '_{}'.format(global_step), v) for k, v in result.items())\r\n results.update(result)\r\n\r\n return results\r\n```\r\nto\r\n```\r\n# Evaluation\r\n results = {}\r\n if args.do_eval and args.local_rank in [-1, 0]:\r\n checkpoints = [args.output_dir]\r\n if args.eval_all_checkpoints:\r\n checkpoints = list(os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + '/**/' + WEIGHTS_NAME, recursive=True)))\r\n logging.getLogger(\"transformers.modeling_utils\").setLevel(logging.WARN) # Reduce logging\r\n logger.info(\"Evaluate the following checkpoints: %s\", checkpoints)\r\n for checkpoint in checkpoints:\r\n global_step = checkpoint.split('-')[-1] if len(checkpoints) > 1 else \"\"\r\n prefix = checkpoint.split('/')[-1] if checkpoint.find('checkpoint') != -1 else \"\"\r\n \r\n model = model_class.from_pretrained(checkpoint)\r\n model.to(args.device)\r\n result = evaluate(args, model, tokenizer, prefix=prefix)\r\n del model\r\n torch.cuda.empty_cache()\r\n result = dict((k + '_{}'.format(global_step), v) for k, v in result.items())\r\n results.update(result)\r\n\r\n return results\r\n\r\n```\r\nwhich didn't help.\r\n\r\nSo, I looked through https://github.com/huggingface/transformers/issues/1742 and added\r\n\r\n```\r\ndel model\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n```\r\n\r\nThis didn't work either; so, I change the scheduler to the following while leaving\r\n\r\n```\r\ndel model\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\n```\r\n\r\n```\r\ndef warmup_linear_schedule(optimizer, warmup_steps, t_total, last_epoch=-1):\r\n def lr_lambda(step):\r\n if step < warmup_steps:\r\n return float(step) / float(max(1, warmup_steps))\r\n return max(0.0, float(t_total - step) / float(max(1.0, t_total - warmup_steps)))\r\n\r\n return LambdaLR(optimizer, lr_lambda, last_epoch=-1)\r\n\r\n```\r\nThis hasn't worked either.\r\n\r\nAny ideas? It doesn't seem like I can `del model`",
"Hi! Changing the scheduler should not do anything since you are in evaluation mode. This is likely not due to the model either, since `del` + `gc.collect()` should remove it from memory. Could you look at by how much the memory increases after each step?",
"So, when using `roberta-large` the memory increases by ~2.25 GB after each checkpoint starting at ~7GB.",
"Can you have a look at which tensors are still being tracked, using [this code](https://github.com/huggingface/transformers/issues/1742#issuecomment-550327172) after your del statement?",
"My apologies for the delay in my answer. I had hoped that https://github.com/huggingface/transformers/commit/9629e2c676efab31d1f27a3f27d811e91fd05a92 would solve my issue; however, evaluation doesn't appear to be distributed. So, it is still using one card and is still ~2.25 GB after each checkpoint. Should I open up a separate issue for the distributed evaluation issue? After adding \r\n```\r\ndef print_gpu_obj():\r\n import gc\r\n GPU_count = 0\r\n Pinned_count = 0\r\n for tracked_object in gc.get_objects():\r\n if torch.is_tensor(tracked_object):\r\n if tracked_object.is_cuda:\r\n GPU_count+=1\r\n if tracked_object.is_pinned():\r\n Pinned_count+=1\r\n \r\n\r\n print(\"There are {} cuda objects\".format(GPU_count))\r\n print(\"There are {} pinned objects\".format(Pinned_count))\r\n```\r\nto the code, I get\r\n\r\nbefore `del model`\r\ncheckpoint 0\r\n```\r\nThere are 1087 cuda objects\r\nThere are 0 pinned objects\r\n```\r\ncheckpoint 1\r\n```\r\nThere are 1673 cuda objects\r\nThere are 0 pinned objects\r\n```\r\ncheckpoint 2\r\n```\r\nThere are 2259 cuda objects\r\nThere are 0 pinned objects\r\n```\r\ncheckpoint 3\r\n```\r\nThere are 2845 cuda objects\r\nThere are 0 pinned objects\r\n```\r\n\r\nafter `del model` and before `gc.collect`\r\ncheckpoint 0\r\n```\r\nThere are 984 cuda objects\r\nThere are 0 pinned objects\r\n```\r\ncheckpoint 1\r\n```\r\nThere are 1570 cuda objects\r\nThere are 0 pinned objects\r\n```\r\ncheckpoint 2\r\n```\r\nThere are 2156 cuda objects\r\nThere are 0 pinned objects\r\n```\r\ncheckpoint 3\r\n```\r\nThere are 2742 cuda objects\r\nThere are 0 pinned objects\r\n```\r\n\r\nafter `gc.collect` and before `torch.cuda.empty_cache()` and after `torch.cuda.empty_cache()` are both the same as after `del model` and before `gc.collect`.\r\n\r\nIt appears that 586 objects are added at each checkpoint, which cannot be deleted and unallocated. Any suggestions?",
"This looks like some data is kept in memory after evaluation. Any way you could try to track this down?",
"My apologies for the delay in getting back to this, as well as my ignorance on this subject, but how would I track this down. As in what commands or tools should I use? Again sorry for the late reply, I got stuck on another project for a bit.",
"I have a similar issue. During training with run_lm_finetuning.py, the memory usage of a single GPU is 30G/32G. And after the training stage, I mean at the beginning of eval, the memory doesn't drop down and the evaluation stage is always getting OOM. If I add torch.cuda.empty_cache() before evaluation, the memory drops down to 22G, which means there are still 22GB tensors exist but I don't know where they are and how to free them. If I do the evaluation with run_lm_finetuning.py without training, everything is ok and the memory usage is about 8G.",
"> I have a similar issue. During training with run_lm_finetuning.py, the memory usage of a single GPU is 30G/32G. And after the training stage, I mean at the beginning of eval, the memory doesn't drop down and the evaluation stage is always getting OOM. If I add torch.cuda.empty_cache() before evaluation, the memory drops down to 22G, which means there are still 22GB tensors exist but I don't know where they are and how to free them. If I do the evaluation with run_lm_finetuning.py without training, everything is ok and the memory usage is about 8G.\r\n\r\nPS: If I set --evaluate_during_training, there is no OOM issue.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,584 | 1,584 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): roberta-large
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* the official example scripts: run_lm_finetuning.py and run_glue.py
The tasks I am working on is:
* my own task or dataset: my own dataset which I have put into the correct format for `run_lm_finetuning` and the `SST-2` task for `run_glue.py`
## To Reproduce
When I use the official `run_lm_finetuning.py` and `run_glue.py` scripts with `roberta-large` on my own data, I run into out-of-memory (`OOM`) issues during evaluation. Training completes successfully with a `batch_size` of 1 and a `max_sequence_length` of 512. However, when evaluating the 4th or 5th checkpoint, I get an `OOM` error. I tested using distributed training with FP16 on 2 Tesla P100-SXM2 16 GB cards and 2 Tesla P100-PCIE 12 GB cards, with the same results. I'm currently testing on 2 Tesla M40 24 GB cards; however, I don't always have access to these GPUs, so this is untenable long term.
## Expected behavior
Since it finishes training and evaluation of the first few checkpoints with the aforementioned setting, I would expect it to be able to evaluate every checkpoint without error. However, it seems that there is a bug in the evaluation code. I assume it is not properly clearing the memory after testing each checkpoint.
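For reference, the cleanup suggested in the discussion above (explicitly dropping each checkpoint's model before loading the next) looks roughly like the sketch below; the names (`checkpoints`, `model_class`, `evaluate`, `args`, `tokenizer`, `prefix`, `results`) refer to the variables already defined in the example scripts, and in my tests it did not fully stop the memory growth:

```python
import gc
import torch

# Mirrors the evaluation loop in run_glue.py / run_lm_finetuning.py.
for checkpoint in checkpoints:
    model = model_class.from_pretrained(checkpoint)
    model.to(args.device)
    result = evaluate(args, model, tokenizer, prefix=prefix)
    results.update(result)

    # Release the checkpoint before loading the next one.
    del model
    gc.collect()
    torch.cuda.empty_cache()
```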
## Environment
* OS: Ubuntu 18.04
* Python version: 3.6.8
* PyTorch version: 1.3
* PyTorch Transformers version (or branch): 2.1.1
* Using GPU Yes, Tesla M40 24GB, Tesla P100-SXM2 16 GB, Tesla P100-PCIE 12 GB
* Distributed or parallel setup: 2x of each separate GPU
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1754/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1753/comments | https://api.github.com/repos/huggingface/transformers/issues/1753/events | https://github.com/huggingface/transformers/pull/1753 | 518,865,678 | MDExOlB1bGxSZXF1ZXN0MzM3NzAyOTcz | 1,753 | Added Mish Activation Function | {
"login": "digantamisra98",
"id": 34192716,
"node_id": "MDQ6VXNlcjM0MTkyNzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/34192716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/digantamisra98",
"html_url": "https://github.com/digantamisra98",
"followers_url": "https://api.github.com/users/digantamisra98/followers",
"following_url": "https://api.github.com/users/digantamisra98/following{/other_user}",
"gists_url": "https://api.github.com/users/digantamisra98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/digantamisra98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/digantamisra98/subscriptions",
"organizations_url": "https://api.github.com/users/digantamisra98/orgs",
"repos_url": "https://api.github.com/users/digantamisra98/repos",
"events_url": "https://api.github.com/users/digantamisra98/events{/privacy}",
"received_events_url": "https://api.github.com/users/digantamisra98/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, why not."
] | 1,573 | 1,574 | 1,574 | CONTRIBUTOR | null | Mish is a new activation function proposed here - https://arxiv.org/abs/1908.08681
It has seen some recent success and has been adopted in spaCy, Thinc, TensorFlow Addons and FastAI-dev.
All benchmarks recorded so far (including against ReLU, Swish and GELU) are available in the repository - https://github.com/digantamisra98/Mish
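For reference, a minimal PyTorch sketch of the function, matching the definition in the paper, `mish(x) = x * tanh(softplus(x))`:

```python
import torch
import torch.nn.functional as F

def mish(x):
    # Mish(x) = x * tanh(softplus(x)); see https://arxiv.org/abs/1908.08681
    return x * torch.tanh(F.softplus(x))
```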
Might be a good addition to experiment with, especially in the BERT model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1753/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1753",
"html_url": "https://github.com/huggingface/transformers/pull/1753",
"diff_url": "https://github.com/huggingface/transformers/pull/1753.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1753.patch",
"merged_at": 1574951048000
} |
https://api.github.com/repos/huggingface/transformers/issues/1752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1752/comments | https://api.github.com/repos/huggingface/transformers/issues/1752/events | https://github.com/huggingface/transformers/issues/1752 | 518,788,776 | MDU6SXNzdWU1MTg3ODg3NzY= | 1,752 | Subtokens in BPE in GPT2 | {
"login": "weiguowilliam",
"id": 31396452,
"node_id": "MDQ6VXNlcjMxMzk2NDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/31396452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiguowilliam",
"html_url": "https://github.com/weiguowilliam",
"followers_url": "https://api.github.com/users/weiguowilliam/followers",
"following_url": "https://api.github.com/users/weiguowilliam/following{/other_user}",
"gists_url": "https://api.github.com/users/weiguowilliam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiguowilliam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiguowilliam/subscriptions",
"organizations_url": "https://api.github.com/users/weiguowilliam/orgs",
"repos_url": "https://api.github.com/users/weiguowilliam/repos",
"events_url": "https://api.github.com/users/weiguowilliam/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiguowilliam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"BPE is a data compression algorithm, the result depends on the statistical properties of the corpus of text on which it has been trained.\r\n\r\nYou can start with this article https://leimao.github.io/blog/Byte-Pair-Encoding/ to get more of an intuition of how it works.",
"> BPE is a data compression algorithm, the result depends on the statistical properties of the corpus of text on which it has been trained.\r\n> \r\n> You can start with this article https://leimao.github.io/blog/Byte-Pair-Encoding/ to get more of an intuition of how it works.\r\n\r\nThank you! That helps!"
] | 1,573 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hi, I have a question.
For a word represented by more than one subtoken in BPE, I wonder what principle is used to divide it into subtokens?
I use the GPT2 small version as an example. "bookshelf" is divided into "books", "he" and "lf" (the encoder indices are [3835, 258, 1652]). But "book" and "shelf" are each represented by a single subtoken (book is 1492, shelf is 18316). Why is "bookshelf" divided into 3 parts instead of "book" and "shelf"?
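A quick way to inspect this is to query the tokenizer directly; the exact pieces come from the learned merge table of the pretrained tokenizer, so they follow corpus statistics rather than word morphology (the prints below make no assumption about the output):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# The split is the greedy application of the learned BPE merges,
# not a linguistic segmentation such as "book" + "shelf".
print(tokenizer.tokenize("bookshelf"))
print(tokenizer.encode("bookshelf"))
print(tokenizer.encode("book"), tokenizer.encode("shelf"))
```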
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1752/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1751/comments | https://api.github.com/repos/huggingface/transformers/issues/1751/events | https://github.com/huggingface/transformers/issues/1751 | 518,669,764 | MDU6SXNzdWU1MTg2Njk3NjQ= | 1,751 | input token embedding issues | {
"login": "abrhaleitela",
"id": 43967278,
"node_id": "MDQ6VXNlcjQzOTY3Mjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/43967278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abrhaleitela",
"html_url": "https://github.com/abrhaleitela",
"followers_url": "https://api.github.com/users/abrhaleitela/followers",
"following_url": "https://api.github.com/users/abrhaleitela/following{/other_user}",
"gists_url": "https://api.github.com/users/abrhaleitela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abrhaleitela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abrhaleitela/subscriptions",
"organizations_url": "https://api.github.com/users/abrhaleitela/orgs",
"repos_url": "https://api.github.com/users/abrhaleitela/repos",
"events_url": "https://api.github.com/users/abrhaleitela/events{/privacy}",
"received_events_url": "https://api.github.com/users/abrhaleitela/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,578 | 1,578 | NONE | null | 1. When fine-tuning XLNet, are the word embeddings kept constant (as learned by the pre-trained model), or are they initialized randomly and learned contextually during fine-tuning itself?
2. And how would I set my own word embeddings during fine-tuning (XLNetForSequenceClassification)? I have my own token embeddings that I want to feed to my XLNetForSequenceClassification model, and is it possible to keep these embeddings constant during fine-tuning?
The get_input_embeddings and set_input_embeddings functions do not seem to work with the XLNetForSequenceClassification class.
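For what it's worth, a sketch of what I have in mind; it assumes the `transformer.word_embedding` attribute of the current XLNet implementation, and the random tensor is only a stand-in for my own vectors:

```python
import torch
from transformers import XLNetForSequenceClassification

model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=2)

# The input token embedding matrix lives on the base transformer.
embedding = model.transformer.word_embedding          # nn.Embedding
custom_vectors = torch.randn_like(embedding.weight)   # stand-in for my own embeddings

with torch.no_grad():
    embedding.weight.copy_(custom_vectors)

# Keep the embeddings fixed while the rest of the model is fine-tuned.
embedding.weight.requires_grad = False
```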
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1751/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1750/comments | https://api.github.com/repos/huggingface/transformers/issues/1750/events | https://github.com/huggingface/transformers/issues/1750 | 518,641,253 | MDU6SXNzdWU1MTg2NDEyNTM= | 1,750 | How to calculate memory requirements of different GPT models? | {
"login": "Henry-E",
"id": 12613144,
"node_id": "MDQ6VXNlcjEyNjEzMTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12613144?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Henry-E",
"html_url": "https://github.com/Henry-E",
"followers_url": "https://api.github.com/users/Henry-E/followers",
"following_url": "https://api.github.com/users/Henry-E/following{/other_user}",
"gists_url": "https://api.github.com/users/Henry-E/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Henry-E/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Henry-E/subscriptions",
"organizations_url": "https://api.github.com/users/Henry-E/orgs",
"repos_url": "https://api.github.com/users/Henry-E/repos",
"events_url": "https://api.github.com/users/Henry-E/events{/privacy}",
"received_events_url": "https://api.github.com/users/Henry-E/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Current commercial GPU VRAM sizes are not large enough to process the XL model. You'll need probably around 32-64 GB of RAM in order to finetune successfully. So for now, you'll have to train on a CPU with a lot of memory attached.",
"How did you come to that number though? \r\n\r\nIn Thomas Wolf's medium post on training large models he mentioned gradient checkpointing for models that couldn't fit a batch onto a single gpu. Is it possible to implement something like for GPT-2 and friends?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,573 | 1,579 | 1,579 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
What's the best way to calculate how much GPU RAM is needed to fine-tune a given GPT size? The GPT-2 XL model is 6.5GB worth of parameters, but clearly this isn't the full story when it comes to fine-tuning. I've tried running a batch size of 1 on an 11GB 1080 Ti but that ran out of memory. My next step is to test it on a 2080 Ti with FP16 training enabled to further reduce memory requirements. It would be great if there were some back-of-the-envelope calculation to see in advance whether this would work, or whether I should instead go straight to a 16GB Titan V. Or maybe even that's too small!
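One common back-of-the-envelope estimate (my own rough numbers, ignoring activations, which grow with batch size and sequence length) just counts bytes per parameter for the weights, the gradients and Adam's moment estimates:

```python
# Rough fp32 + Adam estimate for GPT-2 XL, before any activations.
n_params = 1.5e9                 # ~1.5B parameters
bytes_per_param = 4 + 4 + 8      # weights + gradients + two Adam moments
total_gib = n_params * bytes_per_param / 2**30
print(f"~{total_gib:.0f} GiB of GPU memory before activations")   # ~22 GiB
```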
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1750/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1749/comments | https://api.github.com/repos/huggingface/transformers/issues/1749/events | https://github.com/huggingface/transformers/issues/1749 | 518,594,548 | MDU6SXNzdWU1MTg1OTQ1NDg= | 1,749 | gpt2 generation crashes when using `past` for some output lengths | {
"login": "joelb-git",
"id": 2745742,
"node_id": "MDQ6VXNlcjI3NDU3NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2745742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joelb-git",
"html_url": "https://github.com/joelb-git",
"followers_url": "https://api.github.com/users/joelb-git/followers",
"following_url": "https://api.github.com/users/joelb-git/following{/other_user}",
"gists_url": "https://api.github.com/users/joelb-git/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joelb-git/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joelb-git/subscriptions",
"organizations_url": "https://api.github.com/users/joelb-git/orgs",
"repos_url": "https://api.github.com/users/joelb-git/repos",
"events_url": "https://api.github.com/users/joelb-git/events{/privacy}",
"received_events_url": "https://api.github.com/users/joelb-git/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@joelb-git Can you check the lowest ```length``` at which this occurs? \r\n\r\nAlso, is this an issue that only occurs when ```past``` is used? \r\n\r\nCan you check if same thing happens when running\r\n\r\n```python examples/run_generation.py --prompt \"Who was Jim Henson ? Jim Henson was a\" --model_type gpt2 --model_name_or_path gpt2 --length 40```\r\n\r\non the unchanged ```run_generation.py``` file",
"@enzoampil\r\n\r\n> @joelb-git Can you check the lowest length at which this occurs?\r\n\r\n`--length 36` is the smallest length that produces the failure.\r\n\r\n> Also, is this an issue that only occurs when past is used?\r\n\r\nYes. Also, the results when using `past` when it does not crash are\r\ndifferent from without using `past`.\r\n\r\nWithout `past`:\r\n```\r\n$ python examples/run_generation.py --prompt \"Who was Jim Henson ? Jim Henson was a\" --model_type gpt2 --model_name_or_path gpt2 --length 20\r\n man of God. Jim Henson was a man who had built his family into this nation. His\r\n```\r\n\r\nWith `past`:\r\n```\r\n$ python examples/run_generation.py --prompt \"Who was Jim Henson ? Jim Henson was a\" --model_type gpt2 --model_name_or_path gpt2 --length 20\r\n man of almostWho was Jim Henson? Jim Henson was a man of almostWho was Jim\r\n```\r\n\r\n> Can you check if same thing happens when running\r\n> \r\n> python examples/run_generation.py --prompt \"Who was Jim Henson ? Jim Henson was a\" --model_type gpt2 --model_name_or_path gpt2 --length 40\r\n> \r\n> on the unchanged run_generation.py file\r\n\r\nOn the unchanged file, this does not crash.\r\n\r\nI also note a documentation issue:\r\n\r\nhttps://huggingface.co/transformers/model_doc/gpt2.html\r\n```\r\nOutputs:\r\npast:\r\n\r\nlist of torch.FloatTensor (one for each layer) of shape\r\n(batch_size, num_heads, sequence_length, sequence_length) ...\r\n```\r\n\r\nThe shape that I observe has an extra prefixed dimension:\r\n\r\n```\r\n(2, batch_size, num_heads, sequence_length, sequence_length)\r\n```\r\n",
"I have the exact same issue (although I am using distilgpt2).\r\n\r\nAfter some experimenting I found out that the generation (starting from BOS, i.e. without a prompt) always crashes when the generated sequence goes from length 44 to 45. At this point, I suspect that the hidden states in past are concatenated. For the sum of all numbers from 1 to 44 is 990. So if you add 45 again, you exceed the maximum number of positions in GPT2 (1024).\r\n\r\nThis would also explain why your results with and without past are different @joelb-git . Also it seems that the prompt (\"Who was Jim Henson? ...\") is repeated in the output when past is used. That might also be an artifact of concatenation?",
"I also get a similar issue. I suspect the error has something to do with the past attention weights being added to the present - since the error is a matrix multiplication shape mismatch. \r\n\r\nThe issue may lay in the modelling_gpt2.py starting on line 181.\r\n``` \r\nx = self.c_attn(x)\r\nquery, key, value = x.split(self.split_size, dim=2)\r\nquery = self.split_heads(query)\r\nkey = self.split_heads(key, k=True)\r\nvalue = self.split_heads(value)\r\nif layer_past is not None:\r\n past_key, past_value = layer_past[0].transpose(-2, -1), layer_past[1] # transpose back cf below\r\n key = torch.cat((past_key, key), dim=-1)\r\n value = torch.cat((past_value, value), dim=-2)\r\npresent = torch.stack((key.transpose(-2, -1), value))\r\n**attn_outputs = self._attn(query, key, value, attention_mask, head_mask)**\r\n``` \r\n\r\nWhere the error occurs in the method I have wrapped with **.\r\nGoing into the method, the error occurs in the line alsop marked with **:\r\n\r\n```\r\ndef _attn(self, q, k, v, attention_mask=None, head_mask=None):\r\n w = torch.matmul(q, k)\r\n if self.scale:\r\n w = w / math.sqrt(v.size(-1))\r\n nd, ns = w.size(-2), w.size(-1)\r\n b = self.bias[:, :, ns-nd:ns, :ns]\r\n **w = w * b - 1e4 * (1 - b)**\r\n```\r\nThe error occurs because this code tried to multiply **'w'** of shape ([1, 12, 1024, 2048]) by **'b'** of shape ([1, 1, 0, 1024]). However, I am unsure what are the appropiate shapes thus cannot provide the solution.\r\n\r\nHope this helps.\r\n\r\n",
"@adigoryl This is definitely related.\r\nOne can run this very simple script to reproduce the error:\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')\r\nmodel = GPT2LMHeadModel.from_pretrained('distilgpt2')\r\ndevice = 'cuda:0'\r\nmodel.to(device)\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\r\n \"<|endoftext|>\")).unsqueeze(0).to(device)\r\ninputs = {'input_ids': input_ids}\r\n\r\nwith torch.no_grad():\r\n past = None\r\n for i in range(45):\r\n print(i)\r\n logits, past = model(**inputs, past=past)\r\n logits = logits[0, -1]\r\n\r\n next_token = logits.argmax()\r\n input_ids = torch.cat([input_ids, next_token.view(1, 1)], dim=1)\r\n\r\n inputs = {'input_ids': input_ids}\r\nprint(tokenizer.decode(input_ids[0].tolist()))\r\n```\r\n\r\nIt works fine for `range(44)` but at `range(45)` it crashes with the error @adigoryl describes:\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 17, in <module>\r\n logits, past = model(**inputs, past=past)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py\", line 566, in forward\r\n head_mask=head_mask)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py\", line 470, in forward\r\n head_mask=head_mask[i])\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py\", line 250, in forward\r\n head_mask=head_mask)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py\", line 211, in forward\r\n attn_outputs = self._attn(query, key, value, attention_mask, head_mask)\r\n File \"/mounts/Users/martin/.local/lib/python3.6/site-packages/transformers/modeling_gpt2.py\", line 154, in _attn\r\n w = w * b - 1e4 * (1 - b)\r\nRuntimeError: The size of tensor a (1035) must match the size of tensor b (1024) at non-singleton dimension 3\r\n```\r\n\r\n`w` having length 1035 (which is the sum of all numbers from 1 to 45) also made me think that there is some concatenation going on that shouldn't be happening.",
"As I suspect a wrong idea about the interface and thus some concatenation going on that shouldn't be happening, I modified my example script in the following way:\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')\r\nmodel = GPT2LMHeadModel.from_pretrained('distilgpt2')\r\ndevice = 'cuda:0'\r\nmodel.to(device)\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\r\n \"<|endoftext|>\")).unsqueeze(0).to(device)\r\ninputs = {'input_ids': input_ids}\r\n\r\nwith torch.no_grad():\r\n past = None\r\n for i in range(45):\r\n print(i)\r\n logits, past = model(**inputs, past=past)\r\n logits = logits[0, -1]\r\n\r\n next_token = logits.argmax().view(1, 1)\r\n input_ids = torch.cat([input_ids, next_token], dim=1)\r\n\r\n inputs = {'input_ids': next_token}\r\nprint(tokenizer.decode(input_ids[0].tolist()))\r\n```\r\nSo only `next_token` is given as an input to the gpt2 model. This permits me to generate sequences of arbitrary length while using the `past` parameter.\r\nThe only thing I would still really like to know is where this \"bug\" came from in the first place: do we as users have a wrong idea about the interface and should we use it in a way where `past` and `input_ids` do not overlap? Or were we right about the interface from the start and there really *is* a bug?\r\n\r\nSurprisingly, both versions give me natural results when presented with the prompt `\"<|endoftext|>Who was Jim Henson? Jim Henson was a\"`:\r\nWithout modification: `Who was Jim Henson? Jim Henson was a great father and a great father.`\r\nWith my modification: `Who was Jim Henson? Jim Henson was a great actor, and he was a great actor. He was a great actor. He was a great`\r\n\r\nThe same when using `gpt2` instead of `distilgpt2`:\r\nWithout modification: `Who was Jim Henson? Jim Henson was a writer and director of the film \"The Last Man on Earth\" and the sequel to the film \"`\r\nWith modification: `Who was Jim Henson? Jim Henson was a writer, director, producer, and producer of the hit television series \"The Henson Report.\" He`\r\n\r\nSo how are we supposed to use it?",
"Hi, when the past is computed for certain tokens these tokens should not be passed as input ids. @mnschmit your last example displays correct usage of the `past` argument. I've clarified the documentation in d409aca, please let me know if it should be clarified further.\r\n\r\nThank you for looking into it.\r\n",
"@LysandreJik great, thank you!\r\nWith the extended documentation, it would be clear to me now but, additionally, a usage example might be useful for new users. Feel free to put my short example (or parts of it) at some appropriate place if you like!",
"You're right, having a snippet showing usage would definitely improve user experience on this point. I'll update the documentation later today, thank you!",
"I've added a section in the [`quickstart` section of the documentation](https://huggingface.co/transformers/quickstart.html#using-the-past). Feel free to re-open if it's still not clear enough.",
"That's a great new part of the documentation. Thank you!",
"Yes, indeed, thanks for that doc change! And thanks for all who helped diagnose.\r\n\r\nI applied the suggestions to my local version of `run_generation.py`.\r\nIt no longer crashes for large generation lengths, and generation is\r\nindeed now faster (about 1.7 times faster in my tests).",
"Here is the relevant portion of the modified `run_generation.py` script, changed for gpt2 only:\r\n\r\n```\r\ndef sample_sequence(model, length, context, num_samples=1, temperature=1, top_k=0, top_p=0.0, repetition_penalty=1.0,\r\n device='cpu'):\r\n context = torch.tensor(context, dtype=torch.long, device=device)\r\n context = context.unsqueeze(0).repeat(num_samples, 1)\r\n generated = context.clone().detach()\r\n past = None\r\n with torch.no_grad():\r\n for _ in range(length):\r\n output, past = model(context, past=past)\r\n next_token_logits = output[:, -1, :]\r\n filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p)\r\n next_token = torch.multinomial(F.softmax(filtered_logits, dim=-1), num_samples=1)\r\n generated = torch.cat((generated, next_token), dim=1)\r\n # when using `past`, the context for the next call should be only\r\n # the previous token\r\n context = next_token\r\n return generated\r\n```",
"I encouter similar issue, and find the problem is cased by input_ids and length, beacuse wrongly use of special token [CLS] when tokenize input_ids ",
"I am trying a BERT-GPT2(not BART) architecture in which I am passing BERT's hidden states of 12 layers to past. After reading doc's I find that past should be of different shape as compared to shape of hidden states from BERT. Since I need BERT's encoding layer for encoding and GPT2 decoding function, tell me how should I manage it."
] | 1,573 | 1,597 | 1,573 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): gpt2
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details)
I am using `run_generation.py` to generate text from gpt2, but with the code slightly changed to make use of `past` to cache hidden states. It crashes in some cases.
## To Reproduce
Steps to reproduce the behavior:
Apply the three line change to the `run_generation.py` script:
```
$ git checkout f88c104d8 . # current head of master
$ git diff
diff --git a/examples/run_generation.py b/examples/run_generation.py
index 2d91766..bfbf68a 100644
--- a/examples/run_generation.py
+++ b/examples/run_generation.py
@@ -113,9 +113,10 @@ def sample_sequence(model, length, context, num_samples=1, temperature=1, top_k=
context = context.unsqueeze(0).repeat(num_samples, 1)
generated = context
with torch.no_grad():
+ past = None
for _ in trange(length):
- inputs = {'input_ids': generated}
+ inputs = {'input_ids': generated, 'past': past}
if is_xlnet:
# XLNet is a direct (predict same token, not next token) and bi-directional model by default
# => need one additional dummy token in the input (will be masked), attention mask and target mapping (see model docstring)
@@ -136,6 +137,7 @@ def sample_sequence(model, length, context, num_samples=1, temperature=1, top_k=
inputs["langs"] = torch.tensor([xlm_lang] * inputs["input_ids"].shape[1], device=device).view(1, -1)
outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states)
+ past = outputs[1]
next_token_logits = outputs[0][:, -1, :] / (temperature if temperature > 0 else 1.)
# repetition penalty from CTRL (https://arxiv.org/abs/1909.05858)
```
```
$ python examples/run_generation.py --prompt "Who was Jim Henson ? Jim Henson was a" --model_type gpt2 --model_name_or_path gpt2 --length 40
...
11/06/2019 11:38:36 - INFO - __main__ - Namespace(device=device(type='cpu'), length=40, model_name_or_path='gpt2', model_type='gpt2', n_gpu=0, no_cuda=False, num_samples=1, padding_text='', prompt='Who was Jim Henson ? Jim Henson was a', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, top_k=0, top_p=0.9, xlm_lang='')
88%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 35/40 [00:04<00:00, 8.68it/s]
Traceback (most recent call last):
File "examples/run_generation.py", line 262, in <module>
main()
File "examples/run_generation.py", line 247, in main
device=args.device,
File "examples/run_generation.py", line 139, in sample_sequence
outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states)
File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Users/joelb/views/transformers/transformers/modeling_gpt2.py", line 546, in forward
inputs_embeds=inputs_embeds)
File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Users/joelb/views/transformers/transformers/modeling_gpt2.py", line 438, in forward
position_embeds = self.wpe(position_ids)
File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/Users/joelb/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/functional.py", line 1484, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range: Tried to access index 1024 out of table with 1023 rows. at /Users/distiller/project/conda/conda-bld/pytorch_1570710797334/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:418
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The crash does not occur with the default `--length 20`.
I'm trying to make this work for faster generation speed. So I tested the change with a smaller length that does not crash, but I observe that generation with caching is slightly slower than without it.
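For reference, a minimal sketch of the usage that avoids the crash, as clarified in the discussion above: once `past` is being passed, only the newly sampled token is fed as input (greedy decoding here just to keep the sketch short):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

generated = torch.tensor([tokenizer.encode("Who was Jim Henson ? Jim Henson was a")])
context, past = generated, None
with torch.no_grad():
    for _ in range(40):
        logits, past = model(context, past=past)[:2]
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=1)
        context = next_token          # only the new token; `past` carries the history
print(tokenizer.decode(generated[0].tolist()))
```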
## Environment
```
Platform Darwin-18.7.0-x86_64-i386-64bit
Python 3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags/RELEASE_401/final)]
PyTorch 1.3.0
Tensorflow 2.0.0
```
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1749/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1748/comments | https://api.github.com/repos/huggingface/transformers/issues/1748/events | https://github.com/huggingface/transformers/issues/1748 | 518,586,555 | MDU6SXNzdWU1MTg1ODY1NTU= | 1,748 | Released OpenAI GPT-2 1.5B model | {
"login": "TheEdoardo93",
"id": 19664571,
"node_id": "MDQ6VXNlcjE5NjY0NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19664571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TheEdoardo93",
"html_url": "https://github.com/TheEdoardo93",
"followers_url": "https://api.github.com/users/TheEdoardo93/followers",
"following_url": "https://api.github.com/users/TheEdoardo93/following{/other_user}",
"gists_url": "https://api.github.com/users/TheEdoardo93/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TheEdoardo93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheEdoardo93/subscriptions",
"organizations_url": "https://api.github.com/users/TheEdoardo93/orgs",
"repos_url": "https://api.github.com/users/TheEdoardo93/repos",
"events_url": "https://api.github.com/users/TheEdoardo93/events{/privacy}",
"received_events_url": "https://api.github.com/users/TheEdoardo93/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, GPT-2 XL was added yesterday with commit d7d36181fdefdabadc53adf51bed4a2680f5880a",
"Yes, sorry for opening a new issue! I didn't see this commit. I'll close the issue! Thank you for your time!"
] | 1,573 | 1,573 | 1,573 | NONE | null | ## 🚀 Feature
- [ ] Blog: [https://openai.com/blog/gpt-2-1-5b-release/](url)
- [ ] Code: [https://github.com/openai/gpt-2](url)
- [ ] Dataset: [https://github.com/openai/gpt-2-output-dataset](url)
## Motivation
A bigger model for text generation which is more human than the OpenAI GPT-2 774M parameters
## Additional context
Maybe this is not the most requested feature by AI researchers. Maybe, it's better to implement and standardize **new** NLG models! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1748/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1748/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1747/comments | https://api.github.com/repos/huggingface/transformers/issues/1747/events | https://github.com/huggingface/transformers/issues/1747 | 518,574,007 | MDU6SXNzdWU1MTg1NzQwMDc= | 1,747 | How to use gpt-2-xl with run_generation.py | {
"login": "avianion",
"id": 37309215,
"node_id": "MDQ6VXNlcjM3MzA5MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/37309215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avianion",
"html_url": "https://github.com/avianion",
"followers_url": "https://api.github.com/users/avianion/followers",
"following_url": "https://api.github.com/users/avianion/following{/other_user}",
"gists_url": "https://api.github.com/users/avianion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avianion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avianion/subscriptions",
"organizations_url": "https://api.github.com/users/avianion/orgs",
"repos_url": "https://api.github.com/users/avianion/repos",
"events_url": "https://api.github.com/users/avianion/events{/privacy}",
"received_events_url": "https://api.github.com/users/avianion/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You're probably using the command:\r\n```\r\npython run_generation.py --model_type=gpt2 --model_name_or_path=gpt2\r\n```\r\n\r\nto use GPT2-XL for generation you would change the last argument to `gpt2-xl`:\r\n```\r\npython run_generation.py --model_type=gpt2 --model_name_or_path=gpt2-xl\r\n```",
"ah. its because the latest version isnt available on pypi yet. and i pip installed it. 2.1.1 is the version I've got installed.\r\n\r\nIs there any way to force pip to install the latest version that was commited 21 hours ago.",
"Ah yes we're yet to release a pypi version with GPT2-XL. To install from source you would have to do:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
" pip install transformers==3.4.0\r\nthis version of transforrmers solved my problem: OSError: Model name 'ckiplab/gpt2-base-chinese' was not found in model name list (gpt2, gpt2-medium, gpt2-large, gpt2-xl, distilgpt2). We assumed 'ckiplab/gpt2-base-chinese' was a path or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.\r\n"
] | 1,573 | 1,608 | 1,578 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I am currently running run_generation.py with the model set to gpt2; however, I am not satisfied with the results from the small model.
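What I would expect to be able to do (assuming a transformers version that already ships the `gpt2-xl` weights) is something like:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Same API as the small model, only the pretrained identifier changes.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")
```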
How can I use the big model instead? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1747/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1746/comments | https://api.github.com/repos/huggingface/transformers/issues/1746/events | https://github.com/huggingface/transformers/pull/1746 | 518,565,179 | MDExOlB1bGxSZXF1ZXN0MzM3NDM3MTc2 | 1,746 | Fixing models inputs_embeds | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=h1) Report\n> Merging [#1746](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f88c104d8f79e78a98c8ce6c1f4a78db73142eab?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `91.3%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1746 +/- ##\n==========================================\n+ Coverage 84.03% 84.03% +<.01% \n==========================================\n Files 94 94 \n Lines 14021 14032 +11 \n==========================================\n+ Hits 11782 11792 +10 \n- Misses 2239 2240 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/conftest.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL2NvbmZ0ZXN0LnB5) | `93.33% <100%> (+3.33%)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.84% <100%> (+0.01%)` | :arrow_up: |\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.77% <75%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_bert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1746/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2JlcnRfdGVzdC5weQ==) | `96.5% <91.66%> (-0.43%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=footer). Last update [f88c104...fd29a30](https://codecov.io/gh/huggingface/transformers/pull/1746?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,573 | 1,573 | 1,573 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1746/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1746",
"html_url": "https://github.com/huggingface/transformers/pull/1746",
"diff_url": "https://github.com/huggingface/transformers/pull/1746.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1746.patch",
"merged_at": 1573067028000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1745/comments | https://api.github.com/repos/huggingface/transformers/issues/1745/events | https://github.com/huggingface/transformers/issues/1745 | 518,494,812 | MDU6SXNzdWU1MTg0OTQ4MTI= | 1,745 | How to stop wordpiece-tokenizing in BertTokenizer? | {
"login": "RichardHWD",
"id": 35796793,
"node_id": "MDQ6VXNlcjM1Nzk2Nzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/35796793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RichardHWD",
"html_url": "https://github.com/RichardHWD",
"followers_url": "https://api.github.com/users/RichardHWD/followers",
"following_url": "https://api.github.com/users/RichardHWD/following{/other_user}",
"gists_url": "https://api.github.com/users/RichardHWD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RichardHWD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RichardHWD/subscriptions",
"organizations_url": "https://api.github.com/users/RichardHWD/orgs",
"repos_url": "https://api.github.com/users/RichardHWD/repos",
"events_url": "https://api.github.com/users/RichardHWD/events{/privacy}",
"received_events_url": "https://api.github.com/users/RichardHWD/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, that's how the model works.\r\n\r\nShort of creating your own model (with you own custom tokenizer) there's no way to do what you want to do.",
"Okkkkkk, it is unreasonable ...."
] | 1,573 | 1,573 | 1,573 | NONE | null | The default BertTokenizer may split one word into two or three pieces, which is harmful for token labeling tasks because it makes the tokenized sequence longer than the original word sequence. It is difficult to fuse these pieces back into one representation.
So, how can I stop this behaviour in BertTokenizer? Or are there alternative tokenizers that do not split words?
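For context, the usual workaround I have seen (the same idea the run_ner.py example uses) keeps WordPiece but only labels the first piece of each word - a sketch with a made-up sentence and tag set:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

words = ["Jim", "Henson", "visited", "Otonga"]
labels = ["B-PER", "I-PER", "O", "B-LOC"]   # invented example

tokens, aligned_labels = [], []
for word, label in zip(words, labels):
    pieces = tokenizer.tokenize(word)
    tokens.extend(pieces)
    # Label only the first WordPiece; mark the rest so the loss can ignore them.
    aligned_labels.extend([label] + ["X"] * (len(pieces) - 1))
```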
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1745/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1744/comments | https://api.github.com/repos/huggingface/transformers/issues/1744/events | https://github.com/huggingface/transformers/issues/1744 | 518,489,015 | MDU6SXNzdWU1MTg0ODkwMTU= | 1,744 | F1 socre is zero while loss is about 0.12xx when using run_ner.py to fine tuning bert model | {
"login": "Sunnycheey",
"id": 32103564,
"node_id": "MDQ6VXNlcjMyMTAzNTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32103564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sunnycheey",
"html_url": "https://github.com/Sunnycheey",
"followers_url": "https://api.github.com/users/Sunnycheey/followers",
"following_url": "https://api.github.com/users/Sunnycheey/following{/other_user}",
"gists_url": "https://api.github.com/users/Sunnycheey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sunnycheey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sunnycheey/subscriptions",
"organizations_url": "https://api.github.com/users/Sunnycheey/orgs",
"repos_url": "https://api.github.com/users/Sunnycheey/repos",
"events_url": "https://api.github.com/users/Sunnycheey/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sunnycheey/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Sunnycheey I had the same issue when using my own data and labels. However, I fixed this once I ensured that the training data (train.txt) was in the correct format: WordToken/space/label, newline after every sentence (fullstop). \r\n\r\nWhile O\r\nvisiting O\r\nOtonga B-Sc\r\nprimary I-Sc\r\nhe O\r\nnoticed O\r\nan O\r\nanomaly O\r\n. O\r\n\r\nThe O\r\nnext O\r\nsentence O\r\netc...\r\n\r\n\r\n",
"Thanks for your reply. \r\n\r\nIs the fullstop matters? I use spacy to get sentences from document, so there may not be fullstop in the end of sentence...\r\n\r\nBTW, I have used training dataset to test the model trained, and the f1 score is 1 (not 0)...",
"I got it to work with fullstop and newline as above.\r\nI use stanfordnlp to prepare the data, but it is probably fine to use SpaCy. Make sure you have a separate folder for the output too.",
"@jensam It turns out if all the predict result is O (in BIO annotation scheme), the f1 score tends to be 0.\r\n\r\nThanks anyway."
] | 1,573 | 1,574 | 1,574 | NONE | null | ## ❓ Questions & Help
I want to use run_ner.py for a sequence labeling task, so I defined a few label classes (e.g., B-School_name) and annotated a small amount of data (about 100 annotations, for testing) from the dataset. Finally, I used run_ner.py to do the fine-tuning, but the f1 score and precision are both zero (while the loss is not). Have I done something wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1744/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1743/comments | https://api.github.com/repos/huggingface/transformers/issues/1743/events | https://github.com/huggingface/transformers/pull/1743 | 518,363,448 | MDExOlB1bGxSZXF1ZXN0MzM3MjcxMTEw | 1,743 | Unlimited sequence length in Bert for QA | {
"login": "realEmjot",
"id": 43821946,
"node_id": "MDQ6VXNlcjQzODIxOTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/43821946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realEmjot",
"html_url": "https://github.com/realEmjot",
"followers_url": "https://api.github.com/users/realEmjot/followers",
"following_url": "https://api.github.com/users/realEmjot/following{/other_user}",
"gists_url": "https://api.github.com/users/realEmjot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realEmjot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realEmjot/subscriptions",
"organizations_url": "https://api.github.com/users/realEmjot/orgs",
"repos_url": "https://api.github.com/users/realEmjot/repos",
"events_url": "https://api.github.com/users/realEmjot/events{/privacy}",
"received_events_url": "https://api.github.com/users/realEmjot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,573 | 1,573 | 1,573 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1743/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1743",
"html_url": "https://github.com/huggingface/transformers/pull/1743",
"diff_url": "https://github.com/huggingface/transformers/pull/1743.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1743.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1742/comments | https://api.github.com/repos/huggingface/transformers/issues/1742/events | https://github.com/huggingface/transformers/issues/1742 | 518,262,673 | MDU6SXNzdWU1MTgyNjI2NzM= | 1,742 | Out of Memory (OOM) when repeatedly running large models | {
"login": "josiahdavis",
"id": 6405428,
"node_id": "MDQ6VXNlcjY0MDU0Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6405428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josiahdavis",
"html_url": "https://github.com/josiahdavis",
"followers_url": "https://api.github.com/users/josiahdavis/followers",
"following_url": "https://api.github.com/users/josiahdavis/following{/other_user}",
"gists_url": "https://api.github.com/users/josiahdavis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josiahdavis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josiahdavis/subscriptions",
"organizations_url": "https://api.github.com/users/josiahdavis/orgs",
"repos_url": "https://api.github.com/users/josiahdavis/repos",
"events_url": "https://api.github.com/users/josiahdavis/events{/privacy}",
"received_events_url": "https://api.github.com/users/josiahdavis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This behavior is expected. `pytorch.cuda.empty_cache()` will free the memory that *can* be freed, think of it as a garbage collector.\r\n\r\nI assume the ˋmodelˋ variable contains the pretrained model. Since the variable doesn’t get out of scope, the reference to the object in the memory of the GPU still exists and the latter is thus not freed by `empty_cache()`.\r\n\r\nTry executing ˋdel modelˋ before ˋempty_cache()` to explicitly delete the variable and remove all reference to objects in the GPU’s memory.",
"Thank you for taking the time to review and comment! That makes sense, however, doesn't my code (lines 3rd and 4th from end below) execute `del model` before calling `torch.cuda.empty_cache()`?\r\n\r\n```python\r\nshow_gpu('Initial GPU memory usage:')\r\nfor i in range(2):\r\n model, optimizer, scheduler = get_training_obj(params)\r\n show_gpu(f'{i}: GPU memory usage after loading training objects:')\r\n for epoch in range(1):\r\n epoch_start = time.time()\r\n model.train()\r\n for batch in dp.train_dataloader:\r\n xb,mb,_,yb = tuple(t.to(params['device']) for t in batch)\r\n outputs = model(input_ids = xb, attention_mask = mb, labels = yb)\r\n loss = outputs[0]\r\n loss.backward()\r\n optimizer.step()\r\n scheduler.step()\r\n optimizer.zero_grad()\r\n show_gpu(f'{i}: GPU memory usage after training model:')\r\n del model, optimizer, scheduler, loss, outputs ## <<<<---- HERE\r\n torch.cuda.empty_cache() ## <<<<---- AND HERE\r\n torch.cuda.synchronize()\r\n show_gpu(f'{i}: GPU memory usage after clearing cache:')\r\n```",
"(You are correct that the variable `model` contains the pre-trained model.) ",
"I updated the code to print out GPU usage after loading the batch to GPU and after completing the forward pass for the first 5 batches of each run. It seems the big memory jumps occur during the first and second forward pass and the second loading of the batch to the GPU. Aside from these events, it seems the memory usage is relatively constant across a given training run.\r\n\r\n```\r\nInitial GPU memory usage: 0.0% (0 out of 16130)\r\n0: GPU memory usage after loading training objects: 14.7% (2377 out of 16130)\r\n0: GPU memory usage after loading batch 0: 14.7% (2377 out of 16130)\r\n0: GPU memory usage after forward pass 0: 43.6% (7029 out of 16130)\r\n0: GPU memory usage after loading batch 1: 49.1% (7927 out of 16130)\r\n0: GPU memory usage after forward pass 1: 70.0% (11289 out of 16130)\r\n0: GPU memory usage after loading batch 2: 70.6% (11393 out of 16130)\r\n0: GPU memory usage after forward pass 2: 70.6% (11393 out of 16130)\r\n0: GPU memory usage after loading batch 3: 70.6% (11393 out of 16130)\r\n0: GPU memory usage after forward pass 3: 70.6% (11393 out of 16130)\r\n0: GPU memory usage after loading batch 4: 70.6% (11393 out of 16130)\r\n0: GPU memory usage after forward pass 4: 70.6% (11393 out of 16130)\r\n0: GPU memory usage after training model: 70.8% (11415 out of 16130)\r\n0: GPU memory usage after clearing cache: 41.8% (6741 out of 16130)\r\n1: GPU memory usage after loading training objects: 50.2% (8093 out of 16130)\r\n1: GPU memory usage after loading batch 0: 50.2% (8093 out of 16130)\r\n1: GPU memory usage after forward pass 0: 78.6% (12673 out of 16130)\r\n1: GPU memory usage after loading batch 1: 84.3% (13593 out of 16130)\r\nTraceback (most recent call last):\r\n File \"14_OOM_submission.py\", line 102, in <module>\r\n outputs = model(input_ids = xb, attention_mask = mb, labels = yb)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py\", line 328, in forward\r\n head_mask=head_mask)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py\", line 181, in forward\r\n head_mask=head_mask)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 627, in forward\r\n head_mask=head_mask)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 348, in forward\r\n layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 328, in forward\r\n intermediate_output = self.intermediate(attention_output)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 300, in 
forward\r\n hidden_states = self.intermediate_act_fn(hidden_states)\r\n File \"/home/ubuntu/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py\", line 128, in gelu\r\n return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))\r\nRuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 15.75 GiB total capacity; 14.24 GiB already allocated; 8.88 MiB free; 476.01 MiB cached)\r\n```",
"Yes I'm sorry, I misread the code the first time. Did you try clearing the last batch as well?",
"I did, same result :( ",
"So if you delete xb, mb, yb you still get the same memory usage after clearing the cache?",
"Yes. Just for kicks I also tried deleting xb, mb, yb after every forward pass (i.e., within the batch loop) and also clearing the cache and and confirmed that there is no difference here either.",
"Ok, I may have an idea. You can have a look at the tensors that are still tracked by python's garbage collector using something along the lines of:\r\n\r\n```python\r\nimport gc\r\n\r\nfor tracked_object in gc.get_objects():\r\n if torch.is_tensor(tracked_object):\r\n print(\"{} {} {}\".format(\r\n type(tracked_object).__name__,\r\n \"GPU\" if tracked_object.is_cuda else \"\" ,\r\n \" pinned\" if tracked_object.is_pinned() else \"\",\r\n))\r\n```",
"Thanks! I'm digging into it now... there is a lot that is showing up. Here is the output right after getting the OOM error.\r\n\r\n[output.txt](https://github.com/huggingface/transformers/files/3814757/output.txt)\r\n",
"Oops, quick update I just made, the `\" pinned\" if...` statement part should be a method not an attribute. Updating it produces a different output, showing that none of the objects are pinned.\r\n\r\n```python\r\nimport gc\r\n\r\nfor tracked_object in gc.get_objects():\r\n if torch.is_tensor(tracked_object):\r\n print(\"{} {} {}\".format(\r\n type(tracked_object).__name__,\r\n \"GPU\" if tracked_object.is_cuda else \"\" ,\r\n \" pinned\" if tracked_object.is_pinned() else \"\",\r\n))\r\n```\r\n\r\n[output.txt](https://github.com/huggingface/transformers/files/3814806/output.txt)\r\n",
"This is the core of my confusion. Why are their so many objects in memory after I have deleted everything I can think of in my script. I did a quick summary of the output of the code you just provided after running a single training job and deleting the aforementioned objects (`del model, optimizer, scheduler, outputs, loss, xb, mb, yb, batch`) and emptying the cache (`torch.cuda.empty_cache()`):\r\n\r\n```python\r\nimport pandas as pd\r\nimport gc\r\nresult = []\r\n\r\nfor tracked_object in gc.get_objects():\r\n if torch.is_tensor(tracked_object):\r\n shape = tracked_object.shape\r\n result.append({\r\n 'name': type(tracked_object).__name__,\r\n '1d': len(shape)==1,\r\n '2d': len(shape)==2,\r\n 'nrows': shape[0],\r\n 'ncols': shape[1] if (len(shape) > 1) else None,\r\n 'gpu': tracked_object.is_cuda,\r\n 'pinned': tracked_object.is_pinned()\r\n })\r\n \r\nd = pd.DataFrame(result)\r\nd.groupby('name')['gpu', 'pinned', '1d', '2d'].sum()\r\n```\r\n\r\n<img width=\"300\" alt=\"Screen Shot 2019-11-06 at 10 41 50 PM\" src=\"https://user-images.githubusercontent.com/6405428/68307729-ac373a00-00e6-11ea-81f2-a814a604365b.png\">\r\n",
"Do you know which objects they correspond to?",
"I think they correspond to the parameters and gradients of the `roberta-large` model. (So strange that they are still there after I deleted the model at the end of the training job.)\r\n\r\nHere is a single layer of the model:\r\n\r\n```\r\n(0): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(in_features=1024, out_features=1024, bias=True)\r\n (key): Linear(in_features=1024, out_features=1024, bias=True)\r\n (value): Linear(in_features=1024, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (output): BertSelfOutput(\r\n (dense): Linear(in_features=1024, out_features=1024, bias=True)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): BertIntermediate(\r\n (dense): Linear(in_features=1024, out_features=4096, bias=True)\r\n )\r\n (output): BertOutput(\r\n (dense): Linear(in_features=4096, out_features=1024, bias=True)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n```\r\nHere is the summary of what's being stored in GPU:\r\n\r\n<img width=\"800\" alt=\"Screen Shot 2019-11-06 at 11 46 04 PM\" src=\"https://user-images.githubusercontent.com/6405428/68313497-a560f500-00ef-11ea-95c2-24799b5f9bd4.png\">",
"I think this person is on to something: https://discuss.pytorch.org/t/releasing-gpu-memory-after-deleting-model/48167 ",
"Mmm I stared for a really long time at your code, and it seems to be that the memory of the `_` element of the tuple is never freed (token type ids?). I believe that in Python its throwaway meaning is a convention, and the memory is effectively allocated. That’s the only think I can see that has not been freed with your previous attempts.",
"Thank you so much for looking into it! \r\n\r\nThe token type ids was a good idea, though it does not appear to impact anything.\r\n\r\nHowever, I made some progress. What I discovered:\r\n\r\n**I am unable to remove the model parameters from GPU when I load the `WarmupLinearSchedule` from `transformers`: e.g.,**\r\n\r\n```\r\nfrom transformers import WarmupLinearSchedule\r\n...\r\nscheduler = WarmupLinearSchedule(optimizer, warmup_steps=warmup_steps, t_total=params['total_steps'])\r\n```\r\n\r\nHowever, when I use a standard scheduler (e.g., `StepLR`) I have no issue with deleting the model from the GPU:\r\n\r\n```\r\nscheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)\r\n\r\n```",
"Here is the code for `WarmupLinearSchedule` for reference that your team wrote. Seems pretty reasonable to me, not sure why it would be causing an issue when a standard learning rate scheduler is not (e.g., both [`LambdaLR`](https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#LambdaLR) and [`StepLR`](https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#StepLR) inherit from `_LRScheduler`):\r\n\r\n```python\r\nclass WarmupLinearSchedule(LambdaLR):\r\n \"\"\" Linear warmup and then linear decay.\r\n Linearly increases learning rate from 0 to 1 over `warmup_steps` training steps.\r\n Linearly decreases learning rate from 1. to 0. over remaining `t_total - warmup_steps` steps.\r\n \"\"\"\r\n def __init__(self, optimizer, warmup_steps, t_total, last_epoch=-1):\r\n self.warmup_steps = warmup_steps\r\n self.t_total = t_total\r\n super(WarmupLinearSchedule, self).__init__(optimizer, self.lr_lambda, last_epoch=last_epoch)\r\n\r\n def lr_lambda(self, step):\r\n if step < self.warmup_steps:\r\n return float(step) / float(max(1, self.warmup_steps))\r\n return max(0.0, float(self.t_total - step) / float(max(1.0, self.t_total - self.warmup_steps)))\r\n\r\n```",
"@LysandreJik any idea why that might be?",
"@LysandreJik thanks for taking a look! To summarise my issue in a minimal example I am attaching screenshots + complete code for a minimal reproducible example that gets at the crux of the issue (w/downloading any data).\r\n\r\n<img width=\"1000\" alt=\"Screen Shot 2019-11-08 at 9 08 51 AM\" src=\"https://user-images.githubusercontent.com/6405428/68441083-f49a4900-0207-11ea-8042-708b5b852f4a.png\">\r\n\r\n<img width=\"986\" alt=\"Screen Shot 2019-11-08 at 9 09 59 AM\" src=\"https://user-images.githubusercontent.com/6405428/68441125-1c89ac80-0208-11ea-80c7-851dd9b01254.png\">\r\n\r\n<img width=\"1040\" alt=\"Screen Shot 2019-11-08 at 9 10 31 AM\" src=\"https://user-images.githubusercontent.com/6405428/68441130-20b5ca00-0208-11ea-8c4a-c7f80ac68c91.png\">\r\n\r\nHere is the complete code to reproduce these screenshots [code.txt](https://github.com/huggingface/transformers/files/3822500/code.txt).\r\n\r\n",
"Hi, there indeed seems to be an issue with the schedulers. Probably related to #1134",
"I don’t see any circular references in your code but a `gc.collect()` after `del` cannot hurt—garbage collection is clearly not what takes the longest here. Let me know if this solves the problem.",
"\r\n\r\nSo I instantiated `WarmupLinearSchedule` with `AdamW` and this is what `objgraph` gives me: there is a circular reference that seems to come with our implementation. `gc.collect()` should work as a temporary workaround, if that (I hope) is the issue.\r\n\r\nEdit: Would you mind adding the following right after `gc.collect()` and giving us the result?\r\n\r\n```python\r\nfor item in gc.garbage:\r\n print(item)\r\n```\r\n",
"Thank you so much! This order of operations does the trick for me, removing the parameters and gradients from the GPU.\r\n\r\n1. delete objects\r\n2. `gc.collect()`\r\n3. `torch.cuda.empty_cache()`\r\n\r\nStrangely, running your code snippet (`for item in gc.garbage: print(item)`) after deleting the objects (but not calling `gc.collect()` or `empty_cache()`) doesn't print out anything.",
"Regarding the aforementioned 1. I ran a couple of quick tests. The only objects I need to delete in order to delete the model parameters and gradients is the optimizer and scheduler. (I do not need to delete the outputs, loss, batch data.) As you can see from the screengrab below the only objects on GPU are xb, mb, yb, ([32,50]) tt ([32]), loss ([]), output ([32,2]) and the parameters and gradients are not there.\r\n\r\n<img width=\"681\" alt=\"Screen Shot 2019-11-09 at 10 03 56 AM\" src=\"https://user-images.githubusercontent.com/6405428/68521154-26c7ab80-02d9-11ea-9a52-454e9b1f8721.png\">\r\n",
"> 1. delete objects\r\n> 2. `gc.collect()`\r\n> 3. `torch.cuda.empty_cache()`\r\n> \r\n> Strangely, running your code snippet (`for item in gc.garbage: print(item)`) after deleting the objects (but not calling `gc.collect()` or `empty_cache()`) doesn't print out anything.\r\n\r\nSorry, maybe I wasn’t completely clear: you need to run it right *after* garbage collection.\r\n\r\nDoes that completely solve your Out Of Memory issue? Thank you for raising the issue and helping us find the solution :)\r\n\r\nThis is apparently bothering other users, but I am not 100% sure we can *easily* do something about it; Would you mind trying to initialize the scheduler with the following function and tell me if you still have a memory error? (without using `gc.collect()`)\r\n\r\n```python\r\ndef warmup_linear_schedule(optimizer, warmup_steps, t_total, last_epoch=-1):\r\n def lr_lambda(step):\r\n if step < warmup_steps:\r\n return float(step) / float(max(1, warmup_steps))\r\n return max(0.0, float(t_total - step) / float(max(1.0, t_total - warmup_steps)))\r\n\r\n return LambdaLR(optimizer, lr_lambda, last_epoch=-1)\r\n```\r\n\r\nThere might be a reference mixup when we subclass `LambdaLR`; using a closure may avoid this.",
"Yes, it does solve my issue! I am currently running some training jobs on my GPU (thanks to your help!). I will try out the closure approach once I finish up with those and get back with you on this thread.",
"Thanks guys for the great discussion.\r\n(leaving a comment so i can access this thread whenever need, sorry for a useless comment)",
"(Apologies for the delayed response.) \r\n\r\nI just tested it out and it worked as expected. No need to call `gc.collect()` now when removing parameters and gradients from GPU. Thank you so much @rlouf, I commend you for your attentiveness to resolving the issue and updating your examples."
] | 1,573 | 1,574 | 1,574 | NONE | null | ## ❓ Any advice for freeing up GPU memory after training a large model (e.g., roberta-large)?
### System Info
```
Platform Linux-4.4.0-1096-aws-x86_64-with-debian-stretch-sid
Python 3.6.6 |Anaconda, Inc.| (default, Jun 28 2018, 17:14:51)
[GCC 7.2.0]
PyTorch 1.3.0
AWS EC2 p3.2xlarge (single GPU)
```
My current objective is to run multiple `roberta-large` training jobs sequentially from the same Python script (i.e., for a simple HPO search). Even after deleting all of the objects and clearing the CUDA cache once the first training job ends, I am still stuck at 41% GPU memory usage (compared to 15% before starting the training process).
I am attaching a reproducible example that triggers the error:
### Here is the relevant code
```python
show_gpu('Initial GPU memory usage:')
for i in range(2):
    model, optimizer, scheduler = get_training_obj(params)
    show_gpu(f'{i}: GPU memory usage after loading training objects:')
    for epoch in range(1):
        epoch_start = time.time()
        model.train()
        for batch in dp.train_dataloader:
            xb,mb,_,yb = tuple(t.to(params['device']) for t in batch)
            outputs = model(input_ids = xb, attention_mask = mb, labels = yb)
            loss = outputs[0]
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
    show_gpu(f'{i}: GPU memory usage after training model:')
    del model, optimizer, scheduler, loss, outputs
    torch.cuda.empty_cache()
    torch.cuda.synchronize()
    show_gpu(f'{i}: GPU memory usage after clearing cache:')
```
### Here is the output and full traceback
```
Initial GPU memory usage: 0.0% (0 out of 16130)
0: GPU memory usage after loading training objects: 14.7% (2377 out of 16130)
0: GPU memory usage after training model: 70.8% (11415 out of 16130)
0: GPU memory usage after clearing cache: 41.8% (6741 out of 16130)
1: GPU memory usage after loading training objects: 50.2% (8093 out of 16130)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-7-20a3bdec1bf4> in <module>()
8 for batch in dp.train_dataloader:
9 xb,mb,_,yb = tuple(t.to(params['device']) for t in batch)
---> 10 outputs = model(input_ids = xb, attention_mask = mb, labels = yb)
11 loss = outputs[0]
12 loss.backward()
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, labels)
326 token_type_ids=token_type_ids,
327 position_ids=position_ids,
--> 328 head_mask=head_mask)
329 sequence_output = outputs[0]
330 logits = self.classifier(sequence_output)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask)
179 token_type_ids=token_type_ids,
180 position_ids=position_ids,
--> 181 head_mask=head_mask)
182
183
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask)
625 encoder_outputs = self.encoder(embedding_output,
626 extended_attention_mask,
--> 627 head_mask=head_mask)
628 sequence_output = encoder_outputs[0]
629 pooled_output = self.pooler(sequence_output)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask)
346 all_hidden_states = all_hidden_states + (hidden_states,)
347
--> 348 layer_outputs = layer_module(hidden_states, attention_mask, head_mask[i])
349 hidden_states = layer_outputs[0]
350
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states, attention_mask, head_mask)
326 attention_outputs = self.attention(hidden_states, attention_mask, head_mask)
327 attention_output = attention_outputs[0]
--> 328 intermediate_output = self.intermediate(attention_output)
329 layer_output = self.output(intermediate_output, attention_output)
330 outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, hidden_states)
298 def forward(self, hidden_states):
299 hidden_states = self.dense(hidden_states)
--> 300 hidden_states = self.intermediate_act_fn(hidden_states)
301 return hidden_states
302
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in gelu(x)
126 Also see https://arxiv.org/abs/1606.08415
127 """
--> 128 return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
129
130 def gelu_new(x):
RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 15.75 GiB total capacity; 14.24 GiB already allocated; 8.88 MiB free; 476.01 MiB cached)
```
### Appendix / complete code to reproduce
```python
import platform; print("Platform", platform.platform())
import sys; print("Python", sys.version)
import torch; print("PyTorch", torch.__version__)
from __future__ import absolute_import, division, print_function
import glob
import logging
import os
import time
import json
import random
import numpy as np
import pandas as pd
from random import sample, seed
import torch
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler,TensorDataset
from torch.utils.data.distributed import DistributedSampler
from transformers import RobertaConfig, RobertaForSequenceClassification, RobertaTokenizer
from transformers import DistilBertConfig, DistilBertForSequenceClassification, DistilBertTokenizer
from transformers import BertConfig, BertForSequenceClassification, BertTokenizer
from transformers import AdamW, WarmupLinearSchedule
from transformers import glue_compute_metrics as compute_metrics
from transformers import glue_output_modes as output_modes
from transformers import glue_processors as processors
from transformers import glue_convert_examples_to_features as convert_examples_to_features
import subprocess
params = {
    'num_epochs': 2,
    'warmup_ratio': 0.06,
    'weight_decay': 0.1,
    'adam_epsilon': 1e-6,
    'model_name': 'roberta-large',
    'max_grad_norm': 1.0,
    'lr': 2e-5,
    'bs': 32,
    'device': 'cuda',
    'task': 'cola',
    'data_dir': '/home/ubuntu/glue_data/CoLA',
    'max_seq_length': 50,
    'metric_name': 'mcc',
    'patience': 3,
    'seed': 935,
    'n': -1,
}
class DataProcessor():
    '''Preprocess the data, store data loaders and tokenizer'''
    _TOKEN_TYPES = {
        'roberta': RobertaTokenizer,
        'distilbert': DistilBertTokenizer,
        'bert': BertTokenizer,
    }
    def __init__(self, params):
        model_type = params['model_name'].split('-')[0]
        assert model_type in self._TOKEN_TYPES.keys()
        self.tok = self._TOKEN_TYPES[model_type]
        self.params = params
        self.processor = processors[self.params['task']]()
        self.output_mode = output_modes[self.params['task']]
        self.label_list = self.processor.get_labels()
    @staticmethod
    def _convert_to_tensors(features):
        all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
        all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
        all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long)
        all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
        return TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels)
    def _load_examples(self, tokenizer, evaluate):
        if evaluate:
            examples = self.processor.get_dev_examples(self.params['data_dir'])
        else:
            examples = self.processor.get_train_examples(self.params['data_dir'])
        if self.params['n'] >= 0:
            examples = sample(examples, self.params['n'])
        features = convert_examples_to_features(examples,
                                                tokenizer,
                                                label_list=self.label_list,
                                                max_length=self.params['max_seq_length'],
                                                output_mode=self.output_mode,
                                                pad_on_left=False,
                                                pad_token=tokenizer.convert_tokens_to_ids([tokenizer.pad_token])[0],
                                                pad_token_segment_id=0)
        return self._convert_to_tensors(features)
    def _define_tokenizer(self):
        return self.tok.from_pretrained(self.params['model_name'], do_lower_case=True)
    def load_data(self):
        tokenizer = self._define_tokenizer()
        self.train_data = self._load_examples(tokenizer, False)
        self.valid_data = self._load_examples(tokenizer, True)
        self.train_n = len(self.train_data)
        self.valid_n = len(self.valid_data)
        self.params['total_steps'] = self.params['num_epochs'] * self.train_n
        return self.params
    def create_loaders(self):
        self.train_dataloader = DataLoader(self.train_data, shuffle=True, batch_size=self.params['bs'])
        self.valid_dataloader = DataLoader(self.valid_data, shuffle=False, batch_size=2*self.params['bs'])
dp = DataProcessor(params)
params = dp.load_data()
dp.create_loaders()
def show_gpu(msg):
    """
    ref: https://discuss.pytorch.org/t/access-gpu-memory-usage-in-pytorch/3192/4
    """
    def query(field):
        return(subprocess.check_output(
            ['nvidia-smi', f'--query-gpu={field}',
             '--format=csv,nounits,noheader'],
            encoding='utf-8'))
    def to_int(result):
        return int(result.strip().split('\n')[0])
    used = to_int(query('memory.used'))
    total = to_int(query('memory.total'))
    pct = used/total
    print('\n' + msg, f'{100*pct:2.1f}% ({used} out of {total})')
# ### Running the Training Loop
def get_training_obj(params):
    config = RobertaConfig.from_pretrained(params['model_name'], num_labels=2)
    model = RobertaForSequenceClassification.from_pretrained(params['model_name'], config=config).to(params['device'])
    no_decay = ['bias', 'LayerNorm.weight']
    gpd_params = [
        {'params': [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
         'weight_decay': params['weight_decay']},
        {'params': [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
         'weight_decay': 0.0}
    ]
    optimizer = AdamW(gpd_params, lr=params['lr'], eps=params['adam_epsilon'])
    warmup_steps = int(params['warmup_ratio'] * params['total_steps'])
    scheduler = WarmupLinearSchedule(optimizer, warmup_steps=warmup_steps, t_total=params['total_steps'])
    return model, optimizer, scheduler
show_gpu('Initial GPU memory usage:')
for i in range(2):
    model, optimizer, scheduler = get_training_obj(params)
    show_gpu(f'{i}: GPU memory usage after loading training objects:')
    for epoch in range(1):
        epoch_start = time.time()
        model.train()
        for batch in dp.train_dataloader:
            xb,mb,_,yb = tuple(t.to(params['device']) for t in batch)
            outputs = model(input_ids = xb, attention_mask = mb, labels = yb)
            loss = outputs[0]
            loss.backward()
            optimizer.step()
            scheduler.step()
            optimizer.zero_grad()
    show_gpu(f'{i}: GPU memory usage after training model:')
    del model, optimizer, scheduler, loss, outputs
    torch.cuda.empty_cache()
    torch.cuda.synchronize()
    show_gpu(f'{i}: GPU memory usage after clearing cache:')
```
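For reference, a minimal sketch of the clean-up order I am currently experimenting with (this assumes that forcing Python's garbage collector between the deletes and `empty_cache()` is what makes the difference; I have not confirmed it yet):
```python
import gc
import torch

# ... after one training run finishes ...
del model, optimizer, scheduler, loss, outputs  # drop every Python reference to the GPU tensors
gc.collect()              # collect reference cycles (e.g. optimizer <-> scheduler) so the tensors become unreachable
torch.cuda.empty_cache()  # hand the now-unused cached blocks back to the driver
show_gpu('GPU memory usage after explicit gc:')
```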
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1742/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1742/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1741/comments | https://api.github.com/repos/huggingface/transformers/issues/1741/events | https://github.com/huggingface/transformers/pull/1741 | 517,952,165 | MDExOlB1bGxSZXF1ZXN0MzM2OTI0Nzc2 | 1,741 | GPT-2 XL | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,572 | 1,578 | 1,572 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1741/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1741",
"html_url": "https://github.com/huggingface/transformers/pull/1741",
"diff_url": "https://github.com/huggingface/transformers/pull/1741.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1741.patch",
"merged_at": 1572978719000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1740/comments | https://api.github.com/repos/huggingface/transformers/issues/1740/events | https://github.com/huggingface/transformers/pull/1740 | 517,880,719 | MDExOlB1bGxSZXF1ZXN0MzM2ODY1NzU4 | 1,740 | Fix CTRL past | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=h1) Report\n> Merging [#1740](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5afca00b4732f57329824e1538897e791e02e894?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1740 +/- ##\n==========================================\n+ Coverage 84.06% 84.06% +<.01% \n==========================================\n Files 105 105 \n Lines 15536 15537 +1 \n==========================================\n+ Hits 13060 13061 +1 \n Misses 2476 2476\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1740/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.47% <100%> (+0.01%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=footer). Last update [5afca00...8da47b0](https://codecov.io/gh/huggingface/transformers/pull/1740?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok great, nice catch and fix!"
] | 1,572 | 1,574 | 1,574 | MEMBER | null | Fixes the issue with the re-usable past in CTRL. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1740/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1740",
"html_url": "https://github.com/huggingface/transformers/pull/1740",
"diff_url": "https://github.com/huggingface/transformers/pull/1740.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1740.patch",
"merged_at": 1574893711000
} |
https://api.github.com/repos/huggingface/transformers/issues/1739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1739/comments | https://api.github.com/repos/huggingface/transformers/issues/1739/events | https://github.com/huggingface/transformers/pull/1739 | 517,854,914 | MDExOlB1bGxSZXF1ZXN0MzM2ODQ0NjI1 | 1,739 | [WIP] Adding Google T5 model | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=h1) Report\n> Merging [#1739](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7296f1010b6faaf3b1fb409bc5a9ebadcea51973?src=pr&el=desc) will **increase** coverage by `0.58%`.\n> The diff coverage is `88.79%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1739 +/- ##\n==========================================\n+ Coverage 80.35% 80.93% +0.58% \n==========================================\n Files 114 121 +7 \n Lines 17095 18329 +1234 \n==========================================\n+ Hits 13736 14835 +1099 \n- Misses 3359 3494 +135\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2VuY29kZXJfZGVjb2Rlci5weQ==) | `32.3% <0%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX3RyYW5zZm9feGxfdGVzdC5weQ==) | `94% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RyYW5zZm9feGxfdGVzdC5weQ==) | `94.54% <100%> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `97.41% <100%> (+0.49%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `95.2% <100%> (+0.06%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.3% <100%> (-0.05%)` | :arrow_down: |\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `71.29% <100%> (+0.27%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbS5weQ==) | `90.47% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3V0aWxzLnB5) | `91.46% <100%> (ø)` | :arrow_up: |\n| [transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2NvbmZpZ3VyYXRpb25fYXV0by5weQ==) | `44.18% <25%> (-0.82%)` | :arrow_down: |\n| ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/1739/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? 
= missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=footer). Last update [7296f10...cbb368c](https://codecov.io/gh/huggingface/transformers/pull/1739?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"How much work is there remaining on T5? Tried to check out this branch but it looks like the weights and SentencePiece models aren't uploaded yet.",
"I would like to know if this implementation of T5 is on hold because of priority switching to different tasks or maybe there was found an fundamental obstacle, which causes impossibility to move T5 to transformers library and this pull request will not be finished?\r\n\r\nThanks for great work!",
"No fundamental obstacle, it was just stuck in the second place of my TODO list for a while.\r\n\r\nThis model will need to be fine-tuned on a downstream task before usage though because the combination of mesh-tensorflow ops and bfloat16 seems to cause a bigger discrepancy with PyTorch's float32 ops than other architectures. ",
"Ok merging",
"Good work. Will you be releasing finetuning code soon?"
] | 1,572 | 1,579 | 1,576 | MEMBER | null | Add Google T5 model:
- paper: "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" https://arxiv.org/abs/1910.10683
- code: https://github.com/google-research/text-to-text-transfer-transformer
The original model makes heavy use of model and data parallelism to scale the training up to an 11B-parameter model. It is based on Mesh TensorFlow (https://github.com/tensorflow/mesh).
We will use a simpler version of model parallelism by simply distributing the layers on several GPUs.
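As a rough illustration only (the helper name, the list of blocks and the device ids below are placeholders, not the final API), the kind of placement we have in mind looks like this:
```python
import torch

def spread_layers(blocks, device_ids):
    """Naive pipeline-style placement: assign contiguous chunks of layers to successive GPUs."""
    per_device = (len(blocks) + len(device_ids) - 1) // len(device_ids)
    for i, block in enumerate(blocks):
        block.to(torch.device(f"cuda:{device_ids[i // per_device]}"))

# during the forward pass, the hidden states are moved to each block's device before calling it, e.g.
#   hidden = hidden.to(next(block.parameters()).device)
#   hidden = block(hidden)
```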
# Workflow for including a model from [README.md](https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/README.md)
Here an overview of the general workflow:
- [x] add model/configuration/tokenization classes
- [x] add conversion scripts
- [x] add tests
- [x] finalize
Let's details what should be done at each step
## Adding model/configuration/tokenization classes
Here is the workflow for adding model/configuration/tokenization classes:
- [x] copy the python files from the present folder to the main folder and rename them, replacing `xxx` with your model name,
- [x] edit the files to replace `XXX` (with various casing) with your model name
- [x] copy-past or create a simple configuration class for your model in the `configuration_...` file
- [x] copy-past or create the code for your model in the `modeling_...` files (PyTorch and TF 2.0)
- [x] copy-past or create a tokenizer class for your model in the `tokenization_...` file
# Adding conversion scripts
Here is the workflow for the conversion scripts:
- [x] copy the conversion script (`convert_...`) from the present folder to the main folder.
- [x] edit this script to convert your original checkpoint weights to the current PyTorch ones.
# Adding tests:
Here is the workflow for the adding tests:
- [x] copy the python files from the `tests` sub-folder of the present folder to the `tests` subfolder of the main folder and rename them, replacing `xxx` with your model name,
- [x] edit the tests files to replace `XXX` (with various casing) with your model name
- [x] edit the tests code as needed
# Final steps
You can then finish the addition step by adding imports for your classes in the common files:
- [x] add import for all the relevant classes in `__init__.py`
- [x] add your configuration in `configuration_auto.py`
- [x] add your PyTorch and TF 2.0 model respectively in `modeling_auto.py` and `modeling_tf_auto.py`
- [x] add your tokenizer in `tokenization_auto.py`
- [x] add your models and tokenizer to `pipeline.py`
- [x] add a link to your conversion script in the main conversion utility (currently in `__main__` but will be moved to the `commands` subfolder in the near future)
- [x] edit the PyTorch to TF 2.0 conversion script to add your model in the `convert_pytorch_checkpoint_to_tf2.py` file
- [x] add a mention of your model in the doc: `README.md` and the documentation itself at `docs/source/pretrained_models.rst`.
- [x] upload the pretrained weights, configurations and vocabulary files.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1739/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1739/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1739",
"html_url": "https://github.com/huggingface/transformers/pull/1739",
"diff_url": "https://github.com/huggingface/transformers/pull/1739.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1739.patch",
"merged_at": 1576312844000
} |
https://api.github.com/repos/huggingface/transformers/issues/1738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1738/comments | https://api.github.com/repos/huggingface/transformers/issues/1738/events | https://github.com/huggingface/transformers/issues/1738 | 517,788,562 | MDU6SXNzdWU1MTc3ODg1NjI= | 1,738 | BART | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers",
"following_url": "https://api.github.com/users/clmnt/following{/other_user}",
"gists_url": "https://api.github.com/users/clmnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clmnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clmnt/subscriptions",
"organizations_url": "https://api.github.com/users/clmnt/orgs",
"repos_url": "https://api.github.com/users/clmnt/repos",
"events_url": "https://api.github.com/users/clmnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/clmnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #1676"
] | 1,572 | 1,581 | 1,572 | MEMBER | null | # 🌟New model addition
## Model description
A method for pre-training seq2seq models by de-noising text. BART outperforms previous work on a range of generation tasks (summarization/dialogue/QA), while getting performance similar to RoBERTa on SQuAD/GLUE.
## Open Source status
* [ ] the model implementation is available: not yet
* [ ] the model weights are available: not yet
* [ ] who are the authors: @yinhanliu @ernamangoyal
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1738/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1738/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1737/comments | https://api.github.com/repos/huggingface/transformers/issues/1737/events | https://github.com/huggingface/transformers/pull/1737 | 517,704,856 | MDExOlB1bGxSZXF1ZXN0MzM2NzIxOTU2 | 1,737 | Documentation: Updating docblocks in optimizers.py | {
"login": "DomHudson",
"id": 10864294,
"node_id": "MDQ6VXNlcjEwODY0Mjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/10864294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DomHudson",
"html_url": "https://github.com/DomHudson",
"followers_url": "https://api.github.com/users/DomHudson/followers",
"following_url": "https://api.github.com/users/DomHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/DomHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DomHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DomHudson/subscriptions",
"organizations_url": "https://api.github.com/users/DomHudson/orgs",
"repos_url": "https://api.github.com/users/DomHudson/repos",
"events_url": "https://api.github.com/users/DomHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/DomHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=h1) Report\n> Merging [#1737](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e99071f10578adb0191288c1f3301e9a758d6200?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1737 +/- ##\n=======================================\n Coverage 83.93% 83.93% \n=======================================\n Files 94 94 \n Lines 13951 13951 \n=======================================\n Hits 11710 11710 \n Misses 2241 2241\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/1737/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL29wdGltaXphdGlvbi5weQ==) | `96.62% <ø> (ø)` | :arrow_up: |\n| [transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1737/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.08% <0%> (-0.59%)` | :arrow_down: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1737/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+1.59%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=footer). Last update [e99071f...c958962](https://codecov.io/gh/huggingface/transformers/pull/1737?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks for this!"
] | 1,572 | 1,572 | 1,572 | NONE | null | ## Summary
Updating documentation for the optimizer classes. I explicitly state that the scheduler classes multiply the optimizer's learning rate by a changing variable.
## Closes
https://github.com/huggingface/transformers/issues/1712 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1737/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1737",
"html_url": "https://github.com/huggingface/transformers/pull/1737",
"diff_url": "https://github.com/huggingface/transformers/pull/1737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1737.patch",
"merged_at": 1572993090000
} |
https://api.github.com/repos/huggingface/transformers/issues/1736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1736/comments | https://api.github.com/repos/huggingface/transformers/issues/1736/events | https://github.com/huggingface/transformers/pull/1736 | 517,683,009 | MDExOlB1bGxSZXF1ZXN0MzM2NzA0NDcw | 1,736 | Fix TFXLNet | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=h1) Report\n> Merging [#1736](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ba973342e3315471a9f44e7465cd245d7bcc5ea2?src=pr&el=desc) will **increase** coverage by `1.39%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1736 +/- ##\n==========================================\n+ Coverage 82.56% 83.95% +1.39% \n==========================================\n Files 94 94 \n Lines 13951 13951 \n==========================================\n+ Hits 11519 11713 +194 \n+ Misses 2432 2238 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3hsbmV0LnB5) | `87.82% <100%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.69% <0%> (+1.35%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.36% <0%> (+2.27%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (+2.46%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.08% <0%> (+12.65%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+17.02%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1736/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=footer). Last update [ba97334...dfb61ca](https://codecov.io/gh/huggingface/transformers/pull/1736?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,572 | 1,651 | 1,576 | MEMBER | null | PR to fix #1692 (type casting attention mask in TF 2.0 version of XLNet)
cc @mfuntowicz | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1736/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1736",
"html_url": "https://github.com/huggingface/transformers/pull/1736",
"diff_url": "https://github.com/huggingface/transformers/pull/1736.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1736.patch",
"merged_at": 1576928525000
} |
https://api.github.com/repos/huggingface/transformers/issues/1735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1735/comments | https://api.github.com/repos/huggingface/transformers/issues/1735/events | https://github.com/huggingface/transformers/pull/1735 | 517,680,710 | MDExOlB1bGxSZXF1ZXN0MzM2NzAyNjEw | 1,735 | Do not use GPU when importing transformers | {
"login": "ondewo",
"id": 42312363,
"node_id": "MDEyOk9yZ2FuaXphdGlvbjQyMzEyMzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/42312363?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ondewo",
"html_url": "https://github.com/ondewo",
"followers_url": "https://api.github.com/users/ondewo/followers",
"following_url": "https://api.github.com/users/ondewo/following{/other_user}",
"gists_url": "https://api.github.com/users/ondewo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ondewo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ondewo/subscriptions",
"organizations_url": "https://api.github.com/users/ondewo/orgs",
"repos_url": "https://api.github.com/users/ondewo/repos",
"events_url": "https://api.github.com/users/ondewo/events{/privacy}",
"received_events_url": "https://api.github.com/users/ondewo/received_events",
"type": "Organization",
"site_admin": false
} | [] | closed | false | null | [] | [
"Well this is actually used in other places, in `modeling_tf_pytorch_utils` for instance, and designed to be overriden by model class-specific inputs, for instance in `modeling_xlm` (hence all the CircleCI errors).\r\n\r\nMaybe making this a python property instead of an attribute would solve your issue as well?",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=h1) Report\n> Merging [#1735](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ba973342e3315471a9f44e7465cd245d7bcc5ea2?src=pr&el=desc) will **increase** coverage by `1.39%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1735 +/- ##\n==========================================\n+ Coverage 82.56% 83.95% +1.39% \n==========================================\n Files 94 94 \n Lines 13951 13952 +1 \n==========================================\n+ Hits 11519 11714 +195 \n+ Misses 2432 2238 -194\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3V0aWxzLnB5) | `92.45% <100%> (+0.04%)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.69% <0%> (+1.35%)` | :arrow_up: |\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `96.36% <0%> (+2.27%)` | :arrow_up: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (+2.46%)` | :arrow_up: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.08% <0%> (+12.65%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+17.02%)` | :arrow_up: |\n| [transformers/modeling\\_tf\\_pytorch\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1735/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RmX3B5dG9yY2hfdXRpbHMucHk=) | `92.95% <0%> (+83.09%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=footer). Last update [ba97334...124409d](https://codecov.io/gh/huggingface/transformers/pull/1735?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"When will this pull request be merged and released?",
"Indeed, LGTM, let's merge"
] | 1,572 | 1,575 | 1,575 | NONE | null | This PR addresses the issue #1507 and the main point is to **not** use GPU already at import time of transformers.
This was caused by the use of the class variable `dummy_inputs`, which initializes TF by using `tf.constant`. This can be prevented simply by making `dummy_inputs` a local variable, since it is not used anywhere else but in `__init__`.
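For illustration only, the property-based alternative suggested in the review could look roughly like this sketch (hypothetical code, not the actual diff; the dummy input values are placeholders):

```python
import tensorflow as tf


class TFPreTrainedModel(tf.keras.Model):
    @property
    def dummy_inputs(self):
        # Built lazily on first access, so merely importing transformers
        # no longer creates a tf.constant (and therefore does not touch the GPU).
        return {"input_ids": tf.constant([[1, 2, 3, 4, 5]])}
```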
commit messages:
* Make dummy inputs a local variable in TFPreTrainedModel. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1735/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1735",
"html_url": "https://github.com/huggingface/transformers/pull/1735",
"diff_url": "https://github.com/huggingface/transformers/pull/1735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1735.patch",
"merged_at": 1575543408000
} |
https://api.github.com/repos/huggingface/transformers/issues/1734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1734/comments | https://api.github.com/repos/huggingface/transformers/issues/1734/events | https://github.com/huggingface/transformers/pull/1734 | 517,621,124 | MDExOlB1bGxSZXF1ZXN0MzM2NjU0MTIz | 1,734 | add progress bar to convert_examples_to_features | {
"login": "orena1",
"id": 8983713,
"node_id": "MDQ6VXNlcjg5ODM3MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8983713?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/orena1",
"html_url": "https://github.com/orena1",
"followers_url": "https://api.github.com/users/orena1/followers",
"following_url": "https://api.github.com/users/orena1/following{/other_user}",
"gists_url": "https://api.github.com/users/orena1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/orena1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/orena1/subscriptions",
"organizations_url": "https://api.github.com/users/orena1/orgs",
"repos_url": "https://api.github.com/users/orena1/repos",
"events_url": "https://api.github.com/users/orena1/events{/privacy}",
"received_events_url": "https://api.github.com/users/orena1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=h1) Report\n> Merging [#1734](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d2e2577dd31ab92f21154e5280d64fa0ae90bbb8?src=pr&el=desc) will **increase** coverage by `0.28%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1734 +/- ##\n==========================================\n+ Coverage 83.67% 83.95% +0.28% \n==========================================\n Files 94 94 \n Lines 13950 13951 +1 \n==========================================\n+ Hits 11673 11713 +40 \n+ Misses 2277 2238 -39\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl90cmFuc2ZvX3hsLnB5) | `33.8% <0%> (-0.1%)` | :arrow_down: |\n| [transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl9ncHQyLnB5) | `96.72% <0%> (+0.81%)` | :arrow_up: |\n| [transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.67% <0%> (+1.37%)` | :arrow_up: |\n| [transformers/tests/modeling\\_tf\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX3RmX2NvbW1vbl90ZXN0LnB5) | `96.8% <0%> (+3.72%)` | :arrow_up: |\n| [transformers/tests/tokenization\\_utils\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl91dGlsc190ZXN0LnB5) | `96.15% <0%> (+3.84%)` | :arrow_up: |\n| [transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl90ZXN0c19jb21tb25zLnB5) | `100% <0%> (+3.92%)` | :arrow_up: |\n| [transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL2ZpbGVfdXRpbHMucHk=) | `74.17% <0%> (+4.39%)` | :arrow_up: |\n| [transformers/tokenization\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1734/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rva2VuaXphdGlvbl94bG5ldC5weQ==) | `90.24% <0%> (+6.5%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=footer). Last update [d2e2577...d790616](https://codecov.io/gh/huggingface/transformers/pull/1734?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes, nice indeed, thanks"
] | 1,572 | 1,572 | 1,572 | CONTRIBUTOR | null | It takes a considerable amount of time (~10 min) to parse the examples to features, so it is good to have a progress bar to track this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1734/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1734",
"html_url": "https://github.com/huggingface/transformers/pull/1734",
"diff_url": "https://github.com/huggingface/transformers/pull/1734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1734.patch",
"merged_at": 1572950061000
} |
https://api.github.com/repos/huggingface/transformers/issues/1733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1733/comments | https://api.github.com/repos/huggingface/transformers/issues/1733/events | https://github.com/huggingface/transformers/issues/1733 | 517,521,903 | MDU6SXNzdWU1MTc1MjE5MDM= | 1,733 | 🌟New model addition: VL-BERT | {
"login": "Eurus-Holmes",
"id": 34226570,
"node_id": "MDQ6VXNlcjM0MjI2NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34226570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eurus-Holmes",
"html_url": "https://github.com/Eurus-Holmes",
"followers_url": "https://api.github.com/users/Eurus-Holmes/followers",
"following_url": "https://api.github.com/users/Eurus-Holmes/following{/other_user}",
"gists_url": "https://api.github.com/users/Eurus-Holmes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eurus-Holmes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eurus-Holmes/subscriptions",
"organizations_url": "https://api.github.com/users/Eurus-Holmes/orgs",
"repos_url": "https://api.github.com/users/Eurus-Holmes/repos",
"events_url": "https://api.github.com/users/Eurus-Holmes/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eurus-Holmes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"We have released the code for VL-BERT: https://github.com/jackroos/VL-BERT. Thanks for your attention!",
"@jackroos Awesome! Thanks for your work!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@thomwolf @srush Any plans to integrate any visual BERT kind of model (like VilBERT, LXMERT or VL-BERT) in the near future? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,572 | 1,588 | 1,588 | NONE | null | # 🌟New model addition
## Model description
[VL-BERT: PRE-TRAINING OF GENERIC VISUALLINGUISTIC REPRESENTATIONS](https://arxiv.org/pdf/1908.08530.pdf)
> *We introduce a new pre-trainable generic representation for visual-linguistic tasks,
called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both
visual and linguistic embedded features as input. In it, each element of the input is either of a word from the input sentence, or a region-of-interest (RoI) from
the input image. It is designed to fit for most of the visual-linguistic downstream
tasks. To better exploit the generic representation, we pre-train VL-BERT on the
massive-scale Conceptual Captions dataset, together with text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better
align the visual-linguistic clues and benefit the downstream tasks, such as visual
commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved the first place of single
model on the leaderboard of the VCR benchmark.*
<!-- Important information -->
## Open Source status
* [x] the model implementation is available: https://github.com/jackroos/VL-BERT
* [x] the model weights are available: https://github.com/jackroos/VL-BERT/blob/master/model/pretrained_model/PREPARE_PRETRAINED_MODELS.md
## Additional context
The pre-trainable generic representation for visual-linguistic tasks is becoming more and more important.
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1733/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1732/comments | https://api.github.com/repos/huggingface/transformers/issues/1732/events | https://github.com/huggingface/transformers/issues/1732 | 517,520,850 | MDU6SXNzdWU1MTc1MjA4NTA= | 1,732 | TFBertForSequenceClassification.from_pretrained ERROR | {
"login": "dkicenan",
"id": 21953155,
"node_id": "MDQ6VXNlcjIxOTUzMTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/21953155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkicenan",
"html_url": "https://github.com/dkicenan",
"followers_url": "https://api.github.com/users/dkicenan/followers",
"following_url": "https://api.github.com/users/dkicenan/following{/other_user}",
"gists_url": "https://api.github.com/users/dkicenan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkicenan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkicenan/subscriptions",
"organizations_url": "https://api.github.com/users/dkicenan/orgs",
"repos_url": "https://api.github.com/users/dkicenan/repos",
"events_url": "https://api.github.com/users/dkicenan/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkicenan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just give the path to your folder instead of the model so the loading method can also find the configuration files.\r\n\r\nYou can read more [here](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained)",
"@thomwolf Thank you for your help! I can load the pretrained model now. "
] | 1,572 | 1,572 | 1,572 | NONE | null | ### code:
model = TFBertForSequenceClassification.from_pretrained('./bert-base-cased-tf_model.h5')
### Error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 212, in from_pretrained
**kwargs
File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/site-packages/transformers/configuration_utils.py", line 154, in from_pretrained
config = cls.from_json_file(resolved_config_file)
File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/site-packages/transformers/configuration_utils.py", line 186, in from_json_file
text = reader.read()
File "/DATA/disk1/wutianlong/python/miniconda3/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
### Environment
py:3.6
tf:2.0.0
###
I have downloaded "bert-base-cased-tf_model.h5", "bert-base-cased-vocab.txt" and "bert-base-cased-config.json" into the same dir. When I load the model with `from_pretrained`, I get the error above. Please help me understand what's wrong and how to fix it. Thanks!
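For reference, the fix suggested in the comments is to pass the folder rather than the weights file; a minimal sketch, assuming the files in that folder have been renamed to the names the library looks for (`config.json`, `vocab.txt`, `tf_model.h5`) and that the folder path shown here is illustrative:

```python
from transformers import TFBertForSequenceClassification, BertTokenizer

# Point from_pretrained at the directory, not at the .h5 file itself.
model = TFBertForSequenceClassification.from_pretrained("./bert-base-cased/")
tokenizer = BertTokenizer.from_pretrained("./bert-base-cased/")
```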
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1732/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/1732/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1731/comments | https://api.github.com/repos/huggingface/transformers/issues/1731/events | https://github.com/huggingface/transformers/issues/1731 | 517,509,773 | MDU6SXNzdWU1MTc1MDk3NzM= | 1,731 | Threads running on evaluation? | {
"login": "vivekam101",
"id": 5806230,
"node_id": "MDQ6VXNlcjU4MDYyMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5806230?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vivekam101",
"html_url": "https://github.com/vivekam101",
"followers_url": "https://api.github.com/users/vivekam101/followers",
"following_url": "https://api.github.com/users/vivekam101/following{/other_user}",
"gists_url": "https://api.github.com/users/vivekam101/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vivekam101/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vivekam101/subscriptions",
"organizations_url": "https://api.github.com/users/vivekam101/orgs",
"repos_url": "https://api.github.com/users/vivekam101/repos",
"events_url": "https://api.github.com/users/vivekam101/events{/privacy}",
"received_events_url": "https://api.github.com/users/vivekam101/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"*attaching complete log\r\n[test_log.txt](https://github.com/huggingface/transformers/files/3806959/test_log.txt)\r\n",
"Sorry. guys. logger has been called multiple times hence the issue closing the issue."
] | 1,572 | 1,572 | 1,572 | NONE | null | Hi Transformers :)
Thanks for the wonderful repo. One quick question. I was running examples/test_examples.py and came to see logs printing multiple times while running in cpu. Is there threads spawned ?
snippet from examples/test_examples.py
**
testargs = ["run_squad.py",
"--train_file=./tests_samples/SQUAD/dev-v2.0-small.json",
"--predict_file=./tests_samples/SQUAD/dev-v2.0-small.json",
"--model_name=bert-base-uncased",
"--output_dir=./tests_samples/temp_dir",
"--max_steps=10",
"--warmup_steps=2",
"--do_train",
"--do_eval",
"--version_2_with_negative",
"--learning_rate=2e-4",
"--per_gpu_train_batch_size=2",
"--per_gpu_eval_batch_size=1",
"--overwrite_output_dir",
"--seed=42"]
```
Result where I see multiple log prints
-----
11/05/2019 07:11:33 - INFO - utils_squad - *** Example ***
*** Example ***
*** Example ***
11/05/2019 07:11:33 - INFO - utils_squad - unique_id: 1000000013
unique_id: 1000000013
unique_id: 1000000013
11/05/2019 07:11:33 - INFO - utils_squad - example_index: 13
example_index: 13
example_index: 13
11/05/2019 07:11:33 - INFO - utils_squad - doc_span_index: 0
doc_span_index: 0
doc_span_index: 0
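Given the resolution noted in the comments (the logger was configured more than once, not extra threads), a minimal guard against duplicate handlers could look like this sketch:

```python
import logging

logger = logging.getLogger(__name__)
if not logger.handlers:
    # Only attach a handler the first time this setup runs, so repeated
    # configuration does not print every record several times.
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s"))
    logger.addHandler(handler)
```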
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1731/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1730/comments | https://api.github.com/repos/huggingface/transformers/issues/1730/events | https://github.com/huggingface/transformers/issues/1730 | 517,486,844 | MDU6SXNzdWU1MTc0ODY4NDQ= | 1,730 | resize_token_embeddings doesn't work as expected for BertForMaskedLM | {
"login": "praateekmahajan",
"id": 7589415,
"node_id": "MDQ6VXNlcjc1ODk0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praateekmahajan",
"html_url": "https://github.com/praateekmahajan",
"followers_url": "https://api.github.com/users/praateekmahajan/followers",
"following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}",
"gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions",
"organizations_url": "https://api.github.com/users/praateekmahajan/orgs",
"repos_url": "https://api.github.com/users/praateekmahajan/repos",
"events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}",
"received_events_url": "https://api.github.com/users/praateekmahajan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi! May I know what version are you using? On the current master branch, this is the result:\r\n\r\n```py\r\nimport torch\r\nfrom transformers import BertForMaskedLM\r\n\r\ninput_ids = torch.tensor([[1,2,3,4,5]])\r\nprint(model.bert.embeddings.word_embeddings.num_embeddings) # 119547\r\nprint(model.cls.predictions.decoder.out_features) # 119547\r\nmodel.resize_token_embeddings(119547 + 5)\r\nprint(model.bert.embeddings.word_embeddings.num_embeddings) # 119552\r\nprint(model.cls.predictions.decoder.out_features) # 119552\r\n```",
"Hmm, I am also on 2.1.1. Looks like in your example you're not loading the model from a pretrained model. (Though I am not sure how that makes a difference, but looks like it does)\r\n\r\n\r\n```python\r\nmodel = BertForMaskedLM.from_pretrained(model_type, cache_dir=\"/tmp/bert_models\")\r\n\r\nprint(transformers.__version__) #2.1.1\r\nprint(model.bert.embeddings.word_embeddings.num_embeddings) # 119547\r\nprint(model.cls.predictions.decoder.out_features) # 119547\r\nmodel.resize_token_embeddings(119547 + 5)\r\nprint(model.bert.embeddings.word_embeddings.num_embeddings) # 119552\r\nprint(model.cls.predictions.decoder.out_features) # 119547\r\n```\r\n\r\nAnd as per the current master also I don't see where we copy over the params from the old `cls` to the new updated `cls` with the new number of tokens as we do it for the embedding layer [here](https://github.com/huggingface/transformers/blob/de890ae67d43e1e5d031a815dab5dfed081e9a95/transformers/modeling_utils.py#L196).\r\n\r\n\r\nIn the next comment, I can share the temporary hack that I have.",
"```python\r\nclass CustomBertForMaskedLM(BertForMaskedLM):\r\n def __init__(self, config):\r\n super().__init__(config)\r\n\r\n def resize_embedding_and_fc(self, new_num_tokens):\r\n # Change the FC \r\n old_fc = self.cls.predictions.decoder\r\n self.cls.predictions.decoder = self._get_resized_fc(old_fc, new_num_tokens)\r\n \r\n # Change the bias\r\n old_bias = self.cls.predictions.bias\r\n self.cls.predictions.bias = self._get_resized_bias(old_bias, new_num_tokens)\r\n \r\n # Change the embedding\r\n self.resize_token_embeddings(new_num_tokens)\r\n \r\n \r\n def _get_resized_bias(self, old_bias, new_num_tokens):\r\n old_num_tokens = old_bias.data.size()[0]\r\n if old_num_tokens == new_num_tokens:\r\n return old_bias\r\n\r\n # Create new biases\r\n new_bias = nn.Parameter(torch.zeros(new_num_tokens))\r\n new_bias.to(old_bias.device)\r\n\r\n # Copy from the previous weights\r\n num_tokens_to_copy = min(old_num_tokens, new_num_tokens)\r\n new_bias.data[:num_tokens_to_copy] = old_bias.data[:num_tokens_to_copy]\r\n return new_bias\r\n \r\n def _get_resized_fc(self, old_fc, new_num_tokens):\r\n\r\n old_num_tokens, old_embedding_dim = old_fc.weight.size()\r\n if old_num_tokens == new_num_tokens:\r\n return old_fc\r\n\r\n # Create new weights\r\n new_fc = nn.Linear(in_features=old_embedding_dim, out_features=new_num_tokens)\r\n new_fc.to(old_fc.weight.device)\r\n\r\n # initialize all weights (in particular added tokens)\r\n self._init_weights(new_fc)\r\n\r\n # Copy from the previous weights\r\n num_tokens_to_copy = min(old_num_tokens, new_num_tokens)\r\n new_fc.weight.data[:num_tokens_to_copy, :] = old_fc.weight.data[:num_tokens_to_copy, :]\r\n return new_fc\r\n```\r\n\r\n\r\nUsing this, I can achieve the output as expected.\r\n\r\n\r\n",
"Indeed I had forgotten the line about the model initialization, but it is initialized from a pretrained checkpoint as you have done. Running your code gives me the same results:\r\n\r\n```py\r\nfrom transformers import BertForMaskedLM\r\nimport torch\r\n\r\nmodel = BertForMaskedLM.from_pretrained(\"bert-base-cased\")\r\n\r\nprint(model.bert.embeddings.word_embeddings.num_embeddings) # 28996\r\nprint(model.cls.predictions.decoder.out_features) # 28996\r\nmodel.resize_token_embeddings(119547 + 5)\r\nprint(model.bert.embeddings.word_embeddings.num_embeddings) # 119552\r\nprint(model.cls.predictions.decoder.out_features) # 119552\r\n```\r\n\r\n but please note I am running on the **master branch**, not on the official pypi release. In the official release the predictions layer indeed isn't correctly tied, but it has been fixed since then with #1721.\r\n\r\nYou can install from master with: `pip install git+https://github.com/huggingface/transformers`.\r\n\r\nAs for where the tying happens, it is [in the superclass `PreTrainedModel`](https://github.com/huggingface/transformers/blob/master/transformers/modeling_utils.py#L114) which ties the input embeddings and output embeddings if there are any.",
"@LysandreJik This issue is only partially fixed. In _tie_or_clone_weights(), it tries to fix output_embeddings.bias, but output_embeddings is actually self.cls.predictions.decoder, which has no bias term. Instead, self.cls.predictions has a bias term. This bias term is not fixed, and exceptions will occur. \r\n\r\nI'll have to use @praateekmahajan 's hack temporarily 😃 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This is still an issue, I'm reopening it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@LysandreJik do you know if its `wontfix` or just in your backlog. Lmk if I can help somehow. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,572 | 1,590 | 1,590 | CONTRIBUTOR | null | ## 🐛 Bug
<!-- Important information -->
Model I am using : **BertForMaskedLM**
Language I am using the model on (English, Chinese....): **multilingual**
The problem arise when using:
* [x] the official example scripts: **BertForMaskedLM**
The tasks I am working on is:
* [x] my own task or dataset: **Finetuning with newly added tokens**.
## To Reproduce
Steps to reproduce the behavior:
```python
# Get model
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
# define input
input = torch.tensor([[1,2,3,4,5]])
# before experiment
print(f"BERT num embeddings before\t: {model.bert.embeddings.word_embeddings.num_embeddings}")
print(f"LM Decoder num embedding before\t: {model.cls.predictions.decoder.out_features}")
print(f"MLM Loss\t\t\t: {model(input_ids=input, masked_lm_labels=input)[0]}")
# change embedding size for experiment
model.resize_token_embeddings(119547 + 5)
print(f"\nBERT num embeddings after\t: {model.bert.embeddings.word_embeddings.num_embeddings}")
print(f"LM Decoder num embedding after\t: {model.cls.predictions.decoder.out_features}")
# print failure
try:
print(f"MLM Loss\t\t\t: {model(input_ids=input, masked_lm_labels=input)[0]}")
except RuntimeError:
print(" ---- Forward Pass failed ---- ")
```
This is the output that I see
```
BERT num embeddings before : 119547
LM Decoder num embedding before : 119547
MLM Loss : 18.269268035888672
BERT num embeddings after : 119552
LM Decoder num embedding after : 119547
---- Forward Pass failed ----
```
As you can see, the LM decoder did not change its embedding size, so it will never predict the new tokens.
The expectation would be that `resize_token_embeddings` also handles the FC layer inside the LM decoder.
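For reference, a minimal workaround sketch along the lines of the hack shared in the comments; it assumes the decoder is a plain `nn.Linear` without bias and that the prediction head keeps a separate `bias` parameter, which is the case for this version of `BertForMaskedLM`:

```python
import torch
from torch import nn
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")
new_num_tokens = 119547 + 5

# Resize the input embeddings (this part the library already handles).
model.resize_token_embeddings(new_num_tokens)

# Resize the output projection so it matches the new vocabulary size.
old_decoder = model.cls.predictions.decoder
new_decoder = nn.Linear(old_decoder.in_features, new_num_tokens, bias=False)
new_decoder.weight.data[:old_decoder.out_features] = old_decoder.weight.data
model.cls.predictions.decoder = new_decoder

# The prediction head keeps its own bias of size vocab_size; grow it too.
old_bias = model.cls.predictions.bias
new_bias = nn.Parameter(torch.zeros(new_num_tokens))
new_bias.data[:old_bias.size(0)] = old_bias.data
model.cls.predictions.bias = new_bias
```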
I also tried using BertConfig:
```python
config = BertConfig.from_pretrained("bert-base-multilingual-cased")
config.vocab_size = 119547 + 5
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased", config=config)
```
This naturally results in an error while loading the `state_dict`:
```
RuntimeError: Error(s) in loading state_dict for BertForMaskedLM:
size mismatch for bert.embeddings.word_embeddings.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([119554, 768]).
size mismatch for cls.predictions.bias: copying a param with shape torch.Size([119547]) from checkpoint, the shape in current model is torch.Size([119554]).
size mismatch for cls.predictions.decoder.weight: copying a param with shape torch.Size([119547, 768]) from checkpoint, the shape in current model is torch.Size([119554, 768]).
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1730/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1729/comments | https://api.github.com/repos/huggingface/transformers/issues/1729/events | https://github.com/huggingface/transformers/issues/1729 | 517,451,643 | MDU6SXNzdWU1MTc0NTE2NDM= | 1,729 | Roberta embeddings comparison | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Any update on this issue?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> ## ❓ Questions & Help\r\n> **SYSTEM**\r\n> OS: Linux pop-os 5.0.0\r\n> Python version: 3.6.8\r\n> Torch version: 1.3.0\r\n> Transformers version: 2.1.1\r\n> I am running this linux VM with the above software versions on a Windows 10 laptop.\r\n> \r\n> I wanted to compare the cosine similarity between elmo and roberta embeddings for two sequences:\r\n> \r\n> ```python\r\n> from allennlp.commands.elmo import ElmoEmbedder\r\n> from allennlp.data.tokenizers.spacy_tokenizer import SpacyTokenizer\r\n> from scipy.spatial.distance import cdist\r\n> import numpy as np\r\n> import pandas as pd\r\n> from transformers import RobertaTokenizer, RobertaModel\r\n> import torch\r\n> \r\n> seqA = 'How many cars are in the target parking lot'\r\n> seqB = 'How many countries are in the continent of Africa'\r\n> comparison_df = pd.DataFrame()\r\n> sim_list = []\r\n> \r\n> model_types = ['elmo', 'roberta']\r\n> for model_type in model_types:\r\n> sim_list = []\r\n> if model_type == 'elmo':\r\n> tokenizer = SpacyTokenizer()\r\n> model = ElmoEmbedder()\r\n> seqA_tokens = np.array([s.text for s in tokenizer.tokenize(seqA)])\r\n> seqB_tokens = np.array([s.text for s in tokenizer.tokenize(seqB)])\r\n> \r\n> seqA_embeddings = model.embed_sentence(seqA_tokens)\r\n> seqB_embeddings = model.embed_sentence(seqB_tokens)\r\n> \r\n> seqA_embeds = np.sum(seqA_embeddings[-1], axis=0).reshape((1, seqA_embeddings.shape[2]))\r\n> seqB_embeds = np.sum(seqB_embeddings[-1], axis=0).reshape((1, seqB_embeddings.shape[2]))\r\n> Y = cdist(seqA_embeds, seqB_embeds, 'cosine')[0][0]\r\n> cos_sim = 1.0 - Y\r\n> sim_list.append(cos_sim)\r\n> \r\n> \r\n> elif model_type == 'roberta':\r\n> tokenizer = RobertaTokenizer.from_pretrained('roberta-large')\r\n> model = RobertaModel.from_pretrained('roberta-large')\r\n> seqA_tokens = torch.tensor(tokenizer.encode(seqA, add_special_tokens=True)).unsqueeze(0)\r\n> seqB_tokens = torch.tensor(tokenizer.encode(seqB, add_special_tokens=True)).unsqueeze(0)\r\n> \r\n> with torch.no_grad():\r\n> seqA_embeddings = model(seqA_tokens)[0]\r\n> seqB_embeddings = model(seqB_tokens)[0]\r\n> \r\n> seqA_embeddings = seqA_embeddings.detach().numpy()\r\n> seqA_embeddings = np.sum(seqA_embeddings[-1], axis=0).reshape((1, seqA_embeddings.shape[2]))\r\n> \r\n> seqB_embeddings = seqB_embeddings.detach().numpy()\r\n> seqB_embeddings = np.sum(seqB_embeddings[-1], axis=0).reshape((1, seqB_embeddings.shape[2]))\r\n> \r\n> Y = cdist(seqA_embeddings, seqB_embeddings, 'cosine')[0][0]\r\n> cos_sim = 1.0 - Y\r\n> sim_list.append(cos_sim)\r\n> \r\n> comparison_df[model_type] = sim_list\r\n> print(comparison_df)\r\n> ```\r\n> \r\n> The result of the print statement is:\r\n> \r\n> ```\r\n> elmo roberta\r\n> 0 0.55092 0.996526\r\n> ```\r\n> \r\n> I find this strange because, despite having the same number of tokens, I would expect the similarity between the two sequences to not be so close to 1 as roberta is showing. My concern is that I am not accessing the embeddings correctly or not pooling correctly (or both), and was wondering if people here wouldn't mind letting me know what I am doing wrong? Thanks in advance!\r\n\r\n@aclifton314 have you deduce any meaningful intuition on this thus far ?? I found another similar issue, hopefully this can help: https://github.com/huggingface/transformers/issues/2298 ...",
"@aclifton314 \r\n\r\nIn addition to the above link, I also ran a quick test with most popular models not just Roberta, it seems there are some models that can differentiate better, though I am not sure if you can generalize that statement without the test. Code and test results below:\r\n\r\n```\r\nimport torch\r\nfrom torch import nn\r\nfrom transformers import *\r\n\r\nimport numpy as np\r\nfrom scipy.spatial import distance\r\nfrom sklearn.metrics.pairwise import cosine_similarity\r\n\r\n# for 10 transformer architectures and 30 pretrained weights.\r\n# Model | Tokenizer | Pretrained weights shortcut\r\nMODELS = [(BertModel, BertTokenizer, 'bert-base-uncased'),\r\n (OpenAIGPTModel, OpenAIGPTTokenizer, 'openai-gpt'),\r\n (GPT2Model, GPT2Tokenizer, 'gpt2'),\r\n (CTRLModel, CTRLTokenizer, 'ctrl'),\r\n (TransfoXLModel, TransfoXLTokenizer, 'transfo-xl-wt103'),\r\n (XLNetModel, XLNetTokenizer, 'xlnet-base-cased'),\r\n (XLMModel, XLMTokenizer, 'xlm-mlm-enfr-1024'),\r\n (DistilBertModel, DistilBertTokenizer, 'distilbert-base-uncased'),\r\n (RobertaModel, RobertaTokenizer, 'roberta-base'),\r\n (XLMRobertaModel, XLMRobertaTokenizer, 'xlm-roberta-base'),\r\n ]\r\n\r\nfor model_class, tokenizer_class, pretrained_weights in MODELS:\r\n tokenizer = tokenizer_class.from_pretrained(pretrained_weights)\r\n model = model_class.from_pretrained(pretrained_weights)\r\n\r\n print (\"\")\r\n print (\"Model\", ''.join(e for e in str(type(model)).split(\".\")[-1] if e.isalnum()))\r\n\r\n input_ids1 = torch.tensor([tokenizer.encode(\"How many cars are in the target parking lot\", add_special_tokens=True)]) \r\n input_ids2 = torch.tensor([tokenizer.encode(\"How many countries are in the continent of Africa\", add_special_tokens=True)]) \r\n\r\n with torch.no_grad():\r\n last_hidden_states1 = model(input_ids1)[0].detach().numpy()\r\n last_hidden_states2 = model(input_ids2)[0].detach().numpy()\r\n \r\n last_hidden_states1_sq = np.sum(last_hidden_states1[-1], axis=0).reshape((1, last_hidden_states1.shape[2]))\r\n last_hidden_states2_sq = np.sum(last_hidden_states2[-1], axis=0).reshape((1, last_hidden_states2.shape[2]))\r\n \r\n print (\"Similarity between Sent1 & Sent 2\", cosine_similarity(last_hidden_states1_sq, last_hidden_states2_sq) )\r\n\r\n```\r\n**Results**:\r\n\r\nModel BertModel\r\nSimilarity between Sent1 & Sent 2 [[0.7234119]]\r\n\r\nModel OpenAIGPTModel\r\nSimilarity between Sent1 & Sent 2 [[0.6779459]]\r\n\r\nModel GPT2Model\r\nSimilarity between Sent1 & Sent 2 [[0.9986031]]\r\n\r\nModel CTRLModel\r\nSimilarity between Sent1 & Sent 2 [[0.9086089]]\r\n\r\nModel TransfoXLModel\r\nSimilarity between Sent1 & Sent 2 [[0.77547705]]\r\n\r\nModel XLNetModel\r\nSimilarity between Sent1 & Sent 2 [[0.98956037]]\r\n\r\nModel XLMModel\r\nSimilarity between Sent1 & Sent 2 [[0.623127]]\r\n\r\nModel DistilBertModel\r\nSimilarity between Sent1 & Sent 2 [[0.7640958]]\r\n\r\nModel RobertaModel\r\nSimilarity between Sent1 & Sent 2 [[0.96598077]]\r\n\r\nModel XLMRobertaModel\r\nSimilarity between Sent1 & Sent 2 [[0.99813896]]\r\n\r\n**I am interested in Passage (Few Paragraphs) Level Embeddings based Similarity Comparison, if anyone has any viable solution, suggestions please do share.** \r\n\r\nThanks!!"
] | 1,572 | 1,581 | 1,581 | NONE | null | ## ❓ Questions & Help
**SYSTEM**
OS: Linux pop-os 5.0.0
Python version: 3.6.8
Torch version: 1.3.0
Transformers version: 2.1.1
I am running this linux VM with the above software versions on a Windows 10 laptop.
I wanted to compare the cosine similarity between elmo and roberta embeddings for two sequences:
```python
from allennlp.commands.elmo import ElmoEmbedder
from allennlp.data.tokenizers.spacy_tokenizer import SpacyTokenizer
from scipy.spatial.distance import cdist
import numpy as np
import pandas as pd
from transformers import RobertaTokenizer, RobertaModel
import torch
seqA = 'How many cars are in the target parking lot'
seqB = 'How many countries are in the continent of Africa'
comparison_df = pd.DataFrame()
sim_list = []
model_types = ['elmo', 'roberta']
for model_type in model_types:
sim_list = []
if model_type == 'elmo':
tokenizer = SpacyTokenizer()
model = ElmoEmbedder()
seqA_tokens = np.array([s.text for s in tokenizer.tokenize(seqA)])
seqB_tokens = np.array([s.text for s in tokenizer.tokenize(seqB)])
seqA_embeddings = model.embed_sentence(seqA_tokens)
seqB_embeddings = model.embed_sentence(seqB_tokens)
seqA_embeds = np.sum(seqA_embeddings[-1], axis=0).reshape((1, seqA_embeddings.shape[2]))
seqB_embeds = np.sum(seqB_embeddings[-1], axis=0).reshape((1, seqB_embeddings.shape[2]))
Y = cdist(seqA_embeds, seqB_embeds, 'cosine')[0][0]
cos_sim = 1.0 - Y
sim_list.append(cos_sim)
elif model_type == 'roberta':
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')
seqA_tokens = torch.tensor(tokenizer.encode(seqA, add_special_tokens=True)).unsqueeze(0)
seqB_tokens = torch.tensor(tokenizer.encode(seqB, add_special_tokens=True)).unsqueeze(0)
with torch.no_grad():
seqA_embeddings = model(seqA_tokens)[0]
seqB_embeddings = model(seqB_tokens)[0]
seqA_embeddings = seqA_embeddings.detach().numpy()
seqA_embeddings = np.sum(seqA_embeddings[-1], axis=0).reshape((1, seqA_embeddings.shape[2]))
seqB_embeddings = seqB_embeddings.detach().numpy()
seqB_embeddings = np.sum(seqB_embeddings[-1], axis=0).reshape((1, seqB_embeddings.shape[2]))
Y = cdist(seqA_embeddings, seqB_embeddings, 'cosine')[0][0]
cos_sim = 1.0 - Y
sim_list.append(cos_sim)
comparison_df[model_type] = sim_list
print(comparison_df)
```
The result of the print statement is:
```
elmo roberta
0 0.55092 0.996526
```
I find this strange: even though the two sequences have the same number of tokens, I would not expect their similarity to be as close to 1 as RoBERTa shows. My concern is that I am not accessing the embeddings correctly or not pooling correctly (or both), and was wondering if people here wouldn't mind letting me know what I am doing wrong? Thanks in advance!
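For reference, one common convention is to mean-pool only the non-special tokens instead of summing everything; a minimal sketch reusing `model` and `seqA_tokens` from above (whether this is the "right" pooling here is an assumption, not something settled in this thread):

```python
import torch

with torch.no_grad():
    hidden = model(seqA_tokens)[0]        # shape: (1, seq_len, 1024) for roberta-large

# Drop the <s> and </s> special tokens, then average the remaining token vectors.
token_vecs = hidden[0, 1:-1, :]
mean_pooled = token_vecs.mean(dim=0, keepdim=True)   # shape: (1, 1024)
```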
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1729/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1728/comments | https://api.github.com/repos/huggingface/transformers/issues/1728/events | https://github.com/huggingface/transformers/issues/1728 | 517,418,210 | MDU6SXNzdWU1MTc0MTgyMTA= | 1,728 | glue_convert_examples_to_features not working if no task is provided | {
"login": "oliviernguyenquoc",
"id": 7463783,
"node_id": "MDQ6VXNlcjc0NjM3ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7463783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oliviernguyenquoc",
"html_url": "https://github.com/oliviernguyenquoc",
"followers_url": "https://api.github.com/users/oliviernguyenquoc/followers",
"following_url": "https://api.github.com/users/oliviernguyenquoc/following{/other_user}",
"gists_url": "https://api.github.com/users/oliviernguyenquoc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oliviernguyenquoc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oliviernguyenquoc/subscriptions",
"organizations_url": "https://api.github.com/users/oliviernguyenquoc/orgs",
"repos_url": "https://api.github.com/users/oliviernguyenquoc/repos",
"events_url": "https://api.github.com/users/oliviernguyenquoc/events{/privacy}",
"received_events_url": "https://api.github.com/users/oliviernguyenquoc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,572 | 1,578 | 1,578 | NONE | null | ## 🐛 Bug
<!-- Important information -->
The problem arise when using:
* [X] my own modified scripts: (give details)
The tasks I am working on is:
* [X] my own task or dataset: (give details)
In these lines:
https://github.com/huggingface/transformers/blob/04c69db399b2ab9e3af872ce46730fbd9f17aec3/transformers/data/processors/glue.py#L66-L69
If no task is provided (even though it's an optional parameter), no processor is selected and `label_list` is still `None`.
At least an error should be thrown.
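For illustration, a guard along these lines inside `glue_convert_examples_to_features` would make the failure explicit (a sketch of a possible check, not the actual library code):

```python
# Inside glue_convert_examples_to_features(...):
if task is not None:
    processor = glue_processors[task]()
    if label_list is None:
        label_list = processor.get_labels()
elif label_list is None or output_mode is None:
    raise ValueError(
        "When no `task` is given, both `label_list` and `output_mode` must be provided."
    )
```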
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1728/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1727/comments | https://api.github.com/repos/huggingface/transformers/issues/1727/events | https://github.com/huggingface/transformers/issues/1727 | 517,413,141 | MDU6SXNzdWU1MTc0MTMxNDE= | 1,727 | loss is nan, for training on MNLI dataset | {
"login": "antgr",
"id": 2175768,
"node_id": "MDQ6VXNlcjIxNzU3Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2175768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antgr",
"html_url": "https://github.com/antgr",
"followers_url": "https://api.github.com/users/antgr/followers",
"following_url": "https://api.github.com/users/antgr/following{/other_user}",
"gists_url": "https://api.github.com/users/antgr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antgr/subscriptions",
"organizations_url": "https://api.github.com/users/antgr/orgs",
"repos_url": "https://api.github.com/users/antgr/repos",
"events_url": "https://api.github.com/users/antgr/events{/privacy}",
"received_events_url": "https://api.github.com/users/antgr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, there seems to be a problem indeed. Do you mind sharing a link to a notebook so that I can see what's wrong?",
"Is it related to the issue you opened 3 days ago #1704?",
"Yes, its is the same, but I though that was my mistake and closed it. I will provide a link to a notebook to see the code. Here it is: https://colab.research.google.com/drive/1WlhgpgERppzZ5HK6E_iappn_Kap9EKOA",
"Hi, is there any progress on that?",
"Hi, I believe the issue stems from the fact that MNLI has three labels, whereas MRPC only has two. Your model must therefore be instantiated differently:\r\n\r\n```py\r\nfrom transformers import TFBertForSequenceClassification, BertConfig\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-cased\", num_labels=3)\r\nbert_model = TFBertForSequenceClassification.from_pretrained(\"bert-base-cased\", config=config)\r\n```\r\nYou can see how the configurations and models interact together in the [quickstart part of our documentation](https://huggingface.co/transformers/quickstart.html).\r\n\r\nYour model should have a correct loss after specifying the number of labels. We've fixed an issue with the way the `tensorflow_datasets` datasets were handled recently, so I recommend installing directly from source:\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nYou won't need to specify the `label_list` like you have done in your notebook if you install from the master branch.\r\n\r\nIf you want a fully functional script that works will all glue tasks, I recommend taking a look at [examples/run_tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/run_tf_glue.py)\r\n",
"Thank you very much! This is very helpful. I see that in the script is an if ``` if TASK == \"mrpc\": ``` (only for mrpc). Also Is there a similar script for pytorch as well? Seems that this issue is ready for closing.",
"Sure, no problem! That flag is because we're then loading the saved model in pytorch to show how easy it is to switch between pytorch/tensorflow. We're showcasing the predictions made by the PyTorch model that was trained in TensorFlow, but we're only showcasing it for MRPC.\r\n\r\nWe'll update this script sometimes in the next few weeks to evaluate all the tasks.",
"Following your recommendations, now the loss takes value. \r\n\r\nFor your info, there is an issue with roberta. \r\nBut for that could be opened a new issue, if it is not a fault from my side.\r\n```\r\nFine-tuning RoBERTa on MNLI\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss.\r\n\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss\r\n",
"Thanks for letting us know, we'll look into that!",
"Any updates on this one? Still getting the same error when running the `2.2.2` release.\r\n\r\n```\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification/roberta/pooler/dense/bias:0'] when minimizing the loss\r\n````",
"It's not an error, it just means it's not updating those variables. Those variables (pooler) are not used when doing sequence classification.",
"Those weights are part of `trainable_weights` which makes little sense if they are not used when doing sequence classification. Maybe just remove them from trainable weights and avoid any confusion in the future?",
"> It's not an error, it just means it's not updating those variables. Those variables (pooler) are not used when doing sequence classification.\r\n\r\n\r\n\r\n> Those weights are part of `trainable_weights` which makes little sense if they are not used when doing sequence classification. Maybe just remove them from trainable weights and avoid any confusion in the future?\r\n\r\nWe are getting a similar issue here\r\n\r\nhttps://github.com/huggingface/transformers/issues/2256\r\n\r\nWe are using just the Bert base model to get a vector representation from a piece of text .\r\n\r\nSince\r\n\r\n> **pooler_output**: ``tf.Tensor`` of shape ``(batch_size, hidden_size)``\r\n Last layer hidden-state of the first token of the sequence (classification token)\r\n further processed by a Linear layer and a Tanh activation function. The Linear\r\n layer weights are trained from the next sentence prediction (classification)\r\n objective during Bert pretraining. This output is usually *not* a good summary\r\n of the semantic content of the input, you're often better with averaging or pooling\r\n the sequence of hidden-states for the whole input sequence.\r\n\r\nThat would mean that the warning I am getting is just because we are not using the pooler output?",
"I'm getting similar warning on roberta large architecture training on TPU for sequence classification for num_labal = 4: \r\n\r\nIt is mostly the same code as except for data preparation phase and num of labels and the model is roberta-large:\r\nhttps://colab.research.google.com/drive/1yWaLpCWImXZE2fPV0ZYDdWWI8f52__9A#scrollTo=G_chN1Sy3IXn\r\n\r\nI defined my model as : \r\n\r\n`assert len(label_list) == 4`\r\n\r\n`def model_fn():`\r\n` return TFRobertaForSequenceClassification.from_pretrained(\"roberta-large\",config =RobertaConfig.from_pretrained(\"roberta-large\",num_labels=len(label_list) ))`\r\n\r\nThe warning i get are : \r\n\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss.\r\nWARNING:tensorflow:Gradients do not exist for variables ['tf_roberta_for_sequence_classification_2/roberta/pooler/dense/kernel:0', 'tf_roberta_for_sequence_classification_2/roberta/pooler/dense/bias:0'] when minimizing the loss.\r\n\r\nI using : \r\n\r\n- Tensorflow 2.1.0rc1\r\n\r\n- Tranformers but with checkout : git checkout f3386 -b tanda-sequential-finetuning\r\n\r\nI don't find any diference on models between this checkout and the tranfermers version that exist right now.\r\n\r\nAltough the training is decreasing , for exemple:\r\nIteration 20:\r\n\r\nTraining: 20it [02:28, 1.46it/s]\r\nTraining step 20 Accuracy: 0.901562511920929, Training loss: 0.053004395216703415 \r\n\r\nIteration 2040:\r\n\r\nTraining: 2040it [20:16, 1.89it/s]\r\nTraining step 2040 Accuracy: 0.953576922416687, Training loss: 0.02367759868502617",
"i have the same issue. But since i'm using custom loss and need to pass mask from inputs into the loss, i'm not using eager execution. The problem in this case is that model does not train and throw an error \r\n\r\n`ValueError: Variable <tf.Variable 'tf_bert_model/bert/pooler/dense/kernel:0' shape=(768, 768) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.`. \r\n\r\nIf i use simple standard loss with eager execution, i get only warnings and training is possible. I'm training the model using `model.fit()`",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,572 | 1,586 | 1,586 | NONE | null | ## 🐛 Bug
Model I am using (Bert, XLNet....):
** Bert **
bert_model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
Language I am using the model on (English, Chinese....):
** English **
The problem arise when using:
https://medium.com/tensorflow/using-tensorflow-2-for-state-of-the-art-natural-language-processing-102445cda54a
for MNLI
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: ** MNLI **
## To Reproduce
Steps to reproduce the behavior:
1. create bert_validation_matched_dataset
example = list(bert_validation_matched_dataset.__iter__())[0]
example
({'attention_mask': <tf.Tensor: id=5022668, shape=(64, 128), dtype=int32, numpy=
array([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], dtype=int32)>,
'input_ids': <tf.Tensor: id=5022669, shape=(64, 128), dtype=int32, numpy=
array([[ 101, 8147, 1218, ..., 0, 0, 0],
[ 101, 21637, 1103, ..., 0, 0, 0],
[ 101, 1109, 2570, ..., 0, 0, 0],
...,
[ 101, 1109, 7271, ..., 0, 0, 0],
[ 101, 7947, 1685, ..., 0, 0, 0],
[ 101, 1130, 1103, ..., 0, 0, 0]], dtype=int32)>,
'token_type_ids': <tf.Tensor: id=5022670, shape=(64, 128), dtype=int32, numpy=
array([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], dtype=int32)>},
<tf.Tensor: id=5022671, shape=(64,), dtype=int64, numpy=
array([2, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 2, 0, 2, 0, 1,
2, 1, 1, 2, 2, 1, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 2, 0,
2, 2, 1, 2, 2, 2, 2, 2, 2, 0, 2, 0, 0, 2, 1, 2, 2, 1, 2, 0])>)
2. create bert_train_dataset such that
example = list(bert_train_dataset.__iter__())[0]
example
({'attention_mask': <tf.Tensor: id=5023289, shape=(32, 128), dtype=int32, numpy=
array([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], dtype=int32)>,
'input_ids': <tf.Tensor: id=5023290, shape=(32, 128), dtype=int32, numpy=
array([[ 101, 144, 2312, ..., 0, 0, 0],
[ 101, 107, 1192, ..., 0, 0, 0],
[ 101, 149, 10844, ..., 0, 0, 0],
...,
[ 101, 1109, 2350, ..., 0, 0, 0],
[ 101, 1262, 1173, ..., 0, 0, 0],
[ 101, 1105, 1128, ..., 0, 0, 0]], dtype=int32)>,
'token_type_ids': <tf.Tensor: id=5023291, shape=(32, 128), dtype=int32, numpy=
array([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]], dtype=int32)>},
<tf.Tensor: id=5023292, shape=(32,), dtype=int64, numpy=
array([2, 1, 0, 0, 1, 2, 1, 1, 0, 2, 0, 1, 1, 2, 2, 2, 0, 2, 1, 1, 0, 1,
1, 2, 0, 1, 1, 0, 2, 2, 2, 1])>)
3. Fit the model
bert_history = bert_model.fit(bert_train_dataset, epochs=1, validation_data=bert_validation_matched_dataset)
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
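For reference, a quick way to surface the label-count mismatch discussed in the comments (a sketch; `bert_model` refers to the model instantiated above, and the label count is read from `tensorflow_datasets` metadata):

```python
import tensorflow_datasets as tfds

# Number of classes in the data vs. size of the classification head.
num_dataset_labels = tfds.builder("glue/mnli").info.features["label"].num_classes  # 3 for MNLI
num_model_labels = bert_model.config.num_labels  # 2 unless num_labels is set explicitly

print(num_dataset_labels, num_model_labels)
# When these differ, label id 2 falls outside the head's output range and the loss becomes NaN.
```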
## Expected behavior
The expected behavior is that loss should not be nan.
Fine-tuning BERT on MNLI
3/Unknown - 4s 1s/step - loss: nan - accuracy: 0.2031
## Environment
colab
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1727/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1727/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1726/comments | https://api.github.com/repos/huggingface/transformers/issues/1726/events | https://github.com/huggingface/transformers/issues/1726 | 517,329,282 | MDU6SXNzdWU1MTczMjkyODI= | 1,726 | Exceeding max sequence length in Roberta | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, same here with distilgpt2 . Did you solve this?",
"@iedmrc I did not. I'm still waiting response from someone.",
"If your input sequence is too long then it cannot be fed to the model: it will crash as you have seen in your example. \r\n\r\nThere are two ways to handle this: either shorten your sequence by truncating it (manually or via the `max_length` parameter in the `encode` method) or use a model that has a larger input sequence length.",
"Do you mean splitting by truncating? If it is not, why I have to truncate the sequence? Because gpt2 is not only capable of process just 512 or 1024 tokens. If you mean splitting, I use run_lm_finetuning.py and it seems like it already splits sequence to blocks [here](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L84). But still outputs the same warning.",
"I mean that you should take only the 1024 first tokens (if using GPT-2) of your sequence. As you have said, GPT-2 cannot handle more than 1024 tokens in a sequence.\r\n\r\nYes, `run_lm_finetuning.py` outputs this warning as it first converts the entire sequence before splitting it into blocks. It is a warning but not an error. Please let me know if you get the same error (as @aclifton314): `RuntimeError: index out of range` when using the script."
] | 1,572 | 1,575 | 1,575 | NONE | null | ## ❓ Questions & Help
**SYSTEM**
OS: Linux pop-os 5.0.0
Python version: 3.6.8
Torch version: 1.3.0
Transformers version: 2.1.1
I am running this linux VM with the above software versions on a Windows 10 laptop.
<!-- A clear and concise description of the question. -->
I am interested in comparing the embeddings for two pieces of text from the various models in hf-transformers using some metric (like cosine similarity). I am running into an issue where one piece of text exceeds the max sequence length for roberta (899 > 512). I was wondering what the best workaround for this would be?
```python
from transformers import RobertaTokenizer, RobertaModel
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('roberta-large')
seqA = 'Five college students take time off to spend a peaceful vacation in a remote cabin. A book and audio tape is discovered, and its evil is found to be powerful once the incantations are read out loud. The friends find themselves helpless to stop the evil as it takes them one by one, with only one survivor left with the evil dead and desperately tries to fight to live until morning.'
seqA_tokens = torch.tensor(tokenizer.encode(seqA)).unsqueeze(0)
seqA_embeddings = model(seqA_tokens)[0]
seqB = 'Five Michigan State University students—Ash Williams, his girlfriend, Linda; Ashs sister, Cheryl; their friend Scott; and his girlfriend Shelly—vacation at an isolated cabin in rural Tennessee. Approaching the cabin, the group notices the porch swing move on its own but suddenly stop as Scott grabs the doorknob. While Cheryl draws a picture of a clock, the clock stops, and she hears a faint, demonic voice tell her to "Join us". Her hand becomes possessed, turns pale and draws a picture of a book with a demonic face on its cover. Although shaken, she does not mention the incident. When the cellar trapdoor flies open during dinner, Shelly, Linda, and Cheryl remain upstairs as Ash and Scott investigate. They find the Naturan Demanto, a Sumerian version of the Egyptian Book of the Dead, along with an archaeologists tape recorder. Scott and Ash joke around with the items and take them upstairs. Scott plays a tape of incantations that resurrect a demonic entity. Cheryl yells for Scott to turn off the tape recorder, and a tree branch breaks one of the cabins windows. Later that evening, an agitated Cheryl goes into the woods to investigate strange noises. She gets attacked, stripped, pinned to the ground, and raped by demonically possessed trees. When she manages to escape and returns to the cabin bruised and anguished, Ash agrees to take her back into town, only to discover that the bridge to the cabin has been destroyed. Cheryl panics as she realizes that they are now trapped and the demonic entity will not let them leave. Back at the cabin, Ash listens to more of the tape, learning that the only way to kill the entity is to dismember a possessed host. As Linda and Shelly play Spades, Cheryl correctly calls out the cards, succumbs to the entity, and levitates. In a raspy, demonic voice, she demands to know why they disturbed her sleep and threatens to kill everyone. After she falls to the floor, the group checks on her; Cheryl stabs Linda in the ankle and throws Ash into a shelf. Scott knocks Cheryl into the cellar and locks her inside. Everyone fights about what to do. Shelly becomes paranoid upon seeing Cheryls demonic transformation. She lies down in her room but is drawn to look out of her window, where a demon crashes through and attacks her. Shelly becomes a Deadite and scratches Scotts face. Scott throws her into the fireplace, briefly burning Shellys face. As she attacks him again, Scott bisects part of her wrist with a knife and then she bites off her own mangled hand. Scott stabs her in the back with a Sumerian dagger, apparently killing her. When she reanimates, Scott dismembers her with an axe and buries the remains. Shaken by the experience, he leaves to find a way back to town. He shortly returns mortally wounded; he dies while warning Ash that the trees will not let them escape alive. When Ash checks on Linda, he is horrified to find that she has become possessed. She attacks him, but he stabs her with a Sumerian dagger. Unwilling to dismember her, he buries her instead. She revives and attacks him, forcing him to decapitate her with a shovel. Her headless body bleeds on his face as it tries to rape him, but he pushes it off and retreats to the cabin. Back inside, Ash is attacked by Cheryl, who has escaped the cellar, and the reanimated Scott. He shoots Cheryl several times, gouges Scotts eyes out, and pulls out a branch lodged in Scotts stomach, causing him to bleed out. The Deadites attack, bite, and beat Ash with a fire iron. 
Ash throws the Naturan Demanto into the fireplace, and the Deadites stop their attack. As the book burns, Scott, Cheryl, and the book gruesomely decompose. Demonic hands protrude from both corpses, and Cheryls decomposed body falls and splatters in front of Ash, leaving him covered in her and Scotts entrails. He hears a voice say "Join Us" but relaxes when it dies away. As day breaks, Ash stumbles outside. Before he can leave, an unseen entity rapidly trails the forest and runs through the cabin, breaking the cabin doors and attacks him from behind.'
seqB_tokens = torch.tensor(tokenizer.encode(seqB)).unsqueeze(0)
seqB_embeddings = model(seqB_tokens)[0]
#code to compare seqA and seqB embeddings using cosine similarity
```
Here is the error:
```python
Token indices sequence length is longer than the specified maximum sequence length for this model (899 > 512). Running this sequence through the model will result in indexing errors
Traceback (most recent call last):
File "/model_embeddings_test.py", line 48, in <module>
seqB_embeddings = model(seqB_tokens)[0]
File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/transformers/transformers/modeling_bert.py", line 692, in forward
embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/transformers/transformers/modeling_roberta.py", line 60, in forward
position_ids=position_ids)
File "/transformers/transformers/modeling_bert.py", line 170, in forward
position_embeddings = self.position_embeddings(position_ids)
File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 118, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/usr/local/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 1454, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /opt/conda/conda-bld/pytorch_1550796191843/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191
```
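As suggested in the comments above, one workaround is to truncate the input to the model's 512-token window before running it through RoBERTa. A minimal sketch (assuming that losing the tail of `seqB` is acceptable for the comparison; otherwise the text would need to be chunked and the chunk embeddings pooled):

```python
# Sketch: cap the encoded sequence at RoBERTa's maximum input length.
max_len = 512  # also available as tokenizer.max_len
seqB_ids = tokenizer.encode(seqB, max_length=max_len)  # or: tokenizer.encode(seqB)[:max_len]
seqB_tokens = torch.tensor(seqB_ids).unsqueeze(0)
seqB_embeddings = model(seqB_tokens)[0]
```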
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1726/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1725/comments | https://api.github.com/repos/huggingface/transformers/issues/1725/events | https://github.com/huggingface/transformers/issues/1725 | 517,276,150 | MDU6SXNzdWU1MTcyNzYxNTA= | 1,725 | GPT2 text generation repeat | {
"login": "aclifton314",
"id": 53267795,
"node_id": "MDQ6VXNlcjUzMjY3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aclifton314",
"html_url": "https://github.com/aclifton314",
"followers_url": "https://api.github.com/users/aclifton314/followers",
"following_url": "https://api.github.com/users/aclifton314/following{/other_user}",
"gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions",
"organizations_url": "https://api.github.com/users/aclifton314/orgs",
"repos_url": "https://api.github.com/users/aclifton314/repos",
"events_url": "https://api.github.com/users/aclifton314/events{/privacy}",
"received_events_url": "https://api.github.com/users/aclifton314/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Adding **temperature** (in brief, _Temperature is a hyperparameter of LSTMs - and neural networks generally - used to control the randomness of predictions by scaling the logits before applying softmax_) could be an interesting way!\r\n\r\nHere is a modified version of your code _with temperature_:\r\n```\r\nimport torch\r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\nimport torch.nn.functional as F\r\n\r\nsentence = 'Natural language processing tasks are typically approached with'\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\ncontext_tokens = tokenizer.encode(sentence, add_special_tokens=False)\r\ncontext = torch.tensor(context_tokens, dtype=torch.long)\r\nnum_samples = 1\r\ncontext = context.unsqueeze(0).repeat(num_samples, 1)\r\ngenerated = context\r\n\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\nmodel.eval()\r\nlength = 20\r\ntemperature = 0.8 # ADD TEMPERATURE PARAMETER!\r\nwith torch.no_grad():\r\n\tfor jj in range(5):\r\n\t\tfor _ in range(length):\r\n\t\t\toutputs = model(generated)\r\n\t\t\tnext_token_logits = outputs[0][:, -1, :] / (temperature if temperature > 0 else 1.) ### CHANGE THIS ROW\r\n\t\t\tnext_token = torch.multinomial(F.softmax(next_token_logits, dim=-1), num_samples=1) ### CHANGE THIS ROW\r\n\t\t\tgenerated = torch.cat((generated, next_token), dim=1)\r\n\r\n\r\nout = generated\r\nout = out[:, len(context_tokens):].tolist()\r\nfor o in out:\r\n\ttext = tokenizer.decode(o, clean_up_tokenization_spaces=True)\r\n\r\n\tprint(text)\r\n``` \r\n\r\nThe output is the following:\r\n\r\n> a hand for 1-10 minutes. However, we had recently seen that a small set of tasks can be used to process many different languages in a short period of time. We had designed the program from scratch. The purpose of the program was to generate as many variables and as many basic rules as possible. Each rule got its own \"factory\". Each register gets its own \"rules\". The terms used are:<|endoftext|>Intel's #1-Buying Power-Technology\r\n\r\nObviously, you can change **seed** and **temperature** itself too!",
"@TheEdoardo93 Thanks for the feedback! Closing this issue.",
"just have the same issue, anyone knows how to solve it? thx!",
"@drizzt00s Since this posting, HF has put out a fantastic blog about generating text utilizing different sampling methods. I highly recommend it. It's well written!\r\n\r\nhttps://huggingface.co/blog/how-to-generate\r\n\r\nGive that a read and see if it helps you out.\r\n"
] | 1,572 | 1,598 | 1,573 | NONE | null | ## ❓ Questions & Help
**SYSTEM**
OS: Linux pop-os 5.0.0
Python version: 3.6.8
Torch version: 1.3.0
Transformers version: 2.1.1
I am running this linux VM with the above software versions on a Windows 10 laptop.
<!-- A clear and concise description of the question. -->
I am running the following code:
```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel
sentence = 'Natural language processing tasks are typically approached with'
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
context_tokens = tokenizer.encode(sentence, add_special_tokens=False)
context = torch.tensor(context_tokens, dtype=torch.long)
num_samples = 1
context = context.unsqueeze(0).repeat(num_samples, 1)
generated = context
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
length = 20
with torch.no_grad():
for jj in range(5):
for _ in range(length):
outputs = model(generated)
next_token_logits = outputs[0][:, -1, :]
next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(-1)
generated = torch.cat((generated, next_token), dim=1)
out = generated
out = out[:, len(context_tokens):].tolist()
for o in out:
text = tokenizer.decode(o, clean_up_tokenization_spaces=True)
```
What I was noticing was that GPT2 starts to produce repetitive text (see the OUTPUT block below) with this approach. I am not sure of the best way to prevent this from happening and was wondering if others had any ideas? Thank you in advance!
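For context, one common mitigation (a sketch only, not necessarily the best option) is to sample the next token from the softmax distribution, optionally with a temperature and top-k filtering, instead of taking the argmax inside the loop above; the repetitive greedy output that prompted the question follows under **OUTPUT**.

```python
# Sketch: replace the argmax step in the generation loop with top-k sampling.
import torch.nn.functional as F

temperature, top_k = 0.8, 40
next_token_logits = outputs[0][:, -1, :] / temperature
top_logits, top_indices = torch.topk(next_token_logits, top_k, dim=-1)
probs = F.softmax(top_logits, dim=-1)
next_token = top_indices.gather(-1, torch.multinomial(probs, num_samples=1))
generated = torch.cat((generated, next_token), dim=1)
```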
**OUTPUT**
```
a single task, such as a word search, and the task is then repeated. The task is then repeated for each word in the search.
The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1725/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1724/comments | https://api.github.com/repos/huggingface/transformers/issues/1724/events | https://github.com/huggingface/transformers/pull/1724 | 517,252,071 | MDExOlB1bGxSZXF1ZXN0MzM2MzU3OTAw | 1,724 | Fix encode_plus | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good to me, this philosophy (only model-inputs and optional non-model-inputs) seems way better to me than the previous one."
] | 1,572 | 1,574 | 1,574 | MEMBER | null | Add options to control more precisely the output of `encode_plus`.
All the outputs that can't be ingested by a model are deactivated by default.
`token_type_ids` can be ingested by most models and is thus activated by default, but it can be turned off.
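A usage sketch of the intended behaviour (the `return_token_type_ids` flag name is the one used in later released versions and is assumed here; the exact naming may differ in this PR):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

enc = tokenizer.encode_plus("First sentence.", "Second sentence.")
print(enc.keys())  # model inputs only; token_type_ids included by default

enc = tokenizer.encode_plus("First sentence.", "Second sentence.", return_token_type_ids=False)
print(enc.keys())  # token_type_ids omitted on request
```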
Fix #1532 among others | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1724/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1724",
"html_url": "https://github.com/huggingface/transformers/pull/1724",
"diff_url": "https://github.com/huggingface/transformers/pull/1724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1724.patch",
"merged_at": 1574871289000
} |
https://api.github.com/repos/huggingface/transformers/issues/1723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1723/comments | https://api.github.com/repos/huggingface/transformers/issues/1723/events | https://github.com/huggingface/transformers/pull/1723 | 517,223,228 | MDExOlB1bGxSZXF1ZXN0MzM2MzM0NTU4 | 1,723 | Fix #1623 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=h1) Report\n> Merging [#1723](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c8f2712199771e313ab8901698b0886e1c1bf39d?src=pr&el=desc) will **increase** coverage by `1.18%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1723 +/- ##\n==========================================\n+ Coverage 83.95% 85.14% +1.18% \n==========================================\n Files 94 94 \n Lines 13951 13920 -31 \n==========================================\n+ Hits 11713 11852 +139 \n+ Misses 2238 2068 -170\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2N0cmwucHk=) | `95.45% <0%> (-0.91%)` | :arrow_down: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `70.55% <0%> (-0.54%)` | :arrow_down: |\n| [transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.16% <0%> (-0.28%)` | :arrow_down: |\n| [transformers/tests/tokenization\\_utils\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl91dGlsc190ZXN0LnB5) | `96% <0%> (-0.16%)` | :arrow_down: |\n| [transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbmV0LnB5) | `73.28% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `88.42% <0%> (ø)` | :arrow_up: |\n| [transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `84.19% <0%> (+0.05%)` | :arrow_up: |\n| [transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2Rpc3RpbGJlcnQucHk=) | `95.8% <0%> (+0.05%)` | :arrow_up: |\n| [transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX29wZW5haS5weQ==) | `81.75% <0%> (+0.06%)` | :arrow_up: |\n| [transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2JlcnQucHk=) | `87.59% <0%> (+0.07%)` | :arrow_up: |\n| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/1723/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=footer). 
Last update [c8f2712...89d6272](https://codecov.io/gh/huggingface/transformers/pull/1723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,572 | 1,578 | 1,572 | MEMBER | null | Make use of the `--cache_dir` argument in all the examples that include it.
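A sketch of the pattern this applies in the example scripts (variable names follow the ones already used there, e.g. in `run_glue.py`):

```python
# Forward args.cache_dir to every from_pretrained() call instead of ignoring it.
config = config_class.from_pretrained(args.model_name_or_path,
                                      cache_dir=args.cache_dir if args.cache_dir else None)
tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path,
                                            cache_dir=args.cache_dir if args.cache_dir else None)
model = model_class.from_pretrained(args.model_name_or_path, config=config,
                                    cache_dir=args.cache_dir if args.cache_dir else None)
```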
cc @VictorSanh (distillation script) @LysandreJik (all the other scripts) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1723/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1723/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1723",
"html_url": "https://github.com/huggingface/transformers/pull/1723",
"diff_url": "https://github.com/huggingface/transformers/pull/1723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1723.patch",
"merged_at": 1572939391000
} |
https://api.github.com/repos/huggingface/transformers/issues/1722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1722/comments | https://api.github.com/repos/huggingface/transformers/issues/1722/events | https://github.com/huggingface/transformers/issues/1722 | 517,212,978 | MDU6SXNzdWU1MTcyMTI5Nzg= | 1,722 | BUG for XLNet: Low GPU usage and High CPU usage, very low running speed! | {
"login": "Weili-NLP",
"id": 25901705,
"node_id": "MDQ6VXNlcjI1OTAxNzA1",
"avatar_url": "https://avatars.githubusercontent.com/u/25901705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Weili-NLP",
"html_url": "https://github.com/Weili-NLP",
"followers_url": "https://api.github.com/users/Weili-NLP/followers",
"following_url": "https://api.github.com/users/Weili-NLP/following{/other_user}",
"gists_url": "https://api.github.com/users/Weili-NLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Weili-NLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Weili-NLP/subscriptions",
"organizations_url": "https://api.github.com/users/Weili-NLP/orgs",
"repos_url": "https://api.github.com/users/Weili-NLP/repos",
"events_url": "https://api.github.com/users/Weili-NLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/Weili-NLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you provide a minimal code that reproduces the issue so that we may help debug/check on our side?",
"hey @LysandreJik,\r\n\r\nI think I am experiencing the same/similar issue. Below are the sample code, a screenshot showing the high CPU and low GPU usage, and the `requirements.txt` file.\r\n\r\nIn terms of technical setup, I used a docker container deployed with [vast.ai](https://vast.ai/) with RTX 2070S, CUDA 10.1, AMD Ryzen 5 2600X (6c/12t), 16GB. It used the [latest PyTorch image](https://hub.docker.com/r/pytorch/pytorch) as of the time of writing. From my observations the fluctuating CPU usage seems to max out at 600%, hinting at the 6 physical cores. However. I think I have experienced the issue with 2080 Ti and 1080 Ti, as well.\r\n\r\nThank you!\r\n\r\n```python\r\n# -*- coding: utf-8 -*-\r\n\r\nimport torch\r\nfrom transformers import XLNetTokenizer, XLNetLMHeadModel\r\nimport requests\r\n\r\nconvai1_data = requests.get('http://convai.io/2017/data/train_full.json').json()\r\n\r\nfor dial in convai1_data:\r\n utterances = [thread_line['text'] for thread_line in dial['thread']]\r\n dial['utterances'] = utterances\r\n dial['predictions'] = dict()\r\n\r\nconvai1_data = [dial for dial in convai1_data if len(dial['utterances']) > 2]\r\n\r\n# https://github.com/huggingface/transformers/issues/917#issuecomment-525297746\r\ndef xlnet_sent_probability(PADDING_TEXT, text):\r\n tokenize_text = model_tokenizer.tokenize(text)[:512]\r\n tokenize_input = model_tokenizer.tokenize(PADDING_TEXT)[:511] + ['<eod>'] + tokenize_text\r\n\r\n sentence_word_probs = list()\r\n sentence_best_word_probs = list()\r\n best_words = list()\r\n\r\n for max_word_id in range((len(tokenize_input)-len(tokenize_text)), (len(tokenize_input))):\r\n\r\n sent = tokenize_input[:]\r\n\r\n input_ids = torch.tensor([model_tokenizer.convert_tokens_to_ids(sent)])\r\n\r\n perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)\r\n perm_mask[:, :, max_word_id:] = 1.0 \r\n\r\n target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)\r\n target_mapping[0, 0, max_word_id] = 1.0\r\n\r\n if torch.cuda.is_available():\r\n input_ids = input_ids.cuda()\r\n perm_mask = perm_mask.cuda()\r\n target_mapping = target_mapping.cuda()\r\n\r\n with torch.no_grad():\r\n outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)\r\n next_token_logits = outputs[0] # Output has shape [target_mapping.size(0), target_mapping.size(1), config.vocab_size]\r\n predicted_prob = torch.softmax(next_token_logits[0][-1], dim=-1)\r\n\r\n predicted_prob = predicted_prob.detach().cpu().numpy()\r\n\r\n word_id = model_tokenizer.convert_tokens_to_ids([tokenize_input[max_word_id]])[0]\r\n word_prob = predicted_prob[word_id].item()\r\n\r\n sentence_word_probs.append(word_prob)\r\n\r\n return sentence_word_probs\r\n\r\nfor XLNET_MODEL in ['xlnet-base-cased', 'xlnet-large-cased']:\r\n\r\n model_tokenizer = XLNetTokenizer.from_pretrained(XLNET_MODEL)\r\n model = XLNetLMHeadModel.from_pretrained(XLNET_MODEL)\r\n\r\n if torch.cuda.is_available():\r\n model = model.cuda()\r\n\r\n model = model.eval()\r\n\r\n for dial in convai1_data:\r\n utterances = dial['utterances']\r\n\r\n sentences_word_probs = list()\r\n\r\n for u1, u2 in zip(utterances[:-1], utterances[1:]):\r\n sentence_word_probs = xlnet_sent_probability(u1, u2)\r\n\r\n sentences_word_probs.append(sentence_word_probs)\r\n\r\n dial['predictions'][XLNET_MODEL+'_sentences_word_probs'] = 
sentences_word_probs\r\n\r\n```\r\n\r\n[requirements.txt](https://github.com/huggingface/transformers/files/3883834/requirements.txt)\r\n[cpu_and_gpu_usage](https://user-images.githubusercontent.com/8143425/69497875-e9226f80-0ee1-11ea-8098-f81df3972f3e.png)\r\n\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"The same problem. How to solve it?"
] | 1,572 | 1,593 | 1,580 | NONE | null | ## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): XLNet
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: XLNet used as encoder for seq2seq generation
## To Reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: Centos 7.6
* Python version: 3.6
* PyTorch version: 1.1.0
* PyTorch Transformers version (or branch): 1.2.0
* Using GPU ? V100, 16G
* Distributed or parallel setup? Use multiprocessing for multi-GPU training
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1722/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1722/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1721/comments | https://api.github.com/repos/huggingface/transformers/issues/1721/events | https://github.com/huggingface/transformers/pull/1721 | 517,092,802 | MDExOlB1bGxSZXF1ZXN0MzM2MjI4NjUz | 1,721 | Add common getter and setter for input_embeddings & output_embeddings | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Couldn't get `input_embeddings` and `output_embeddings` to work well as python properties with our class inheritance hierarchy for some reason.\r\n\r\nSwitching to simple `get_xxx` and `set_xxx` for now.\r\n\r\nMaybe let's investigate that again in the future if needed.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=h1) Report\n> Merging [#1721](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a62835577a2a93642546858b21372e43c1a1ff8?src=pr&el=desc) will **decrease** coverage by `1.18%`.\n> The diff coverage is `99.17%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1721 +/- ##\n==========================================\n- Coverage 85.14% 83.95% -1.19% \n==========================================\n Files 94 94 \n Lines 13920 13951 +31 \n==========================================\n- Hits 11852 11713 -139 \n- Misses 2068 2238 +170\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/tests/tokenization\\_bert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9iZXJ0X3Rlc3QucHk=) | `89.47% <100%> (-9.2%)` | :arrow_down: |\n| [transformers/tests/modeling\\_common\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2NvbW1vbl90ZXN0LnB5) | `73.67% <100%> (-2.83%)` | :arrow_down: |\n| [transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX2dwdDIucHk=) | `84.14% <100%> (-0.06%)` | :arrow_down: |\n| [transformers/tests/tokenization\\_roberta\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9yb2JlcnRhX3Rlc3QucHk=) | `75.92% <100%> (-16.53%)` | :arrow_down: |\n| [transformers/tests/tokenization\\_xlm\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl94bG1fdGVzdC5weQ==) | `82.22% <100%> (-15.51%)` | :arrow_down: |\n| [transformers/tests/tokenization\\_distilbert\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL3Rva2VuaXphdGlvbl9kaXN0aWxiZXJ0X3Rlc3QucHk=) | `63.63% <100%> (-31.61%)` | :arrow_down: |\n| [transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3JvYmVydGEucHk=) | `71.08% <100%> (+0.53%)` | :arrow_up: |\n| [transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3RyYW5zZm9feGwucHk=) | `75.43% <100%> (+0.27%)` | :arrow_up: |\n| [transformers/tests/modeling\\_auto\\_test.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL3Rlc3RzL21vZGVsaW5nX2F1dG9fdGVzdC5weQ==) | `33.89% <100%> (-64.41%)` | :arrow_down: |\n| [transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL21vZGVsaW5nX3hsbS5weQ==) | `88.42% <100%> (ø)` | :arrow_up: |\n| ... 
and [26 more](https://codecov.io/gh/huggingface/transformers/pull/1721/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=footer). Last update [8a62835...b340a91](https://codecov.io/gh/huggingface/transformers/pull/1721?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This looks very clean, it looks good to me!",
"Thanks @LysandreJik "
] | 1,572 | 1,578 | 1,572 | MEMBER | null | This PR adds two attributes `input_embeddings` and `output_embeddings` as common properties for all the models.
Makes weight tying simpler to write.
Also supersedes #1598.
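A short sketch of how the new accessors are used (method names per the final commit, i.e. plain `get_xxx`/`set_xxx` rather than Python properties):

```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
wte = model.get_input_embeddings()        # the shared nn.Embedding
lm_head = model.get_output_embeddings()   # the LM decoder tied to it
print(wte.weight.shape, lm_head.weight.shape)
print(wte.weight.data_ptr() == lm_head.weight.data_ptr())  # True once weights are tied
```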
cc @rlouf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1721/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1721",
"html_url": "https://github.com/huggingface/transformers/pull/1721",
"diff_url": "https://github.com/huggingface/transformers/pull/1721.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1721.patch",
"merged_at": 1572880913000
} |
https://api.github.com/repos/huggingface/transformers/issues/1720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1720/comments | https://api.github.com/repos/huggingface/transformers/issues/1720/events | https://github.com/huggingface/transformers/issues/1720 | 517,089,410 | MDU6SXNzdWU1MTcwODk0MTA= | 1,720 | run_generation.py Runtime error | {
"login": "Yondijr",
"id": 42034404,
"node_id": "MDQ6VXNlcjQyMDM0NDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/42034404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yondijr",
"html_url": "https://github.com/Yondijr",
"followers_url": "https://api.github.com/users/Yondijr/followers",
"following_url": "https://api.github.com/users/Yondijr/following{/other_user}",
"gists_url": "https://api.github.com/users/Yondijr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yondijr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yondijr/subscriptions",
"organizations_url": "https://api.github.com/users/Yondijr/orgs",
"repos_url": "https://api.github.com/users/Yondijr/repos",
"events_url": "https://api.github.com/users/Yondijr/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yondijr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"### Description\r\nRunning `python run_generation.py \r\n--model_type=gpt2 \r\n--model_name_or_path=gpt2` in my environment works as expected.\r\n\r\nMy suggestions:\r\n- update PyTorch to latest version with `pip install --upgrade torch`\r\n- try to install Transformers' library from branch master, and not use the one installed with PyPi\r\n- try to use the CPU for inference (typically, we'll use GPU/TPU only in training a model, whereas in inference mode we'll use CPU) --> add **--no_cuda** when calling `run_generation.py` script\r\n\r\n### My environment\r\n- __Python__ '3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \\n[GCC 7.3.0]'\r\n- __PyTorch__ v1.3.0\r\n- __HuggingFace's Transformers__ v2.1.1 (installed **not** from PyPi, but with **git clone from master branch**)\r\n- __O.S.__ 'Linux-4.15.0-66-generic-x86_64-with-debian-buster-sid'\r\n- __Device__ tried with CPU and GPU (both work as expected)\r\n\r\n### Output Example\r\n```2019-11-04 14:16:59.671926: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1\r\n2019-11-04 14:16:59.684347: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2019-11-04 14:16:59.684998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: \r\nname: GeForce GTX 980 Ti major: 5 minor: 2 memoryClockRate(GHz): 1.076\r\npciBusID: 0000:01:00.0\r\n2019-11-04 14:16:59.685168: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0\r\n2019-11-04 14:16:59.686072: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0\r\n2019-11-04 14:16:59.686812: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0\r\n2019-11-04 14:16:59.686987: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0\r\n2019-11-04 14:16:59.687996: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0\r\n2019-11-04 14:16:59.688758: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0\r\n2019-11-04 14:16:59.688840: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libcudnn.so.7'; dlerror: libcudnn.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64\r\n2019-11-04 14:16:59.688849: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1641] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. 
Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\r\nSkipping registering GPU devices...\r\n2019-11-04 14:16:59.689057: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n2019-11-04 14:16:59.712088: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz\r\n2019-11-04 14:16:59.712609: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561d09196760 executing computations on platform Host. Devices:\r\n2019-11-04 14:16:59.712622: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\r\n2019-11-04 14:16:59.752521: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1006] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\r\n2019-11-04 14:16:59.753095: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x561d09161190 executing computations on platform CUDA. Devices:\r\n2019-11-04 14:16:59.753111: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): GeForce GTX 980 Ti, Compute Capability 5.2\r\n2019-11-04 14:16:59.753166: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:\r\n2019-11-04 14:16:59.753173: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165] \r\n11/04/2019 14:17:00 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json from cache at /home/vidiemme/.cache/torch/transformers/f2808208f9bec2320371a9f5f891c184ae0b674ef866b79c58177067d15732dd.1512018be4ba4e8726e41b9145129dc30651ea4fec86aa61f4b9f40bf94eac71\r\n11/04/2019 14:17:00 - INFO - transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt from cache at /home/vidiemme/.cache/torch/transformers/d629f792e430b3c76a1291bb2766b0a047e36fae0588f9dbc1ae51decdff691b.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda\r\n11/04/2019 14:17:01 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json from cache at /home/vidiemme/.cache/torch/transformers/4be02c5697d91738003fb1685c9872f284166aa32e061576bbe6aaeb95649fcf.085d5f6a8e7812ea05ff0e6ed0645ab2e75d80387ad55c1ad9806ee70d272f80\r\n11/04/2019 14:17:01 - INFO - transformers.configuration_utils - Model config {\r\n \"attn_pdrop\": 0.1,\r\n \"embd_pdrop\": 0.1,\r\n \"finetuning_task\": null,\r\n \"initializer_range\": 0.02,\r\n \"is_decoder\": false,\r\n \"layer_norm_epsilon\": 1e-05,\r\n \"n_ctx\": 1024,\r\n \"n_embd\": 768,\r\n \"n_head\": 12,\r\n \"n_layer\": 12,\r\n \"n_positions\": 1024,\r\n \"num_labels\": 1,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"output_past\": true,\r\n \"pruned_heads\": {},\r\n \"resid_pdrop\": 0.1,\r\n \"summary_activation\": null,\r\n \"summary_first_dropout\": 0.1,\r\n \"summary_proj_to_labels\": true,\r\n \"summary_type\": \"cls_index\",\r\n \"summary_use_proj\": true,\r\n \"torchscript\": false,\r\n \"use_bfloat16\": false,\r\n \"vocab_size\": 50257\r\n}\r\n\r\n11/04/2019 14:17:01 - INFO - transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-pytorch_model.bin from cache at 
/home/vidiemme/.cache/torch/transformers/4295d67f022061768f4adc386234dbdb781c814c39662dd1662221c309962c55.778cf36f5c4e5d94c8cd9cefcf2a580c8643570eb327f0d4a1f007fab2acbdf1\r\n11/04/2019 14:17:12 - INFO - __main__ - Namespace(device=device(type='cuda'), length=20, model_name_or_path='gpt2', model_type='gpt2', n_gpu=1, no_cuda=False, num_samples=1, padding_text='', prompt='', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, top_k=0, top_p=0.9, xlm_lang='')\r\nModel prompt >>> Hi. I'm Edward and I'm a journalist.\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:00<00:00, 27.07it/s]\r\n This writer lost her health and is now ill. But I want to show you something. That's\r\nModel prompt >>> \r\n```",
"@Yondijr could you try to update your Torch version to a more recent version than 1.0 and let us know if it fixes the issue?",
"@LysandreJik \r\nYes!\r\nHowever it **ONLY** works with torch 1.3 with GPU support. Interestingly even with the 1.3-cpu version I got the same error.\r\nThanks for the tip :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,572 | 1,578 | 1,578 | NONE | null | ## ❓ Questions & Help
**Hi everyone,**
I'm currently trying to execute the run_generation example script.
However, I cannot get it to work. I tried reinstalling and played around with the parameters, but the error I get is always the same:
**I'm running the example from the website:**
_python run_generation.py \
--model_type=gpt2 \
--model_name_or_path=gpt2_
**It allocates resources etc. However when it is starting to generate I get this error:**
_Namespace(device=device(type='cuda'), length=20, model_name_or_path='gpt2', model_type='gpt2', n_gpu=1, no_cuda=False, num_samples=1, padding_text='', prompt='', repetition_penalty=1.0, seed=42, stop_token=None, temperature=1.0, top_k=0, top_p=0.9, xlm_lang='')
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "run_generation.py", line 264, in <module>
main()
File "run_generation.py", line 249, in main
device=args.device,
File "run_generation.py", line 142, in sample_sequence
outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet/CTRL (cached hidden-states)
File "/home/gstein/myPython/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/gstein/Executables/transformers/transformers/modeling_gpt2.py", line 533, in forward
head_mask=head_mask)
File "/home/gstein/myPython/lib/python3.5/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/gstein/Executables/transformers/transformers/modeling_gpt2.py", line 373, in forward
input_ids = input_ids.view(-1, input_shape[-1])
RuntimeError: cannot reshape tensor of 0 elements into shape [-1, 0]_
## Environment
* OS: Ubuntu 16.04.3
* Python version: 3.5.2
* Torch version 1.0
* cloned from Git 4. November 2019:
* Using GPU: Yes/No
## Thoughts?
Does anyone else encounter this behavior?
Or even better: does somebody have a solution for it?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1720/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1719/comments | https://api.github.com/repos/huggingface/transformers/issues/1719/events | https://github.com/huggingface/transformers/issues/1719 | 517,036,522 | MDU6SXNzdWU1MTcwMzY1MjI= | 1,719 | Can we fine tune GPT2 using multiple inputs? | {
"login": "tyu0912",
"id": 24836159,
"node_id": "MDQ6VXNlcjI0ODM2MTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/24836159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyu0912",
"html_url": "https://github.com/tyu0912",
"followers_url": "https://api.github.com/users/tyu0912/followers",
"following_url": "https://api.github.com/users/tyu0912/following{/other_user}",
"gists_url": "https://api.github.com/users/tyu0912/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyu0912/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyu0912/subscriptions",
"organizations_url": "https://api.github.com/users/tyu0912/orgs",
"repos_url": "https://api.github.com/users/tyu0912/repos",
"events_url": "https://api.github.com/users/tyu0912/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyu0912/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can merge them in the same document but add a token that indicates it's the beginning and end of each article. This will allow the generator to understand the concept of a document being a group of text and there are many of those groups in your document.\r\n\r\n**Example (assuming one line is one document):**\r\n\r\n```\r\n<sod> Today was a good day! <eod>\r\n<sod> Tomorrow is going to be even better <eod>\r\n```\r\n\r\nThen at the text generation step (e.g. using ```run_generation.py```), you can specify a ```stop_token``` argument if you want to generate one document at a time.",
"Fantastic! Thank you for sharing @enzoampil 👍 "
] | 1,572 | 1,573 | 1,573 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
Hello. I don't think this is possible from looking at the code, but just to make sure: is it possible to use multiple input texts to fine-tune GPT-2? For example, I have 5 news articles. Do I submit them all in one document, or should I separate them out? I feel like the latter makes more sense, especially if the articles are all written differently.
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1718/comments | https://api.github.com/repos/huggingface/transformers/issues/1718/events | https://github.com/huggingface/transformers/issues/1718 | 517,004,526 | MDU6SXNzdWU1MTcwMDQ1MjY= | 1,718 | Hello, how to upload a .ckpt file in TFBertForSequenceClassification? | {
"login": "hecongqing",
"id": 19923817,
"node_id": "MDQ6VXNlcjE5OTIzODE3",
"avatar_url": "https://avatars.githubusercontent.com/u/19923817?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hecongqing",
"html_url": "https://github.com/hecongqing",
"followers_url": "https://api.github.com/users/hecongqing/followers",
"following_url": "https://api.github.com/users/hecongqing/following{/other_user}",
"gists_url": "https://api.github.com/users/hecongqing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hecongqing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hecongqing/subscriptions",
"organizations_url": "https://api.github.com/users/hecongqing/orgs",
"repos_url": "https://api.github.com/users/hecongqing/repos",
"events_url": "https://api.github.com/users/hecongqing/events{/privacy}",
"received_events_url": "https://api.github.com/users/hecongqing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What model is referring to this .ckpt file? If the model is BERT or ALBERT, you can:\r\n- use the `convert_X_original_tf_checkpoint_to_pytorch --tf_checkpoint_path=dir/model.ckpt-xxx`, where _X_ is albert or bert\r\n- load the .pt model into Transformers as usual\r\n\r\n> ## Questions & Help\r\n> I find the TFBertForSequenceClassification only can upload .h5 file",
"@TheEdoardo93 if load the .pt model into Transformers as usual. I think it will be pytorch model.\r\nBut I want use the TFBertForSequenceClassification to train with the model.fit, and with the mutil gpu with tf Strategy.\r\n\r\nWhat can I do?\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,572 | 1,584 | 1,584 | NONE | null | ## ❓ Questions & Help
<!-- A clear and concise description of the question. -->
I find that TFBertForSequenceClassification can only load a .h5 file | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1718/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1717/comments | https://api.github.com/repos/huggingface/transformers/issues/1717/events | https://github.com/huggingface/transformers/pull/1717 | 517,002,397 | MDExOlB1bGxSZXF1ZXN0MzM2MTU2NDYx | 1,717 | Retaining unknown token behaver consistency in tokenizer for BERT and XLNET | {
"login": "ziliwang",
"id": 13744942,
"node_id": "MDQ6VXNlcjEzNzQ0OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13744942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziliwang",
"html_url": "https://github.com/ziliwang",
"followers_url": "https://api.github.com/users/ziliwang/followers",
"following_url": "https://api.github.com/users/ziliwang/following{/other_user}",
"gists_url": "https://api.github.com/users/ziliwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziliwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziliwang/subscriptions",
"organizations_url": "https://api.github.com/users/ziliwang/orgs",
"repos_url": "https://api.github.com/users/ziliwang/repos",
"events_url": "https://api.github.com/users/ziliwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziliwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thanks for this but this was the expected behavior for the Bert tokenizer."
] | 1,572 | 1,572 | 1,572 | CONTRIBUTOR | null | I found that the BERT tokenizer and the XLNet tokenizer behave differently.
For example, `"His name is 燚"`
In BERT, it's tokenized as: `['his', 'name', 'is', '[UNK]']`,
but in XLNet, it's tokenized as: `['▁His', '▁name', '▁is', '▁', '燚']`
This difference may work against a uniform training framework for BERT and XLNet, and complicate length-based processing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1717/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1717",
"html_url": "https://github.com/huggingface/transformers/pull/1717",
"diff_url": "https://github.com/huggingface/transformers/pull/1717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1717.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1716/comments | https://api.github.com/repos/huggingface/transformers/issues/1716/events | https://github.com/huggingface/transformers/pull/1716 | 516,960,371 | MDExOlB1bGxSZXF1ZXN0MzM2MTIzMzEw | 1,716 | add qa and result | {
"login": "pohanchi",
"id": 34079344,
"node_id": "MDQ6VXNlcjM0MDc5MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/34079344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pohanchi",
"html_url": "https://github.com/pohanchi",
"followers_url": "https://api.github.com/users/pohanchi/followers",
"following_url": "https://api.github.com/users/pohanchi/following{/other_user}",
"gists_url": "https://api.github.com/users/pohanchi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pohanchi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pohanchi/subscriptions",
"organizations_url": "https://api.github.com/users/pohanchi/orgs",
"repos_url": "https://api.github.com/users/pohanchi/repos",
"events_url": "https://api.github.com/users/pohanchi/events{/privacy}",
"received_events_url": "https://api.github.com/users/pohanchi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"this is a template for albert qa(squad). all modification is on the directory \"transformers/new_template \", \"examples/qa_albert/template\" the true tree structure you can refer to https://github.com/pohanchi/huggingface_albert, test squad 1.1 on albert_base and albert_xlarge is very close to the original paper ",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=h1) Report\n> Merging [#1716](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8a62835577a2a93642546858b21372e43c1a1ff8?src=pr&el=desc) will **decrease** coverage by `5.09%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #1716 +/- ##\n=========================================\n- Coverage 85.14% 80.04% -5.1% \n=========================================\n Files 94 99 +5 \n Lines 13920 14806 +886 \n=========================================\n Hits 11852 11852 \n- Misses 2068 2954 +886\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [transformers/new\\_template/tokenization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `0% <0%> (ø)` | |\n| [transformers/new\\_template/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9tb2RlbGluZ19hbGJlcnQucHk=) | `0% <0%> (ø)` | |\n| [transformers/new\\_template/optimization\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9vcHRpbWl6YXRpb25fYWxiZXJ0LnB5) | `0% <0%> (ø)` | |\n| [transformers/new\\_template/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `0% <0%> (ø)` | |\n| [transformers/new\\_template/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/1716/diff?src=pr&el=tree#diff-dHJhbnNmb3JtZXJzL25ld190ZW1wbGF0ZS9fX2luaXRfXy5weQ==) | `0% <0%> (ø)` | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=footer). Last update [8a62835...38bba06](https://codecov.io/gh/huggingface/transformers/pull/1716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi @pohanchi, great work on your ALBERT implementation! We're in the process of merging #1683 which already implements ALBERT, and would love your opinion on the implementation.",
"Because I don’t know how to upload the file to your team server(\nhttps://amazonS3....), so something roughly method I use is that using\ndrive to store some big file (pytorch model state dict)and let people\ndownload by themselves from drive. I just want to know that whether to do\nthat. By the way, I am a beginner of coderev .what is the converge mean ?\nAnd the bigger is good or other XD.\n\nOn Tue, Nov 5, 2019 at 00:44 Lysandre Debut <[email protected]>\nwrote:\n\n> Hi @pohanchi <https://github.com/pohanchi>, great work on your ALBERT\n> implementation! We're in the process of merging #1683\n> <https://github.com/huggingface/transformers/pull/1683> which already\n> implements ALBERT, and would love your opinion on the implementation.\n>\n> —\n> You are receiving this because you were mentioned.\n>\n>\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/1716?email_source=notifications&email_token=AIEAE4FDQIOBGB3ZCSXYFJ3QSBGOTA5CNFSM4JIPCM6KYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEC75BDQ#issuecomment-549441678>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIEAE4EGLCYTJRVDCKNDRRDQSBGOTANCNFSM4JIPCM6A>\n> .\n>\n",
"My Repository is using lamb optimizer to train albert model. If you use lamb optimizer, make sure adam_epsilon set up to 1e-12 and weight decay set to 0.1 this will help model become stable in training",
"also the learning rate need to small than 5e-5 \r\nif large, sometimes performance will drop. \r\n",
"That's nice to know, thanks @pohanchi ",
"Closing this since ALBERT is now in the library."
] | 1,572 | 1,575 | 1,575 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1716",
"html_url": "https://github.com/huggingface/transformers/pull/1716",
"diff_url": "https://github.com/huggingface/transformers/pull/1716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1716.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/1715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1715/comments | https://api.github.com/repos/huggingface/transformers/issues/1715/events | https://github.com/huggingface/transformers/pull/1715 | 516,953,300 | MDExOlB1bGxSZXF1ZXN0MzM2MTE3NjQ3 | 1,715 | Retaining unknown token behaver consistency in tokenizer for BERT and XLNET | {
"login": "ziliwang",
"id": 13744942,
"node_id": "MDQ6VXNlcjEzNzQ0OTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13744942?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ziliwang",
"html_url": "https://github.com/ziliwang",
"followers_url": "https://api.github.com/users/ziliwang/followers",
"following_url": "https://api.github.com/users/ziliwang/following{/other_user}",
"gists_url": "https://api.github.com/users/ziliwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ziliwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ziliwang/subscriptions",
"organizations_url": "https://api.github.com/users/ziliwang/orgs",
"repos_url": "https://api.github.com/users/ziliwang/repos",
"events_url": "https://api.github.com/users/ziliwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ziliwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,572 | 1,572 | 1,572 | CONTRIBUTOR | null | I found that the BERT tokenizer and the XLNet tokenizer behave differently.
For example, `"His name is 燚"`
In BERT, it's tokenized as: `['his', 'name', 'is', '[UNK]']`,
but in XLNet, it's tokenized as: `['▁His', '▁name', '▁is', '▁', '燚']`
This difference may work against a uniform training framework for BERT and XLNet, and complicate length-based processing. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1715/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/1715",
"html_url": "https://github.com/huggingface/transformers/pull/1715",
"diff_url": "https://github.com/huggingface/transformers/pull/1715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/1715.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/1714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1714/comments | https://api.github.com/repos/huggingface/transformers/issues/1714/events | https://github.com/huggingface/transformers/issues/1714 | 516,888,508 | MDU6SXNzdWU1MTY4ODg1MDg= | 1,714 | How to train from scratch | {
"login": "anandhperumal",
"id": 12907396,
"node_id": "MDQ6VXNlcjEyOTA3Mzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/12907396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anandhperumal",
"html_url": "https://github.com/anandhperumal",
"followers_url": "https://api.github.com/users/anandhperumal/followers",
"following_url": "https://api.github.com/users/anandhperumal/following{/other_user}",
"gists_url": "https://api.github.com/users/anandhperumal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anandhperumal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anandhperumal/subscriptions",
"organizations_url": "https://api.github.com/users/anandhperumal/orgs",
"repos_url": "https://api.github.com/users/anandhperumal/repos",
"events_url": "https://api.github.com/users/anandhperumal/events{/privacy}",
"received_events_url": "https://api.github.com/users/anandhperumal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you want to randomly initialize a model simply initialize it via its constructor rather than from the `from_pretrained` method:\r\n\r\n```py\r\nfrom transformers import GPT2Config, GPT2Model\r\n\r\nconfig = GPT2Config() # define your configuration here\r\nmodel = GPT2Model(config) # Initialize your model from your config\r\n```",
"@LysandreJik Thanks for the input.\r\nI did something like this \r\n```\r\n config = GPT2Config(vocab_size)\r\n model = GPT2Model(config)\r\n```\r\nApart from vocab size, I'm keeping everything else to default value how do I make sure that it doesn't have any pre-trained value?",
"The values are only loaded if your instantiate the model by calling ˋGPT2Model.from_pretrained`, so you’re fine 🙂",
"@rlouf Thanks"
] | 1,572 | 1,573 | 1,573 | NONE | null | I would like to train the model from scratch.
How can I drop the pretrained weights while using the same architecture as GPT-2? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1714/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1714/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1713/comments | https://api.github.com/repos/huggingface/transformers/issues/1713/events | https://github.com/huggingface/transformers/issues/1713 | 516,887,054 | MDU6SXNzdWU1MTY4ODcwNTQ= | 1,713 | How to mask lm_labels and compute loss? --- Finetune gpt2: masking the lm_labels with '-1' and padding increase the perplexity a lot! | {
"login": "fabrahman",
"id": 22799593,
"node_id": "MDQ6VXNlcjIyNzk5NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/22799593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabrahman",
"html_url": "https://github.com/fabrahman",
"followers_url": "https://api.github.com/users/fabrahman/followers",
"following_url": "https://api.github.com/users/fabrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/fabrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabrahman/subscriptions",
"organizations_url": "https://api.github.com/users/fabrahman/orgs",
"repos_url": "https://api.github.com/users/fabrahman/repos",
"events_url": "https://api.github.com/users/fabrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabrahman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Can any one help regarding this question? I really appreciate that. Is there any example of finetuning and masking the context and padded indices? I am following [this](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313), but I get a uge perplexity compared to when I don't mask.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,572 | 1,579 | 1,579 | NONE | null | Hi,
I wanted to finetune gpt2 in a seq2seq format. For that I followed the same approach used for convAI2 explained [here](https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313).
I followed two steps, and each step increased the perplexity a lot.
1) First, I masked part of the lm_labels (with '-1') that I didn't want to calculate loss for. Doing that, I realized the ppl increased, but then I also tried the second step below.
2) I padded my input to a maximum length of 256 and masked the padded indices as well. Doing that increases the validation ppl (while training), starting from ```perplexity = tensor(6.8035e+10)``` rather than ```perplexity = tensor(17.67)``` when I don't mask lm_labels for the padded indices and the parts of the input sequence that I don't want to compute loss for.
This ppl difference is huge, and I am not sure if I have to take care of something else besides just masking lm_labels with '-1'.
Here is how I modify the ```run_lm_finetuning.py``` script to pad my samples to the same length:
```
def pad(x, padding, padding_length=128):
    return x + [padding] * (padding_length - len(x))

class TextDataset(Dataset):
    def __init__(self, tokenizer, file_path='train', block_size=512):
        assert os.path.isfile(file_path)
        directory, filename = os.path.split(file_path)
        cached_features_file = os.path.join(directory, 'cached_lm_padded' + str(block_size) + '_' + filename)

        if os.path.exists(cached_features_file):
            logger.info("Loading features from cached file %s", cached_features_file)
            with open(cached_features_file, 'rb') as handle:
                self.examples = pickle.load(handle)
        else:
            logger.info("Creating features from dataset file at %s", directory)
            self.examples = []
            with open(file_path, encoding="utf-8") as f:
                text = f.readlines()
            text = [i.strip() for i in text]
            for sent in text:
                tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sent))
                if len(tokenized_text) > block_size:  # Truncate in block of block_size
                    self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[:block_size]))
                else:
                    self.examples.append(pad(tokenized_text, tokenizer.convert_tokens_to_ids('<pad>'), block_size))
            logger.info("Saving features into cached file %s", cached_features_file)
            with open(cached_features_file, 'wb') as handle:
                pickle.dump(self.examples, handle, protocol=pickle.HIGHEST_PROTOCOL)

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, item):
        return torch.tensor(self.examples[item])
```
And this is how I mask the lm_labels, given labels = input_ids, with start and end being my marker tokens: all indices before start and after end (exclusive) are masked:
```
def mask_intervals(labels, start, end):
    x = labels.clone()
    for i in range(x.shape[0]):
        start_index = (labels[i, :] == start).nonzero()
        end_index = (labels[i, :] == end).nonzero()
        x[i, :start_index + 1] = -1
        x[i, end_index:] = -1
    return x
```
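For reference, a minimal sketch of what the '-1' labels do during loss computation (made-up tensors, assuming the GPT-2 LM head of that era reduced the loss with `CrossEntropyLoss(ignore_index=-1)`; the one-token shift the model applies internally is omitted here):
```
import torch
import torch.nn.functional as F

vocab_size = 50257
logits = torch.randn(1, 6, vocab_size)   # hypothetical (batch, seq_len, vocab) scores
labels = torch.randint(0, vocab_size, (1, 6))
labels[0, :3] = -1                       # context positions masked with the ignore index
labels[0, -1] = -1                       # a padded position masked as well

loss = F.cross_entropy(
    logits.view(-1, vocab_size),
    labels.view(-1),
    ignore_index=-1,                     # masked positions add nothing to the loss
)
perplexity = torch.exp(loss)             # averaged over the two unmasked tokens only
```
So masking with '-1' only removes those positions from the average; the remaining target tokens still have to be predicted well for the perplexity to stay low.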
I want to know if masking with '-1' suffices or should I change some other part of the code regarding loss computation? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1713/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1712/comments | https://api.github.com/repos/huggingface/transformers/issues/1712/events | https://github.com/huggingface/transformers/issues/1712 | 516,886,153 | MDU6SXNzdWU1MTY4ODYxNTM= | 1,712 | Scheduler documentation blocks subtly wrong | {
"login": "DomHudson",
"id": 10864294,
"node_id": "MDQ6VXNlcjEwODY0Mjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/10864294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DomHudson",
"html_url": "https://github.com/DomHudson",
"followers_url": "https://api.github.com/users/DomHudson/followers",
"following_url": "https://api.github.com/users/DomHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/DomHudson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DomHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DomHudson/subscriptions",
"organizations_url": "https://api.github.com/users/DomHudson/orgs",
"repos_url": "https://api.github.com/users/DomHudson/repos",
"events_url": "https://api.github.com/users/DomHudson/events{/privacy}",
"received_events_url": "https://api.github.com/users/DomHudson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, happy to welcome a PR on that if you want to fix this",
"Closed by https://github.com/huggingface/transformers/pull/1737"
] | 1,572 | 1,572 | 1,572 | NONE | null | ## Summary
Hi, I noticed that in `optimization.py` the way many of the schedulers describe the learning rate is slightly wrong.
For example, `WarmupLinearSchedule` says
```
Linearly increases learning rate from 0 to 1 over `warmup_steps` training steps.
Linearly decreases learning rate from 1. to 0. over remaining `t_total - warmup_steps` steps.
```
It actually multiplies the learning rate (set in the optimizer) by this amount. I understand that this is probably implied, but I think the documentation could be a bit clearer on this aspect!
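For clarity, a minimal sketch of that multiplicative behaviour (assuming the transformers v2-era `WarmupLinearSchedule`, which subclasses PyTorch's `LambdaLR`; the model and hyper-parameters below are made up):
```
import torch
from transformers import WarmupLinearSchedule

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)   # base learning rate lives here
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=100, t_total=1000)

for step in range(1000):
    optimizer.step()
    scheduler.step()
    # scheduler.get_lr()[0] == 5e-5 * factor, where factor ramps linearly from 0 to 1
    # over the first 100 steps and then back down to 0 -- the schedule is a multiplier
    # on the base learning rate, never an absolute learning rate of 1.0.
```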
Many thanks,
Dom
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1712/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/1711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/1711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/1711/comments | https://api.github.com/repos/huggingface/transformers/issues/1711/events | https://github.com/huggingface/transformers/issues/1711 | 516,880,276 | MDU6SXNzdWU1MTY4ODAyNzY= | 1,711 | transformers module doesn't work with torch compiled on Cuda 10.0? | {
"login": "ehsan-soe",
"id": 12740904,
"node_id": "MDQ6VXNlcjEyNzQwOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/12740904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ehsan-soe",
"html_url": "https://github.com/ehsan-soe",
"followers_url": "https://api.github.com/users/ehsan-soe/followers",
"following_url": "https://api.github.com/users/ehsan-soe/following{/other_user}",
"gists_url": "https://api.github.com/users/ehsan-soe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ehsan-soe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ehsan-soe/subscriptions",
"organizations_url": "https://api.github.com/users/ehsan-soe/orgs",
"repos_url": "https://api.github.com/users/ehsan-soe/repos",
"events_url": "https://api.github.com/users/ehsan-soe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ehsan-soe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Solved! closing."
] | 1,572 | 1,572 | 1,572 | NONE | null | Hi,
I have been using this repo for a long time and it was working okay. Very recently it gave me a ```ModuleNotFoundError: No module named 'transformers'``` error and I had to reinstall it from source.
It seems like during installation my torch version (or the CUDA it is compiled against) got updated.
I reinstalled PyTorch with cudatoolkit=10.0 since I have CUDA 10.0 and cannot update it right now.
But after doing that, I realized I am not able to import transformers again.
Any idea what should I do?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/1711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/1711/timeline | completed | null | null |