url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
β | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
β | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/2612 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2612/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2612/comments | https://api.github.com/repos/huggingface/transformers/issues/2612/events | https://github.com/huggingface/transformers/issues/2612 | 553,675,038 | MDU6SXNzdWU1NTM2NzUwMzg= | 2,612 | Error in fine tuning Roberta for QA | {
"login": "houdaM97",
"id": 43147098,
"node_id": "MDQ6VXNlcjQzMTQ3MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43147098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/houdaM97",
"html_url": "https://github.com/houdaM97",
"followers_url": "https://api.github.com/users/houdaM97/followers",
"following_url": "https://api.github.com/users/houdaM97/following{/other_user}",
"gists_url": "https://api.github.com/users/houdaM97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/houdaM97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/houdaM97/subscriptions",
"organizations_url": "https://api.github.com/users/houdaM97/orgs",
"repos_url": "https://api.github.com/users/houdaM97/repos",
"events_url": "https://api.github.com/users/houdaM97/events{/privacy}",
"received_events_url": "https://api.github.com/users/houdaM97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Please [format your code correctly](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks). Now it is very unreadable.",
"Thank you for mentioning this, it's done!",
"Something seems to have gone wrong. Can you check? There is a line \"and here is my training script:\" that is in the code block but shouldn't be. Also the last line.",
"yeah i corrected it",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
Hi,
I tried to fine tune RobertaModel for question answering task, i implemented TFRobertaForQuestionAnswering but when i run the training script i got this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[0,11] = 1 is not in [0, 1) [Op:ResourceGather] name: tf_roberta_for_question_answering/tf_roberta_model/roberta/embeddings/token_type_embeddings/embedding_lookup/
Here is my class TFRobertaForQuestionAnswering
```python
from transformers import TFRobertaPreTrainedModel, RobertaConfig, TFRobertaModel
import tensorflow as tf
class TFRobertaForQuestionAnswering(TFRobertaPreTrainedModel):
config_class = RobertaConfig
#pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_MAP
base_model_prefix = "roberta"
def __init__(self, config, *inputs, **kwargs):
config.vocab_size = config.vocab_size + 2
super().__init__(config, *inputs, **kwargs)
def get_initializer(initializer_range=0.02):
"""Creates a `tf.initializers.truncated_normal` with the given range.
Args:
initializer_range: float, initializer range for stddev.
Returns:
TruncatedNormal initializer with stddev = `initializer_range`.
"""
return tf.keras.initializers.TruncatedNormal(stddev=initializer_range)
self.num_labels = config.num_labels
self.roberta = TFRobertaModel(config)
self.qa_outputs = tf.keras.layers.Dense(
config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs")
def call(
self,
input_ids,
start_positions=None,
end_positions=None,
** kwargs
):
outputs = self.roberta(
input_ids,
** kwargs
)
sequence_output = outputs[0]
logits = self.qa_outputs(sequence_output)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, -1)
end_logits = tf.squeeze(end_logits, -1)
outputs = (start_logits, end_logits,) + outputs[2:]
if start_positions is not None and end_positions is not None:
# If we are on multi-GPU, split add a dimension
if len(tf.size(start_positions)) > 1:
start_positions = tf.squeeze(start_positions, -1)
if len(tf.size(end_positions)) > 1:
end_positions = tf.squeeze(end_positions, -1)
# sometimes the start/end positions are outside our model inputs, we ignore these terms
#ignored_index = tf.size(start_logits, 1)
#with tf.Session() as sess:
# scalar = ignored_index.eval()
#tf.clip_by_value(start_positions, 0, ignored_index)
#tf.clip_by_value(end_positions, 0, ignored_index)
loss_fct = tf.keras.losses.SparseCategoricalCrossentropy(from_logits = True)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2
outputs = (total_loss,) + outputs
return outputs # (loss), start_logits, end_logits, (hidden_states), (attentions)
```
and here is my training script:
```python
import tensorflow as tf
from transformers import squad_convert_examples_to_features, SquadV2Processor, RobertaTokenizer
from modelRoberta import TFRobertaForQuestionAnswering
from pathlib import Path
import six
import numpy as np
MAX_QUERY_LENGTH = 64
MAX_SEQ_LENGTH = 384
MAX_DOC_STRIDE = 128
MAX_ANSWER_LENGTH = 64
N_TOK_FOR_CONTEXT = 20
def get_shape_list(tensor):
shape = tensor.shape.as_list()
non_static_indexes = []
for (index, dim) in enumerate(shape):
if dim is None:
non_static_indexes.append(index)
if not non_static_indexes:
return shape
dyn_shape = tf.shape(tensor)
for index in non_static_indexes:
shape[index] = dyn_shape[index]
return shape
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = TFRobertaForQuestionAnswering.from_pretrained("roberta-base")
squad = SquadV2Processor()
train_examples = squad.get_train_examples(data_dir=Path(__file__).parent, filename="train.json")
test_examples = squad.get_dev_examples(data_dir=Path(__file__).parent, filename="test.json")
train_dataset = squad_convert_examples_to_features(train_examples[:1], tokenizer=tokenizer, max_seq_length=MAX_SEQ_LENGTH,
doc_stride=MAX_DOC_STRIDE, max_query_length=MAX_QUERY_LENGTH,
is_training=True, return_dataset='tf')
test_dataset = squad_convert_examples_to_features(test_examples[:1], tokenizer=tokenizer, max_seq_length=MAX_SEQ_LENGTH,
doc_stride=MAX_DOC_STRIDE, max_query_length=MAX_QUERY_LENGTH,
is_training=False, return_dataset='tf')
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
batch_size = 1
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
epochs = 3
for epoch in range(epochs):
print('Start of epoch %d' % (epoch,))
for step, batch in enumerate(train_dataset):
with tf.GradientTape() as tape:
inputs_batch = {
"inputs_ids": batch[0]['input_ids'],
"token_type_ids": batch[0]['token_type_ids'],
}
start_position = batch[1]['start_position']
end_position = batch[1]['end_position']
seq_length = get_shape_list(inputs_batch['inputs_ids'])[1]
start_positions = tf.one_hot(start_position, on_value=1.0, off_value=0.0, depth=seq_length, dtype=tf.float32)
end_positions = tf.one_hot(end_position,on_value=1.0, off_value=0.0, depth=seq_length, dtype=tf.float32)
outputs = model(inputs_batch['inputs_ids'], token_type_ids= inputs_batch['token_type_ids'], training=True) # Logits for this minibatch
start_logits, end_logits = outputs[:2]
start_logits = tf.nn.log_softmax(start_logits, axis=-1)
end_logits = tf.nn.log_softmax(end_logits, axis=-1)
seq_height = get_shape_list(inputs_batch['inputs_ids'])[0]
start_logits = tf.keras.backend.reshape(start_logits, shape=(seq_height*seq_length, 1))
end_logits = tf.keras.backend.reshape(end_logits, shape=(seq_height * seq_length, 1))
start_positions = tf.keras.backend.reshape(start_positions, shape=(seq_height * seq_length, 1))
end_positions = tf.keras.backend.reshape(end_positions, shape=(seq_height * seq_length, 1))
start_loss = -tf.reduce_mean(tf.reduce_sum(start_positions * start_logits, axis=-1))
end_loss = -tf.reduce_mean(tf.reduce_sum(end_positions * end_logits, axis=-1))
loss_value = (start_loss + end_loss) / 2
grads = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
if step % 200 == 0:
print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
print('Seen so far: %s samples' % ((step + 1) * batch_size))
context = "The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ('Norman' comes from 'Norseman') raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries."
question = "In what country is Normandy located?"
en_plus = tokenizer.encode_plus(context, question, add_special_tokens=True)
en = en_plus['input_ids']
token_type_ids = en_plus['token_type_ids']
input_ids = tf.constant([en])
segments_tensors = tf.constant([token_type_ids])
outputs = model(input_ids)
start_scores, end_scores = outputs[:2]
ss = tf.argmax(start_scores.numpy()[0]).numpy()
es = tf.argmax(end_scores.numpy()[0]).numpy()
answer = tokenizer.decode(en[ss: es+1], clean_up_tokenization_spaces=True)
print(ss)
print(es)
print(answer)
model.save_pretrained('./save/')
```
Thanks in advance for helping me. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2612/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2612/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2611 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2611/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2611/comments | https://api.github.com/repos/huggingface/transformers/issues/2611/events | https://github.com/huggingface/transformers/issues/2611 | 553,620,840 | MDU6SXNzdWU1NTM2MjA4NDA= | 2,611 | Finetuning my language model | {
"login": "paulthemagno",
"id": 38130299,
"node_id": "MDQ6VXNlcjM4MTMwMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/38130299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulthemagno",
"html_url": "https://github.com/paulthemagno",
"followers_url": "https://api.github.com/users/paulthemagno/followers",
"following_url": "https://api.github.com/users/paulthemagno/following{/other_user}",
"gists_url": "https://api.github.com/users/paulthemagno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulthemagno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulthemagno/subscriptions",
"organizations_url": "https://api.github.com/users/paulthemagno/orgs",
"repos_url": "https://api.github.com/users/paulthemagno/repos",
"events_url": "https://api.github.com/users/paulthemagno/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulthemagno/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am having the same problem when finetuning my own language model with run_lm_finetuning.py on camembert Model. I guess this might be related to the fact that it is reading a big file at once (103M, ~500k lines).\r\n Since the code reads whole data at once, it requires so much memory to handle huge corpus.\r\n\r\nThis pull request [2339](https://github.com/huggingface/transformers/pull/2339) is suggesting:\r\n- read corpus by each lines\r\n- flatten 2-dimension array by itertools.chain, it requies less memory and fast\r\nYou can find the code [here](https://github.com/huggingface/transformers/pull/2339/commits/537a1de53d824b5851bce32cb5eafaef3f9ce5ef#diff-713f433a085810c3d63a417486e56a88)\r\n\r\n",
"> I am having the same problem when finetuning my own language model with run_lm_finetuning.py on camembert Model. I guess this might be related to the fact that it is reading a big file at once (103M, ~500k lines).\r\n> Since the code reads whole data at once, it requires so much memory to handle huge corpus.\r\n> \r\n> This pull request [2339](https://github.com/huggingface/transformers/pull/2339) is suggesting:\r\n> \r\n> * read corpus by each lines\r\n> * flatten 2-dimension array by itertools.chain, it requies less memory and fast\r\n> You can find the code [here](https://github.com/huggingface/transformers/pull/2339/commits/537a1de53d824b5851bce32cb5eafaef3f9ce5ef#diff-713f433a085810c3d63a417486e56a88)\r\n\r\n@HendZouari this is interesting.\r\nI thought the same thing, but I'm not sure the reason is the size of data, since I verified that with multlingual BERT, it works. \r\n",
"Can you guys try out the recently-merged-to-master `LineByLineTextDataset`, defined at\r\nhttps://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L124 ?\r\n\r\nYou can use it by adding the `--line_by_line` flag to your command. (btw: tokenization will soon be _way_ faster everywhere in the library as we are rolling out `tokenizers` integration everywhere)",
"For large datasets that don't fit in memory, I use a lazy dataset, modified from [this Github post](https://github.com/pytorch/text/issues/130#issuecomment-333306652).\r\n\r\n```python\r\nclass LazyTextDataset(Dataset):\r\n def __init__(self, fin):\r\n # get absolute path\r\n # convert to str, linecache doesn't accept Path objects\r\n self.fin = str(Path(fin).resolve())\r\n self.num_entries = self._get_n_lines(self.fin)\r\n\r\n @staticmethod\r\n def _get_n_lines(fin):\r\n with open(fin, encoding='utf-8') as fhin:\r\n for line_idx, _ in enumerate(fhin, 1):\r\n pass\r\n\r\n return line_idx\r\n\r\n def __getitem__(self, idx):\r\n # linecache starts counting from one, not zero, +1 the given index\r\n return linecache.getline(self.fin, idx+1)\r\n\r\n def __len__(self):\r\n return self.num_entries\r\n```\r\n\r\nWith a bit of work you can modify this to return the tokenized strings. I would advise you to write a custom collate_fn for the dataloader, which can be parallellized by using the `n_workers` argument.\r\n\r\nSomething like this (untested)\r\n\r\n```python\r\n from torch.utils.data.dataloader import default_collate\r\n\r\n def collate(data):\r\n data = default_collate(data)\r\n return tokenizer.encode(data)\r\n```\r\n\r\nThat being said, it might be easier to just use wait a bit until `tokenizers` is implemented everywhere, as @julien-c mentions.",
"@julien-c it's woking now! The flag `--line_by_line` was fundamental for me π€©.\r\nThanks also to @HendZouari and @BramVanroy: I guess If I had followed your advices, the program would have worked well in the same way :)",
"I am having this issue while using --line_by_line flag\r\n```\r\nTraceback (most recent call last):\r\n File \"run_lm_finetuning.py\", line 785, in <module>\r\n main()\r\n File \"run_lm_finetuning.py\", line 730, in main\r\n train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)\r\n File \"run_lm_finetuning.py\", line 147, in load_and_cache_examples\r\n return LineByLineTextDataset(tokenizer, args, file_path=file_path, block_size=args.block_size)\r\n File \"run_lm_finetuning.py\", line 135, in __init__\r\n self.examples = tokenizer.batch_encode_plus(lines, max_length=block_size)[\"input_ids\"]\r\nAttributeError: 'BertTokenizer' object has no attribute 'batch_encode_plus'\r\n```\r\n\r\nWithout this flag I run into another loss function related error for which I just opened an issue.",
"`AttributeError: 'BertTokenizer' object has no attribute 'batch_encode_plus'`\r\n\r\nYou need to update `transformers`. `batch_encode_plus` was only introduced recently.",
"Yes. Upgraded and now working. Thank you.",
"@paulthemagno Can you close this? Thanks ",
"I have reopened the issue for a stange RuntimeError.\r\nIn the middle of the training (after several hours in which it was working with no problem), it crashes with this log:\r\n```\r\nFile \"finetuning.py\", line 801, in <module>51:22, 4.94it/s]\r\n main()\r\n File \"finetuning.py\", line 750, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"finetuning.py\", line 342, in train\r\n inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)\r\n File \"finetuning.py\", line 222, in mask_tokens\r\n inputs[indices_random] = random_words[indices_random]\r\nRuntimeError: expected dtype Float but got dtype Long\r\nEpoch: 55%|ββββββ | 6/11 [20:26:33<17:02:07, 12265.60s/it]\r\nIteration: 69%|βββββββ | 33378/48603 [1:47:45<49:09, 5.16it/s]\r\n```\r\n\r\nIt fails in the _mask_tokens()_ function:\r\n```python\r\n# 10% of the time, we replace masked input tokens with random word\r\n indices_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~indices_replaced\r\n random_words = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)\r\n inputs[indices_random] = random_words[indices_random]\r\n```\r\n\r\nShould I set `dtype=torch.float`? Why does It work fine for so much time and it suddenly gives this error at the 6th epoch on 11? ",
"I have the same issue, but to me it fails immediately.\r\n```\r\n File \"run_lm_finetuning.py\", line 340, in train\r\n inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)\r\n File \"run_lm_finetuning.py\", line 218, in mask_tokens\r\n inputs[indices_random] = random_words[indices_random]\r\nRuntimeError: expected dtype Float but got dtype Long\r\n```\r\nIt is triggered by using `--line_by_line`.",
"I have the same issue as @paulthemagno 's. Mine fails while fine-tuning a basic uncased bert model using the new --line_by_line flag about a quarter way through the epoch. \r\n\r\n```Traceback (most recent call last):ββββββββββββββββββββ | 2184/8205 [06:29<18:54, 5.31it/s]\r\n\r\n File \"run_lm_finetuning.py\", line 692, in <module>\r\n main()\r\n File \"run_lm_finetuning.py\", line 641, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_lm_finetuning.py\", line 320, in train\r\n inputs, labels = mask_tokens(batch, tokenizer, args) if args.mlm else (batch, batch)\r\n File \"run_lm_finetuning.py\", line 193, in mask_tokens\r\n inputs[indices_random] = random_words[indices_random]\r\nRuntimeError: expected dtype Float but got dtype Long\r\nEpoch: 0%| | 0/1 [06:29<?, ?it/s]\r\nIteration: 27%|βββββββββββββββββββββββββββββββββββββ \r\n\r\n2184/8205 [06:29<17:54, 5.61it/s]```",
"I tried to set `dtype=torch.float` but it fails immediately after the launch.\r\nThe only way I found is to restart from the last saved checkpoint rather than from the original language model. If someone knew how to fix it, I'd appreciate it.",
"Can you guys open a new issue for this? (w/ PyTorch version + ideally a small reproduction case)",
"> Can you guys open a new issue for this? (w/ PyTorch version + ideally a small reproduction case)\r\n\r\nYes, done now #2728"
] | 1,579 | 1,580 | 1,580 | NONE | null | ## β Questions & Help
I have a problem about a finetuning of my own language model ([model](https://mxmdownloads.s3.amazonaws.com/umberto/umberto-commoncrawl-cased-v1.tar.gz) and [sentencepiece](https://mxmdownloads.s3.amazonaws.com/umberto/umberto-commoncrawl-cased-v1-sentencepiece.bpe.model)). I'm trying to use [run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py).
When it tokenizes the text of the training set, the program enters a loop without giving feedback.
It blocks at [Line 105](https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py#L105) in the _tokenize_ function.
```python
...
logger.info("Creating features from dataset file at %s", directory)
self.examples = []
with open(file_path, encoding="utf-8") as f:
text = f.read()
tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
for i in range(0, len(tokenized_text) - block_size + 1, block_size): # Truncate in block of block_size
self.examples.append(tokenizer.build_inputs_with_special_tokens(tokenized_text[i : i + block_size]))
# Note that we are loosing the last truncated example here for the sake of simplicity (no padding)
# If your dataset is small, first you should loook for a bigger one :-) and second you
# can change this behavior by adding (model specific) padding.
...
```
I have tried to wait for more than 1 hour, but nothing. Using BERT instead of my model, the finetuning starts (even if after several minutes with no log meanwhile).
I launched this:
```bash
python3 run_lm_finetuning.py \
--train_data_file /path/to/train.txt \
--eval_data_file /path/to/eval.txt \
--output_dir /path/to/output \
--mlm \
--do_train \
--do_eval \
--model_type roberta \
--model_name_or_path /path/to/my/model \
--per_gpu_train_batch_size 8 \
--per_gpu_eval_batch_size 8 \
--overwrite_output_dir \
--overwrite_cache \
--max_steps 500000 \
--block_size 128\
--save_steps 50000 \
--eval_all_checkpoints
```
My impression is that something is wrong with the tokenization on the model, any ideas? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2611/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2611/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2610 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2610/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2610/comments | https://api.github.com/repos/huggingface/transformers/issues/2610/events | https://github.com/huggingface/transformers/issues/2610 | 553,573,355 | MDU6SXNzdWU1NTM1NzMzNTU= | 2,610 | run_ner.py huge discrepancy between eval and predict (or "dev" and "test" evaluation modes) | {
"login": "MatejUlcar",
"id": 26550612,
"node_id": "MDQ6VXNlcjI2NTUwNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/26550612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatejUlcar",
"html_url": "https://github.com/MatejUlcar",
"followers_url": "https://api.github.com/users/MatejUlcar/followers",
"following_url": "https://api.github.com/users/MatejUlcar/following{/other_user}",
"gists_url": "https://api.github.com/users/MatejUlcar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatejUlcar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatejUlcar/subscriptions",
"organizations_url": "https://api.github.com/users/MatejUlcar/orgs",
"repos_url": "https://api.github.com/users/MatejUlcar/repos",
"events_url": "https://api.github.com/users/MatejUlcar/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatejUlcar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
I'm comparing two different pretrained Bert models on the NER task. One is a bert-base-multilingual-cased model, which works fine, consistently, as one would expect. The other is our own pretrained multilingual Bert, which is trained on fewer languages and has so far shown better results on those few languages. However, when running run_ner.py from here, it doesn't evaluate consistently. The `dev.txt` dataset is completely identical to the `test.txt`dataset, so I'd expect identical results, but I get
```
--do_eval
f1 = 0.7603833865814698
loss = 0.06039658671007991
precision = 0.7531645569620253
recall = 0.7677419354838709
```
and
```
--do_predict
f1 = 0.025925925925925925
loss = 0.41404916612165316
precision = 0.030434782608695653
recall = 0.02258064516129032
```
I also tried `--evaluate_during_training` flag and I get "solid" results already from very few steps and identical to `--do_eval` at the end of training, but `--do_predict` are always much worse. Surprisingly, though, this doesn't occur with bert-base-multilingual-cased model even if I save it to disk and point to that folder. Additionally if under the clause if args.predict I change the mode from "test" to "dev" in function evaluate() I get good results. I repeat, dev and test are identical files that differ only in name.
The problem occurs on any max sequence length, after deleting cache, etc. No errors are displayed during training or evaluation/prediction.
Please help if you have any ideas what might be going wrong. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2610/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/2610/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2609 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2609/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2609/comments | https://api.github.com/repos/huggingface/transformers/issues/2609/events | https://github.com/huggingface/transformers/issues/2609 | 553,449,716 | MDU6SXNzdWU1NTM0NDk3MTY= | 2,609 | Bad Results with Albert | {
"login": "chikubee",
"id": 25073753,
"node_id": "MDQ6VXNlcjI1MDczNzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/25073753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chikubee",
"html_url": "https://github.com/chikubee",
"followers_url": "https://api.github.com/users/chikubee/followers",
"following_url": "https://api.github.com/users/chikubee/following{/other_user}",
"gists_url": "https://api.github.com/users/chikubee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chikubee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chikubee/subscriptions",
"organizations_url": "https://api.github.com/users/chikubee/orgs",
"repos_url": "https://api.github.com/users/chikubee/repos",
"events_url": "https://api.github.com/users/chikubee/events{/privacy}",
"received_events_url": "https://api.github.com/users/chikubee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi. Many people are reporting unstable results or just unexpected results. You can search for issues in this library, and even in other ones (e.g. https://github.com/deepset-ai/FARM/issues/202#issuecomment-577077201). It seems that ALBERT is very sensitive to hyperparameters and even then... For now there seems to be no solution. It is probably best to stick to another model. I'd recommend RoBERTa but it depends on your use-case.",
"@BramVanroy I tried with roberta-base as well, the token level similarity is coming out very bad. \r\nSmoking is getting matched with software.",
"Can you share a repo to your full code?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
Trying to understand why is the cosine similarity between tokens with Albert way bad in comparison to DistilBert.
Any inferences on the same would be helpful.
Thanks in advance.
Embeddings constructed for a token by summing the last 4 encoded layers.
Distance metric: cosine
Results with DistilBert
<img width="754" alt="Screenshot 2020-01-22 at 4 05 06 PM" src="https://user-images.githubusercontent.com/25073753/72887172-2e6feb80-3d31-11ea-876b-0ba8eac22234.png">
<img width="613" alt="Screenshot 2020-01-22 at 4 12 27 PM" src="https://user-images.githubusercontent.com/25073753/72887650-25334e80-3d32-11ea-8255-26109f604c84.png">
Results with Albert
<img width="735" alt="Screenshot 2020-01-22 at 4 04 56 PM" src="https://user-images.githubusercontent.com/25073753/72887227-447dac00-3d31-11ea-8797-9873c8439879.png">
<img width="499" alt="Screenshot 2020-01-22 at 4 10 16 PM" src="https://user-images.githubusercontent.com/25073753/72887665-2bc1c600-3d32-11ea-95aa-83b3192e9d49.png">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2609/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2609/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2608 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2608/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2608/comments | https://api.github.com/repos/huggingface/transformers/issues/2608/events | https://github.com/huggingface/transformers/issues/2608 | 553,445,324 | MDU6SXNzdWU1NTM0NDUzMjQ= | 2,608 | Bug in the command line tool: os.DirEntry not supported in Python 3.5 | {
"login": "netw0rkf10w",
"id": 8569515,
"node_id": "MDQ6VXNlcjg1Njk1MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8569515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/netw0rkf10w",
"html_url": "https://github.com/netw0rkf10w",
"followers_url": "https://api.github.com/users/netw0rkf10w/followers",
"following_url": "https://api.github.com/users/netw0rkf10w/following{/other_user}",
"gists_url": "https://api.github.com/users/netw0rkf10w/gists{/gist_id}",
"starred_url": "https://api.github.com/users/netw0rkf10w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/netw0rkf10w/subscriptions",
"organizations_url": "https://api.github.com/users/netw0rkf10w/orgs",
"repos_url": "https://api.github.com/users/netw0rkf10w/repos",
"events_url": "https://api.github.com/users/netw0rkf10w/events{/privacy}",
"received_events_url": "https://api.github.com/users/netw0rkf10w/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1802861720,
"node_id": "MDU6TGFiZWwxODAyODYxNzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20CLI",
"name": "Core: CLI",
"color": "FF6426",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Though the core of the library is Python 3.5+, the CLI is actually Python3.6+ as for instance the serving subcommand uses FastAPI which is Py36+ only (cc @mfuntowicz)\r\n\r\nDo we have a way to make that clear in the doc @LysandreJik?",
"Personal opinion: supporting 3.6+ only seems realistic and may make maintenance easier. [AllenNLP](https://github.com/allenai/allennlp) also requires 3.6.1+. Of course I don't know how large the 3.5 user-base is for `transformers` so it might be worth maintaining. That being said, if there are plans to stop support for 3.5, it might be a good idea to announce this in a release (\"last supported release for 3.5\"). ",
"@BramVanroy According to the PyPI stats at https://pypistats.org/packages/transformers (look for the `Daily Download Proportions of transformers package - Python Minor` graph) around 1% of _pip installs_ are on Python 3.5.",
"> @BramVanroy According to the PyPI stats at https://pypistats.org/packages/transformers (look for the `Daily Download Proportions of transformers package - Python Minor` graph) around 1% of _pip installs_ are on Python 3.5.\r\n\r\nAh, I didn't know this website - thanks! I'm not sure if 1% is worth the effort, then again I don't know how much additional effort (and resources for CI) are needed to maintain for 3.5 anyway. (But 3.6 has f-strings and ordered dicts (officially in 3.7), PathLike, better `typing`, sooo... :D)\r\n\r\nPS: I wonder what happened on December 24 or thereabouts, with the spike in 3.5 installations. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Closing this as the lib is now officially Python 3.6+"
] | 1,579 | 1,585 | 1,585 | NONE | null | Hi,
[This line](https://github.com/huggingface/transformers/blob/1a8e87be4e2a1b551175bd6f0f749f3d2289010f/src/transformers/commands/user.py#L162) will cause a syntax error in Python 3.5 as `os.DirEntry` does not exist.
You should either update the code for backward compatibility or update the README replacing 3.5+ by 3.6 (I believe the former would be preferred).
Best regards.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2608/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2608/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2607 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2607/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2607/comments | https://api.github.com/repos/huggingface/transformers/issues/2607/events | https://github.com/huggingface/transformers/pull/2607 | 553,339,959 | MDExOlB1bGxSZXF1ZXN0MzY1Njg5NjA5 | 2,607 | Fix inconsistency between T5WithLMHeadModel's doc and it's behavior | {
"login": "nalourie-ai2",
"id": 23320271,
"node_id": "MDQ6VXNlcjIzMzIwMjcx",
"avatar_url": "https://avatars.githubusercontent.com/u/23320271?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nalourie-ai2",
"html_url": "https://github.com/nalourie-ai2",
"followers_url": "https://api.github.com/users/nalourie-ai2/followers",
"following_url": "https://api.github.com/users/nalourie-ai2/following{/other_user}",
"gists_url": "https://api.github.com/users/nalourie-ai2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nalourie-ai2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nalourie-ai2/subscriptions",
"organizations_url": "https://api.github.com/users/nalourie-ai2/orgs",
"repos_url": "https://api.github.com/users/nalourie-ai2/repos",
"events_url": "https://api.github.com/users/nalourie-ai2/events{/privacy}",
"received_events_url": "https://api.github.com/users/nalourie-ai2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2607?src=pr&el=h1) Report\n> Merging [#2607](https://codecov.io/gh/huggingface/transformers/pull/2607?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a8e87be4e2a1b551175bd6f0f749f3d2289010f?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2607?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2607 +/- ##\n=======================================\n Coverage 74.53% 74.53% \n=======================================\n Files 87 87 \n Lines 14819 14819 \n=======================================\n Hits 11046 11046 \n Misses 3773 3773\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2607?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2607/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `81.09% <ΓΈ> (ΓΈ)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2607?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2607?src=pr&el=footer). Last update [1a8e87b...980d1f8](https://codecov.io/gh/huggingface/transformers/pull/2607?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi! Indeed, we changed all the Cross Entropy ignored indices to be -100 to respect the PyTorch default. The docstrings would need to be changed, instead of the index. We could even remove the argument:\r\n\r\n```py\r\nloss_fct = CrossEntropyLoss()\r\n```\r\n\r\nto keep in sync with pytorch.",
"@LysandreJik, that makes perfect sense. Thanks for the background!\r\n\r\nI've updated this PR to change the doc string instead. \r\n\r\nRemoving the `ignore_index` argument would mean that the behavior of the method depends on which PyTorch version a user has installed, which could be counter-intuitive. Similarly, to ensure the doc string is accurate we'd have to add a unit test for it and run against all versions of PyTorch that a user might reasonably install.\r\n\r\nFor simplicity / consistency, I'd suggest continuing to explicitly pass the argument.\r\n\r\nI'm happy to add a commit removing it though, if you feel otherwise.",
"Thanks for updating, we can keep the `ignore_index` argument.\r\n\r\nThanks @nalourie-ai2 !"
] | 1,579 | 1,579 | 1,579 | CONTRIBUTOR | null | The doc string for `T5WithLMHeadModel` is currently inconsistent with it's behavior.
The doc string says that the forward method ignores indices of -1 when computing the loss; however, the method instead ignores indices of -100. This pull request changes the method to ignore indices of -1, making the two consistent.
It's worth noting, there is a [commit](https://github.com/huggingface/transformers/commit/1b59b57b57010e6119282f3dbf37f8c7c6d6313e#diff-7370db3a19209bf984cc40925aaf2b71) by @thomwolf that changed this value from -1 to -100, though I couldn't find why the change was made. Thomas, perhaps you remember if the change is still important? If it is, I can instead update this PR to change the doc string. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2607/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2607/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2607",
"html_url": "https://github.com/huggingface/transformers/pull/2607",
"diff_url": "https://github.com/huggingface/transformers/pull/2607.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2607.patch",
"merged_at": 1579879701000
} |
https://api.github.com/repos/huggingface/transformers/issues/2606 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2606/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2606/comments | https://api.github.com/repos/huggingface/transformers/issues/2606/events | https://github.com/huggingface/transformers/issues/2606 | 553,168,121 | MDU6SXNzdWU1NTMxNjgxMjE= | 2,606 | Upload CLI: on Windows, uniformize paths/urls separators | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1802861720,
"node_id": "MDU6TGFiZWwxODAyODYxNzIw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20CLI",
"name": "Core: CLI",
"color": "FF6426",
"default": false,
"description": ""
}
] | closed | false | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This was completed but forgot to close the issue."
] | 1,579 | 1,584 | 1,584 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2606/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2606/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/2605 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2605/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2605/comments | https://api.github.com/repos/huggingface/transformers/issues/2605/events | https://github.com/huggingface/transformers/issues/2605 | 553,154,537 | MDU6SXNzdWU1NTMxNTQ1Mzc= | 2,605 | glue.py when using mrpc and similar data does not work | {
"login": "nargesam",
"id": 24642904,
"node_id": "MDQ6VXNlcjI0NjQyOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/24642904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nargesam",
"html_url": "https://github.com/nargesam",
"followers_url": "https://api.github.com/users/nargesam/followers",
"following_url": "https://api.github.com/users/nargesam/following{/other_user}",
"gists_url": "https://api.github.com/users/nargesam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nargesam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nargesam/subscriptions",
"organizations_url": "https://api.github.com/users/nargesam/orgs",
"repos_url": "https://api.github.com/users/nargesam/repos",
"events_url": "https://api.github.com/users/nargesam/events{/privacy}",
"received_events_url": "https://api.github.com/users/nargesam/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hello! Indeed, you can't access the dictionary by using `example.label`. Are you passing your `tf.data.Dataset` as the `examples` argument to the `glue_convert_examples_to_features`?",
"yes, train_data is the examples. and the spec is: \r\n\r\n{'idx': TensorSpec(shape=(), dtype=tf.string, name=None),\r\n'sentence1': TensorSpec(shape=(), dtype=tf.string, name=None),\r\n'sentence2': TensorSpec(shape=(), dtype=tf.string, name=None),\r\n'label': TensorSpec(shape=(), dtype=tf.int32, name=None)}\r\n\r\nshouldn't it look like this? \r\n",
"I think the code needs to change to example[\"label\"]. ",
"Here's the format of my data that is passed to glue_convert_examples_to_features():\r\n\r\n{'idx': <tf.Tensor: shape=(), dtype=string, numpy=b'TEXT'>, 'sentence1': <tf.Tensor: shape=(), dtype=string, numpy=b\"TEXT TEXT TEXT\">, 'sentence2': <tf.Tensor: shape=(), dtype=string, numpy=b'text'>, 'label': <tf.Tensor: shape=(), dtype=int32, numpy=1>}\r\n",
"Do you mind letting me know on which version of transformers you're running your code?\r\n\r\nFor a couple of versions now we handle `tf.data.Dataset` using our `get_example_from_tensor_dict` method, as can be seen [here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L85). This transforms your dictionary into an `InputExample`. \r\n\r\nIt should have been able to convert your dataset as well.",
"yes, I use the get_example_from_tensor_dict(). the problem is, even though I pass my label_list=[numpy.int64(1), numpy.int64(0)], and my data type is int32: 'label': <tf.Tensor: shape=(), dtype=int32, numpy=1>, \r\n\r\nwhen glue.py tries to run label = label_map[example.label], the type of example.label is <class: str>! which should not be, and should be int! \r\n\r\nWhat I did, was to clone your repo on my local device, changed that line to label = label_map[int(example.label)], and it works fine now! \r\n\r\nI am guessing when you create the label_map dictionary, the keys are str, but needs to be int. \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): ENGLISH
The problem arise when using:
* [ ] the official example scripts: (give details):
I am using my dataset with format [idx, sentence1, sentence2, label] in form of dict of tf.data.Dataset and using glue_convert_examples_to_features() to convert my dataset to features
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
I am using MRPC task
## To Reproduce
Steps to reproduce the behavior:
0- Using BertTokenizer.from_pretrained('bert-base-cased') to create tokenizer
1-read csv file as dataframe
2-convert dataframe to dict of tensors using tf.data.Dataset.from_tensor_slices(dict(train))
the element_spec of my data is :
{'idx': TensorSpec(shape=(), dtype=tf.string, name=None),
'sentence1': TensorSpec(shape=(), dtype=tf.string, name=None),
'sentence2': TensorSpec(shape=(), dtype=tf.string, name=None),
'label': TensorSpec(shape=(), dtype=tf.int32, name=None)}
3- when using glue_convert_examples_to_features(train_data, tokenizer, label_list=[1,0], max_length=128 , task='mrpc'), I will have an error in glue.py in glue_convert_examples_to_features : label = label_map[example.label]
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS:
* Python version: 3.7
* PyTorch version: TF 2.1.0
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
I think you can't access the dictionary by example.label. When I copy paste it to my code, it will actually work with example["label"].
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2605/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2605/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2604 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2604/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2604/comments | https://api.github.com/repos/huggingface/transformers/issues/2604/events | https://github.com/huggingface/transformers/issues/2604 | 553,073,485 | MDU6SXNzdWU1NTMwNzM0ODU= | 2,604 | Can not upload BertTokenizer.from_pretrained() from an AWS S3 bucket | {
"login": "bnicholl",
"id": 26211830,
"node_id": "MDQ6VXNlcjI2MjExODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26211830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bnicholl",
"html_url": "https://github.com/bnicholl",
"followers_url": "https://api.github.com/users/bnicholl/followers",
"following_url": "https://api.github.com/users/bnicholl/following{/other_user}",
"gists_url": "https://api.github.com/users/bnicholl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bnicholl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bnicholl/subscriptions",
"organizations_url": "https://api.github.com/users/bnicholl/orgs",
"repos_url": "https://api.github.com/users/bnicholl/repos",
"events_url": "https://api.github.com/users/bnicholl/events{/privacy}",
"received_events_url": "https://api.github.com/users/bnicholl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi Ben,\r\n\r\nUnless I misunderstand what you're trying to do, this is not really what `save_pretrained()` and `from_pretrained()` are made for.\r\n\r\n`save_pretrained()` lets you save a tokenizer or a model _locally_, inside a local folder. (you can then upload those files to your own s3 bucket, or use the `transformers-cli` to upload to our bucket). \r\n\r\n`from_pretrained()` lets you re-spawn a model or tokenizer from either a local folder, a model shortcut (hardcoded in the library's code), or a community model identifier (which resolves to files on our S3 bucket)\r\n\r\nLet me know if things are clearer\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## π Migration
the model I am using is BertForSequenceClassification
The problem arises when I serialize my Bert model, and then upload to an AWS S3 bucket. Once my model is inside of S3, I can not import the model via BertTokenizer.from_pretrained()
For example, in order to save my model to S3, my code reads,
```
byte_obj = pickle.dumps(model)
s3_resource = boto3.resource('s3')
s3_resource.Object("s3 name", "bertClassifier").put(Body=byte_obj)
```
This saves the BertForSequenceClassification model. I can not use the `model.save_pretrained("s3 name")`, as I get an error from AWS. I beleive in order to transfer files in AWS, one needs to first pickle the file.
When I want to re upload the model, I can not use
`BertTokenizer.from_pretrained("s3 name")`
or
`BertForSequenceClassification.from_pretrained("s3 name")`
because the object I am trying to load is serialized via pickle module. Instead I upload the file this way.
```
session = boto3.session.Session()
s3client = session.client('s3')
response = s3client.get_object(Bucket='s3 name', Key='bertClassifier')
body_string = response['Body'].read()
bert_nn = pickle.loads(body_string)
```
This succesfully loads the BertForSequenceClassification model, but I have no way of loading BertTokenizer from this same pre trained model. Again, becuase I am not able to upload via the BertTokenizer.from_pretrained("s3 name") function.
Is there a work around for this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2604/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2604/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2603 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2603/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2603/comments | https://api.github.com/repos/huggingface/transformers/issues/2603/events | https://github.com/huggingface/transformers/issues/2603 | 552,989,439 | MDU6SXNzdWU1NTI5ODk0Mzk= | 2,603 | XLNet: Incorrect segment id for CLS token | {
"login": "ChristophAlt",
"id": 6420705,
"node_id": "MDQ6VXNlcjY0MjA3MDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6420705?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChristophAlt",
"html_url": "https://github.com/ChristophAlt",
"followers_url": "https://api.github.com/users/ChristophAlt/followers",
"following_url": "https://api.github.com/users/ChristophAlt/following{/other_user}",
"gists_url": "https://api.github.com/users/ChristophAlt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChristophAlt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChristophAlt/subscriptions",
"organizations_url": "https://api.github.com/users/ChristophAlt/orgs",
"repos_url": "https://api.github.com/users/ChristophAlt/repos",
"events_url": "https://api.github.com/users/ChristophAlt/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChristophAlt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, this is an error ! Thanks for letting us know, it was patched with 088fa7b!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | Thanks for the great work!
It seems that the XLNetTokenizer assigns an incorrect segment id to the CLS token when a single sequence of token ids is provided. If token_ids_1 is None, all segment ids are '0', including the segment id of the CLS token. In my understanding, the segment ids should always differ.
```python
if token_ids_1 is None:
return len(token_ids_0 + sep + cls) * [0]
return len(token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1] + cls_segment_id
```
https://github.com/huggingface/transformers/blob/983c484fa2fcad307d37cb81f3e1125aa7b9dc37/src/transformers/tokenization_xlnet.py#L243
The original implementation assigns a different segment id to the CLS token in both cases (single sequence of tokens and pair of sequences): https://github.com/zihangdai/xlnet/blob/bbaa3a6fa0b3a2ee694e8cf66167434f9eca9660/classifier_utils.py#L109 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2603/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2603/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2602 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2602/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2602/comments | https://api.github.com/repos/huggingface/transformers/issues/2602/events | https://github.com/huggingface/transformers/pull/2602 | 552,957,982 | MDExOlB1bGxSZXF1ZXN0MzY1MzcxNTg5 | 2,602 | Edit a way to get `projected_context_layer` | {
"login": "jinkilee",
"id": 6321520,
"node_id": "MDQ6VXNlcjYzMjE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6321520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinkilee",
"html_url": "https://github.com/jinkilee",
"followers_url": "https://api.github.com/users/jinkilee/followers",
"following_url": "https://api.github.com/users/jinkilee/following{/other_user}",
"gists_url": "https://api.github.com/users/jinkilee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinkilee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinkilee/subscriptions",
"organizations_url": "https://api.github.com/users/jinkilee/orgs",
"repos_url": "https://api.github.com/users/jinkilee/repos",
"events_url": "https://api.github.com/users/jinkilee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinkilee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, please check the [contribution guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests) for the code quality tests to pass.\r\n\r\nWhy did you edit this, does this solve a bug or add new functionality?",
"Hi i edited this because i read a comment like..\r\nβ # Should find a better way to do this β\r\nNot because of bug or new features",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2602?src=pr&el=h1) Report\n> Merging [#2602](https://codecov.io/gh/huggingface/transformers/pull/2602?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/23c6998bf46e43092fc59543ea7795074a720f08?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2602?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2602 +/- ##\n==========================================\n+ Coverage 74.61% 74.61% +<.01% \n==========================================\n Files 87 87 \n Lines 14802 14804 +2 \n==========================================\n+ Hits 11044 11046 +2 \n Misses 3758 3758\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2602?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2602/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `79.03% <100%> (+0.11%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2602?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2602?src=pr&el=footer). Last update [23c6998...a96e39e](https://codecov.io/gh/huggingface/transformers/pull/2602?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,579 | 1,580 | 1,580 | NONE | null | I edited a way to get `projected_context_layer`.
Instead of doing
'''
w = (
self.dense.weight.t()
.view(self.num_attention_heads, self.attention_head_size, self.hidden_size)
.to(context_layer.dtype)
)
b = self.dense.bias.to(context_layer.dtype)
projected_context_layer = torch.einsum("bfnd,ndh->bfh", context_layer, w) + b
'''
I added `self.merge_last_ndims` at `AlbertAttention`.
'''
def merge_last_ndims(self, x, n_dims):
s = x.size()
assert n_dims > 1 and n_dims < len(s)
return x.view(*s[:-n_dims], -1)
'''
I commited this one yesterday, but that one didn't pass the tests, so I re-wrote my code and re-committing now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2602/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2602/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2602",
"html_url": "https://github.com/huggingface/transformers/pull/2602",
"diff_url": "https://github.com/huggingface/transformers/pull/2602.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2602.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2601 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2601/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2601/comments | https://api.github.com/repos/huggingface/transformers/issues/2601/events | https://github.com/huggingface/transformers/issues/2601 | 552,946,884 | MDU6SXNzdWU1NTI5NDY4ODQ= | 2,601 | unexpected keyword argument 'encoder_hidden_states' when using PreTrainedEncoderDecoder | {
"login": "mmsamiei",
"id": 12582703,
"node_id": "MDQ6VXNlcjEyNTgyNzAz",
"avatar_url": "https://avatars.githubusercontent.com/u/12582703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmsamiei",
"html_url": "https://github.com/mmsamiei",
"followers_url": "https://api.github.com/users/mmsamiei/followers",
"following_url": "https://api.github.com/users/mmsamiei/following{/other_user}",
"gists_url": "https://api.github.com/users/mmsamiei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmsamiei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmsamiei/subscriptions",
"organizations_url": "https://api.github.com/users/mmsamiei/orgs",
"repos_url": "https://api.github.com/users/mmsamiei/repos",
"events_url": "https://api.github.com/users/mmsamiei/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmsamiei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I am facing the same issue. Probably `PreTrainedEncoderDecoder` supports only `Bert` to `Bert` models.",
"I'm getting this error too\r\n```\r\nConverting vae...\r\nConverting text encoder...\r\nDownloading100% 1.71G/1.71G [00:24<00:00, 69.1MB/s]\r\nDownloading100% 4.55k/4.55k [00:00<00:00, 746kB/s]\r\nDownloading100% 1.22G/1.22G [00:18<00:00, 66.3MB/s]\r\nDownloading100% 342/342 [00:00<00:00, 57.0kB/s]\r\nSaving diffusion model...\r\nRestored system models.\r\nCheckpoint successfully extracted to /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/dreambooth/TTFM/working\r\nReturning ['default', True, True, 1, '', '', 0, 0, 1, True, True, 50, False, False, 5e-06, 1e-06, 0.0001, '', 5e-05, 1.0, 1.0, 1, 0.5, 1.0, 0.5, 'constant_with_warmup', 0, 75, 'fp16', 100, True, '', 1.0, 512, 1, '', 420420, True, False, True, 25, True, False, True, 5, False, False, False, False, 1, False, 1.0, True, False, True, False, False, '', 7.5, 60, '', '', '', '', '', '', 1, 0, 0, -1, 7.5, 60, '', '', '', '', 7.5, 60, '', '', '', '', '', '', 1, 0, 0, -1, 7.5, 60, '', '', '', '', 7.5, 60, '', '', '', '', '', '', 1, 0, 0, -1, 7.5, 60, '', '', '', 'Loaded config.']\r\nSaved settings.\r\nCustom model name is TTFM\r\nStarting Dreambooth training...\r\nInitializing dreambooth training...\r\nReplace CrossAttention.forward to use default\r\nInstance Bucket 0: Resolution (512, 512), Count: 723\r\nTarget Bucket 0: Resolution (512, 512), Count: 0\r\nWe need a total of 0 images.\r\nNothing to generate.\r\nException importing 8bit adam: No module named 'bitsandbytes'\r\nWARNING:extensions.sd_dreambooth_extension.dreambooth.train_dreambooth:Exception importing 8bit adam: No module named 'bitsandbytes'\r\nTraceback (most recent call last):\r\n File \"/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py\", line 304, in inner_loop\r\n import bitsandbytes as bnb\r\nModuleNotFoundError: No module named 'bitsandbytes'\r\nPreparing dataset\r\nPreparing dataset\r\nPreparing Dataset (With Caching)\r\n100% 723/723 [00:46<00:00, 15.47it/s]\r\nTrain Bucket 1: Resolution (512, 512), Count: 723\r\nTotal images: 241\r\nTotal dataset length (steps): 241\r\nSched breakpoint is 108450\r\n ***** Running training *****\r\n Instance Images: 723\r\n Class Images: 0\r\n Total Examples: 723\r\n Num batches each epoch = 241\r\n Num Epochs = 300\r\n Batch Size Per Device = 3\r\n Gradient Accumulation steps = 3\r\n Total train batch size (w. parallel, distributed & accumulation) = 9\r\n Text Encoder Epochs: 210\r\n Total optimization steps = 216900\r\n Total training steps = 216900\r\n Resuming from checkpoint: False\r\n First resume epoch: 0\r\n First resume step: 0\r\n Lora: False, Adam: False, Prec: bf16\r\n Gradient Checkpointing: True\r\n EMA: True\r\n LR: 4.5e-05)\r\nSteps: 0% 0/216900 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/scripts/dreambooth.py\", line 561, in start_training\r\n result = main(config, use_txt2img=use_txt2img)\r\n File \"/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py\", line 973, in main\r\n return inner_loop()\r\n File \"/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py\", line 116, in decorator\r\n return function(batch_size, grad_size, prof, *args, **kwargs)\r\n File \"/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py\", line 829, in inner_loop\r\n noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/accelerate/utils/operations.py\", line 490, in __call__\r\n return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/amp/autocast_mode.py\", line 14, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_condition.py\", line 481, in forward\r\n sample, res_samples = downsample_block(\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_blocks.py\", line 781, in forward\r\n hidden_states = torch.utils.checkpoint.checkpoint(\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py\", line 249, in checkpoint\r\n return CheckpointFunction.apply(function, preserve, *args)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/utils/checkpoint.py\", line 107, in forward\r\n outputs = run_function(*args)\r\n File \"/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_blocks.py\", line 774, in custom_forward\r\n return module(*inputs, return_dict=return_dict)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/diffusers/models/transformer_2d.py\", line 265, in forward\r\n hidden_states = block(\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py\", line 285, in forward\r\n attn_output = self.attn1(\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward_default() got an unexpected keyword argument 'encoder_hidden_states'\r\nSteps: 0% 0/216900 [00:01<?, ?it/s]\r\nTraining completed, reloading SD Model.\r\nRestored system models.\r\nReturning result: Exception training model: 'forward_default() got an unexpected keyword argument 'encoder_hidden_states''.\r\n```"
] | 1,579 | 1,674 | 1,585 | NONE | null | ## β Questions & Help
After I have defined my seq2seq class using Encoder Decoder Architecture in the following way:
```
from transformers import PreTrainedEncoderDecoder
model = PreTrainedEncoderDecoder.from_pretrained('bert-base-uncased','gpt2')
```
I try to forward tensor through the model in this following way:
```
model(question_batch, answer_batch)
```
but I got this error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-16-bb34e9576e8b> in <module>()
----> 1 model(test_history, test_knowledge)
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
539 result = self._slow_forward(*input, **kwargs)
540 else:
--> 541 result = self.forward(*input, **kwargs)
542 for hook in self._forward_hooks.values():
543 hook_result = hook(self, input, result)
TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states'
```
can anyone help me?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2601/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2601/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2600 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2600/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2600/comments | https://api.github.com/repos/huggingface/transformers/issues/2600/events | https://github.com/huggingface/transformers/issues/2600 | 552,777,072 | MDU6SXNzdWU1NTI3NzcwNzI= | 2,600 | Trouble fine tuning multiple choice | {
"login": "KerenzaDoxolodeo",
"id": 7535438,
"node_id": "MDQ6VXNlcjc1MzU0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7535438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KerenzaDoxolodeo",
"html_url": "https://github.com/KerenzaDoxolodeo",
"followers_url": "https://api.github.com/users/KerenzaDoxolodeo/followers",
"following_url": "https://api.github.com/users/KerenzaDoxolodeo/following{/other_user}",
"gists_url": "https://api.github.com/users/KerenzaDoxolodeo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KerenzaDoxolodeo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KerenzaDoxolodeo/subscriptions",
"organizations_url": "https://api.github.com/users/KerenzaDoxolodeo/orgs",
"repos_url": "https://api.github.com/users/KerenzaDoxolodeo/repos",
"events_url": "https://api.github.com/users/KerenzaDoxolodeo/events{/privacy}",
"received_events_url": "https://api.github.com/users/KerenzaDoxolodeo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi! I believe this issue could stem from your label being negative as well. Could you check that it doesn't fail when computing the loss with a negative label?",
"I'm not sure what is a negative label in the context of multiple-choice, but here's what I did: it successfully computed loss when the label is 0 (which should be representing the first option?) but it fails for 1, 2, and -1.",
"@KerenzaDoxolodeo Hello,I am also trying to finetune RACE on bert model. I am wondering if you had fixed this problem.\r\nAlso can you post your fine tuning command with exact hyperparameters? Thanks.",
"@KerenzaDoxolodeo I ran into a similar problem. You are right in that the error is due to a shape mismatch. However you don't need to change the config file. Instead adapt your processor class by changing the context parameter of your InputExample from\r\n`contexts=[illogicalAnswer]` \r\nto\r\n`contexts=[illogicalAnswer, illogicalAnswer, illogicalAnswer]`\r\n\r\nIf you look at the original SwagProcessor, they copied the context several times such that both the context as well as the endings are lists of size num_labels.\r\n\r\nUnfortunately, the code does not raise an error if you ignore this requirement. If you look closely at what happens within the convert_examples_to_features() method in examples/utils_multiple_choice.py, you'll notice the line\r\n`enumerate(zip(example.contexts, example.endings))`\r\nThis is where everything breaks. Since you only provided a single context per example, this line will lead to also only taking into account a single ending from example.endings.\r\n\r\nYou don't need to adapt the config file as it will adapt to the number of labels automatically. If, for some reason, you still like to change the config, I think you should not manually overwrite the num_labels parameter as this is likely to introduce further errors (like in the config you showed above). Instead load the config from pretrained and provide the number of labels, e.g. like this\r\n`config = BertConfig.from_pretrained(\r\n 'bert-base-uncased',\r\n num_labels=5,\r\n)`\r\nThis will also change the mappings \"id2label\" as well as \"label2id\" appropriately (both are not set properly in your posted example).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,591 | 1,591 | NONE | null | ## β Questions & Help
Hi! I have issues with fine-tuning the multi-choice BERT and I am stuck on an error and I can use some help. When I tried to fine-tune it with my own dataset, it threw the Error ```RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /opt/conda/conda-bld/pytorch_1570711300255/work/aten/src/THNN/generic/ClassNLLCriterion.c:97````
According to what I have found, this normally happened due to the dimension mismatch between the label and the output layer. When I printed the model, it seems that the model does not have a suitable output layer.
```
` (dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=1, bias=True)
)`
```
I have made sure that the Bert Config has received the correct number of the label (`num_labels`)
```
{
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 3,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pruned_heads": {},
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30522
}
```
I am using the code from examples/run_multiple_choice.py and examples/utils_multiple_choice.py without touching its logic but I have modified it the data processor.
```
class CSProcessor(object):
def get_train_examples(self):
return self._create_examples(df_all[:8000] ,
df_answer[:8000], "train")
def get_test_examples(self):
"""See base class."""
return self._create_examples(df_sentence[8000:] ,
df_answer[8000:], "test")
def get_labels(self):
"""See base class."""
return ["0", "1", "2"]
def _create_examples(self, df_sentence, df_answer, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for I in range(len(df_sentence)):
race_id = "%s-%s" % (set_type, I)
truth = str(ord(df_answer[1][I]) - ord("A"))
illogicalAnswer = df_sentence["FalseSent"][I]
examples.append(
InputExample(
example_id=race_id,
question="Why it doesn't make senses?",
contexts=[illogicalAnswer],
endings=[df_sentence['OptionA'][I],
df_sentence['OptionB'][I],
df_sentence['OptionC'][I]],
label=truth,
)
)
return examples
```
I appreciate any help :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2600/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2600/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2599 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2599/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2599/comments | https://api.github.com/repos/huggingface/transformers/issues/2599/events | https://github.com/huggingface/transformers/issues/2599 | 552,685,004 | MDU6SXNzdWU1NTI2ODUwMDQ= | 2,599 | Xlnet, Alberta, Roberta are not finetuned for CoLA task | {
"login": "SarikGhazarian",
"id": 31287614,
"node_id": "MDQ6VXNlcjMxMjg3NjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/31287614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SarikGhazarian",
"html_url": "https://github.com/SarikGhazarian",
"followers_url": "https://api.github.com/users/SarikGhazarian/followers",
"following_url": "https://api.github.com/users/SarikGhazarian/following{/other_user}",
"gists_url": "https://api.github.com/users/SarikGhazarian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SarikGhazarian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SarikGhazarian/subscriptions",
"organizations_url": "https://api.github.com/users/SarikGhazarian/orgs",
"repos_url": "https://api.github.com/users/SarikGhazarian/repos",
"events_url": "https://api.github.com/users/SarikGhazarian/events{/privacy}",
"received_events_url": "https://api.github.com/users/SarikGhazarian/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"NM, I was able to solve it by changing some of the hyperparameter values.",
"Hi, glad you could make it work! Do you mind sharing what hyperparameter values you tuned in order to make it work?",
"> Hi, glad you could make it work! Do you mind sharing what hyperparameter values you tuned in order to make it work?\r\n\r\nHi, have you solved the problem? I used bert-base-cased, roberta-base, and xlnet-base-cased to predict CoLA test.tsv, and I got 51.8, 55.6 and 24.7 respectively, I don't know why xlnet got such low Matthew's Corr. Can you help me? thx "
] | 1,579 | 1,589 | 1,580 | NONE | null | ## π Bug
I am currently trying to finetune pretrained models on CoLA task by using run_glue.py. Some of the models such as Bert and DistilBert are finetuned correctly as it is expected (the training loss goes down and the evaluation result is as what has been reported). Even though, for other models such as Roberta, Albert and Xlnet the training loss remains the same. I finetune the models for CoLA accordingly:
python run_glue.py --data_dir=./glue_data/CoLA/ --model_type=roberta
--model_name_or_path=roberta-base --task_name=CoLA
--output_dir=./model_roberta/ --max_seq_len=128 --do_train --do_eval --num_train_epochs=3.0 --save_steps=50 --learning_rate=5e-5
## Observed behavior
(for Roberta-base)
"loss": 0.5889998215436936, "step": 50
"loss": 0.649243945479393, "step": 100
"loss": 0.6612952649593353, "step": 150
"loss": 0.6241107112169266, "step": 200
...
"loss": 0.6236384356021881, "step": 50
...
"loss": 0.6253059101104737, "step": 800
...
(I trained for more epochs (different lr )and still, the loss is near 0.5-0.6)
By debugging the code it seems that the model's output (softmax of logits) during training no matter what is the input is always label 1.
Another hint: I tried to finetune Roberta for other tasks such as STS-B and it finetuned well (loss was going down and the output of the model was not identical for all different inputs).
I was wondering if someone also has faced the same issue. How should I solve this?
OS type and version: Linux-3.10.0
Python: 3.7
Pytorch: 1.3.1
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2599/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2599/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2598 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2598/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2598/comments | https://api.github.com/repos/huggingface/transformers/issues/2598/events | https://github.com/huggingface/transformers/issues/2598 | 552,585,910 | MDU6SXNzdWU1NTI1ODU5MTA= | 2,598 | load tf2 roberta model meet error | {
"login": "bestpredicts",
"id": 12403152,
"node_id": "MDQ6VXNlcjEyNDAzMTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/12403152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bestpredicts",
"html_url": "https://github.com/bestpredicts",
"followers_url": "https://api.github.com/users/bestpredicts/followers",
"following_url": "https://api.github.com/users/bestpredicts/following{/other_user}",
"gists_url": "https://api.github.com/users/bestpredicts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bestpredicts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bestpredicts/subscriptions",
"organizations_url": "https://api.github.com/users/bestpredicts/orgs",
"repos_url": "https://api.github.com/users/bestpredicts/repos",
"events_url": "https://api.github.com/users/bestpredicts/events{/privacy}",
"received_events_url": "https://api.github.com/users/bestpredicts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I am facing the same issue, how did you resolve this? @bestpredicts ",
"Same issue with portuguese bert version\r\n",
"I had the same issue and found that this problem occurs because the default \"RobertaConfig\" is based on \"bert-base-uncased\" config, which is different from \"roberta-base\" config. The right way to initialize the model and configuration is (LysandreJik's solution):\r\n\r\n```python\r\nconfig = RobertaConfig.from_pretrained(\"roberta-base\", output_hidden_states=True)\r\nmodel = RobertaForSequenceClassification.from_pretrained(\"roberta-base\", config=config)\r\n```\r\n\r\nPlease refer to the similar issue:\r\n[#1627](https://github.com/huggingface/transformers/issues/1627)",
"This is annoying, one doesnt have to do this while setting config for TFDistilBert\r\n\r\nhttps://towardsdatascience.com/working-with-hugging-face-transformers-and-tf-2-0-89bf35e3555a"
] | 1,579 | 1,642 | 1,579 | NONE | null | ## β Questions & Help
`
config = RobertaConfig() # print(config) to see settings
config.output_hidden_states = False # Set to True to obtain hidden states
model = TFRobertaModel.from_pretrained('/home/wk/Bert_Pretrained/robert_base/roberta-base-tf_model.h5', config=config) `
errors
`ValueError Traceback (most recent call last)
<ipython-input-19-eac9e3228d6c> in <module>
1 config = RobertaConfig() # print(config) to see settings
2 config.output_hidden_states = False # Set to True to obtain hidden states
----> 3 model = TFRobertaModel.from_pretrained('/home/wk/Bert_Pretrained/robert_base/roberta-base-tf_model.h5', config=config)
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
315 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357
316 try:
--> 317 model.load_weights(resolved_archive_file, by_name=True)
318 except OSError:
319 raise OSError("Unable to load weights from h5 file. "
~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in load_weights(self, filepath, by_name)
179 raise ValueError('Load weights is not yet supported with TPUStrategy '
180 'with steps_per_run greater than 1.')
--> 181 return super(Model, self).load_weights(filepath, by_name)
182
183 @trackable.no_automatic_dependency_tracking
~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in load_weights(self, filepath, by_name)
1173 f = f['model_weights']
1174 if by_name:
-> 1175 saving.load_weights_from_hdf5_group_by_name(f, self.layers)
1176 else:
1177 saving.load_weights_from_hdf5_group(f, self.layers)
~/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group_by_name(f, layers)
758 symbolic_weights[i])) +
759 ', but the saved weight has shape ' +
--> 760 str(weight_values[i].shape) + '.')
761
762 else:
ValueError: Layer #0 (named "roberta"), weight <tf.Variable 'tf_roberta_model_5/roberta/embeddings/word_embeddings/weight:0' shape=(30522, 768) dtype=float32, numpy=
array([[-0.02175204, 0.01785859, -0.01712652, ..., 0.0088525 ,
-0.00240036, 0.01757819],
[ 0.01320856, 0.01548896, 0.0290868 , ..., -0.01266216,
0.00756532, -0.01283411],
[ 0.02433892, 0.00970818, -0.01082115, ..., 0.01121136,
0.01314066, 0.0088822 ],
...,
[-0.00798688, -0.03137787, -0.00074065, ..., 0.03188593,
0.02637535, 0.02540809],[ 0.01545427, -0.02784344, 0.01380141, ..., -0.02135191,
-0.01506698, -0.00579444],
[-0.01216899, 0.00676558, 0.01336646, ..., -0.00323554,
0.02038151, 0.02287306]], dtype=float32)> has shape (30522, 768), but the saved weight has shape (50265, 7)`
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2598/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2598/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2597 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2597/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2597/comments | https://api.github.com/repos/huggingface/transformers/issues/2597/events | https://github.com/huggingface/transformers/issues/2597 | 552,585,576 | MDU6SXNzdWU1NTI1ODU1NzY= | 2,597 | Transfer Learning on Text Summarization Model | {
"login": "imayachita",
"id": 3615586,
"node_id": "MDQ6VXNlcjM2MTU1ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3615586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imayachita",
"html_url": "https://github.com/imayachita",
"followers_url": "https://api.github.com/users/imayachita/followers",
"following_url": "https://api.github.com/users/imayachita/following{/other_user}",
"gists_url": "https://api.github.com/users/imayachita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imayachita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imayachita/subscriptions",
"organizations_url": "https://api.github.com/users/imayachita/orgs",
"repos_url": "https://api.github.com/users/imayachita/repos",
"events_url": "https://api.github.com/users/imayachita/events{/privacy}",
"received_events_url": "https://api.github.com/users/imayachita/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1845609017,
"node_id": "MDU6TGFiZWwxODQ1NjA5MDE3",
"url": "https://api.github.com/repos/huggingface/transformers/labels/seq2seq",
"name": "seq2seq",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"type": "User",
"site_admin": false
}
] | [
"@imayachita If your succeeded to use transfer learning on your own data, please update here.",
"`examples/summarization/bart/run_bart_sum.py` now exists :)"
] | 1,579 | 1,585 | 1,585 | NONE | null | Hi all,
Is there any way to do transfer learning on the Text Summarization model (bertabs-finetuned-cnndm)? I would like to continue training it on my dataset.
The code run_summarization.py only does prediction. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2597/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2597/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2596 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2596/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2596/comments | https://api.github.com/repos/huggingface/transformers/issues/2596/events | https://github.com/huggingface/transformers/issues/2596 | 552,511,882 | MDU6SXNzdWU1NTI1MTE4ODI= | 2,596 | changing the attention head size in MultiBert | {
"login": "mhajiaghayi",
"id": 28695943,
"node_id": "MDQ6VXNlcjI4Njk1OTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28695943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhajiaghayi",
"html_url": "https://github.com/mhajiaghayi",
"followers_url": "https://api.github.com/users/mhajiaghayi/followers",
"following_url": "https://api.github.com/users/mhajiaghayi/following{/other_user}",
"gists_url": "https://api.github.com/users/mhajiaghayi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhajiaghayi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhajiaghayi/subscriptions",
"organizations_url": "https://api.github.com/users/mhajiaghayi/orgs",
"repos_url": "https://api.github.com/users/mhajiaghayi/repos",
"events_url": "https://api.github.com/users/mhajiaghayi/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhajiaghayi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I got it working"
] | 1,579 | 1,584 | 1,584 | NONE | null | ## π Bug
<!-- Important information -->
I'm trying to use a MultiBERT model with not all 12 attention heads but just 8 attention heads. so, in config file, I changed the following keys
config.num_attention_heads = 8
config.hidden_size = 512
config.pooler_fc_size = 512
I assumed similar to layer size that we have flexibility in changing it, we may have a similar freedom in changing the head size. however, the run_xnli.py code throws the following error.
> size mismatch for bert.encoder.layer.11.output.LayerNorm.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in curr
> ent model is torch.Size([512]).
> size mismatch for bert.pooler.dense.weight: copying a param with shape torch.Size([768, 768]) from checkpoint, the shape in current model is t
> orch.Size([512, 512]).
> size mismatch for bert.pooler.dense.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Si
> ze([512]).
Model I am using MultiBERT.
Language I am using the model on (English, Chinese....):
The problem arise when using:
* [X ] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [X ] an official GLUE/SQUaD task: run_xnli.py
* [ ] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. get the run_xnli.py script and added the following config change codes
config.num_attention_heads = 8
config.hidden_size = 512
config.pooler_fc_size = 512
config.pooler_num_attention_heads = 8
## Environment
* OS: windows
* Python version: 3.6
* PyTorch version: 1.1
* PyTorch Transformers version (or branch): 2.2.1
* Using GPU ? yes
* Distributed or parallel setup ? parallel
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2596/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2596/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2595 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2595/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2595/comments | https://api.github.com/repos/huggingface/transformers/issues/2595/events | https://github.com/huggingface/transformers/issues/2595 | 552,449,169 | MDU6SXNzdWU1NTI0NDkxNjk= | 2,595 | RAM leakage when trying to retrieve the hidden states from the GPT-2 model. | {
"login": "h56cho",
"id": 52889259,
"node_id": "MDQ6VXNlcjUyODg5MjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/52889259?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h56cho",
"html_url": "https://github.com/h56cho",
"followers_url": "https://api.github.com/users/h56cho/followers",
"following_url": "https://api.github.com/users/h56cho/following{/other_user}",
"gists_url": "https://api.github.com/users/h56cho/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h56cho/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h56cho/subscriptions",
"organizations_url": "https://api.github.com/users/h56cho/orgs",
"repos_url": "https://api.github.com/users/h56cho/repos",
"events_url": "https://api.github.com/users/h56cho/events{/privacy}",
"received_events_url": "https://api.github.com/users/h56cho/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"To add more, I fixed my code like below:\r\n\r\n```python\r\n# for loop to calculate TVD\r\ndef TVD_loop(test_i, test_dummy_i, nlayer, best_model):\r\n \r\n TVD_tensor = torch.zeros(test_i.size()[1], (nlayer+1), test_i.size()[0]).float()\r\n \r\n # replace every 0's in TVD_tensor to -2\r\n TVD_tensor = torch.where(TVD_tensor == 0.0, torch.tensor(-2.0), TVD_tensor)\r\n \r\n for m in range(test_i.size()[1]):\r\n \r\n gc.collect()\r\n \r\n input_ids = test_i[:,m]\r\n input_ids = torch.tensor(input_ids.tolist()).unsqueeze(0) \r\n\r\n # NOTE: Hidden states are in torch.FloatTensor,\r\n # (one for the output of each layer + the output of the embeddings)\r\n # jth layer\r\n for j in range(nlayer+1):\r\n \r\n del gc.garbage[:]\r\n gc.collect()\r\n \r\n for l in range(m * test_i.size()[0], (m+1) * test_i.size()[0]):\r\n \r\n del gc.garbage[:]\r\n gc.collect()\r\n \r\n tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]\r\n \r\n input_ids_dummy = test_dummy_i[:,l]\r\n input_ids_dummy = torch.tensor(input_ids_dummy.tolist()).unsqueeze(0) \r\n \r\n tst_hidden_states_dummy = best_model(input_ids_dummy)[3][j][0, (test_i.size()[0] - 1), :]\r\n \r\n del input_ids_dummy\r\n del gc.garbage[:]\r\n gc.collect()\r\n \r\n # TVD_tensor[i,j,k] denotes for TVD calculated at \r\n # batch i, layer j, and dummy output k\r\n TVD_tensor[m,j,(l % (test_i.size()[0]))] = TVD(tst_hidden_states, tst_hidden_states_dummy)\r\n \r\n del tst_hidden_states\r\n del tst_hidden_states_dummy\r\n del gc.garbage[:]\r\n gc.collect()\r\n \r\n print('l={}, gc_get_count={}'.format(l,gc.get_count()))\r\n \r\n del gc.garbage[:]\r\n gc.collect()\r\n print('j={}, gc_get_count={}'.format(j,gc.get_count()))\r\n \r\n del gc.garbage[:]\r\n del input_ids\r\n gc.collect()\r\n \r\n print('m={}, gc_get_count={}'.format(m,gc.get_count()))\r\n \r\n return TVD_tensor \r\n```\r\n\r\nfrom the code above, when ```m=0, j=0, l=0```, everything is fine, but once ```m=0, j=1, l=0``` is reached, the memory usage starts to accumulate rapidly. How should I fix my code?",
"Did you try detaching the gradient from the hidden states by replacing \r\n`\r\ntst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]\r\n`\r\nwith\r\n`\r\ntst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :].detach()\r\n`\r\n?\r\nThe gradient needed for backpropagation usually consumes a lot of RAM and is probably not needed in your case.\r\n\r\n> ",
"> Did you try detaching the gradient from the hidden states by replacing\r\n> `tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]`\r\n> with\r\n> `tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :].detach()`\r\n> ?\r\n> The gradient needed for backpropagation usually consumes a lot of RAM and is probably not needed in your case.\r\n> \r\n> >\r\n\r\nDetaching _before_ the slice might be even better? Not sure.",
"Thank you! .detach() solved this RAM leakage issue :)"
] | 1,579 | 1,579 | 1,579 | NONE | null | Hello,
I am trying to retrieve hidden state vectors from my trained GPT-2 model from a loop, and there is a huge RAM leakage associated with the operation. Below are my code:
```python
# for loop to calculate TVD
def TVD_loop(test_i, test_dummy_i, nlayer, best_model):
TVD_tensor = torch.zeros(test_i.size()[1], (nlayer+1), test_i.size()[0]).float()
# replace every 0's in TVD_tensor to -2
TVD_tensor = torch.where(TVD_tensor == 0.0, torch.tensor(-2.0), TVD_tensor)
for m in range(test_i.size()[1]):
input_ids = test_i[:,m]
input_ids = torch.tensor(input_ids.tolist()).unsqueeze(0)
# NOTE: Hidden states are in torch.FloatTensor,
# (one for the output of each layer + the output of the embeddings)
# jth layer
for j in range(nlayer+1):
tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]
for l in range(m * test_i.size()[0], (m+1) * test_i.size()[0]):
input_ids_dummy = test_dummy_i[:,l]
input_ids_dummy = torch.tensor(input_ids_dummy.tolist()).unsqueeze(0)
tst_hidden_states_dummy = best_model(input_ids_dummy)[3][j][0, (test_i.size()[0] - 1), :]
# TVD_tensor[i,j,k] denotes for TVDC calculated at
# batch i, layer j, and dummy output k
TVD_tensor[m,j,(l % (test_i.size()[0]))] = TVD(tst_hidden_states, tst_hidden_states_dummy)
return TVD_tensor
```
I have about ~400GB of RAM, but each time the hidden state vector is retrieved (e.g. ```tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]```), it uses up about ~2GB of RAM. How can I prevent this? Thank you,
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2595/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2595/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2594 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2594/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2594/comments | https://api.github.com/repos/huggingface/transformers/issues/2594/events | https://github.com/huggingface/transformers/pull/2594 | 552,386,364 | MDExOlB1bGxSZXF1ZXN0MzY0OTA0NzQz | 2,594 | edited a way to get at AlbertAttention.forward | {
"login": "jinkilee",
"id": 6321520,
"node_id": "MDQ6VXNlcjYzMjE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6321520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinkilee",
"html_url": "https://github.com/jinkilee",
"followers_url": "https://api.github.com/users/jinkilee/followers",
"following_url": "https://api.github.com/users/jinkilee/following{/other_user}",
"gists_url": "https://api.github.com/users/jinkilee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinkilee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinkilee/subscriptions",
"organizations_url": "https://api.github.com/users/jinkilee/orgs",
"repos_url": "https://api.github.com/users/jinkilee/repos",
"events_url": "https://api.github.com/users/jinkilee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinkilee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"will commit with another pull request"
] | 1,579 | 1,579 | 1,579 | NONE | null | '''
# Should find a better way to do this
w = (
self.dense.weight.t()
.view(self.num_attention_heads, self.attention_head_size, self.hidden_size)
.to(context_layer.dtype)
)
b = self.dense.bias.to(context_layer.dtype)
'''
I thought the above code is not necessary.
it can be simply fixed by "merging" `context_layer` at forward().
I committed my "merging" function at AlbertAttention
'''
def merge_tensor(self, x):
s = x.size()[-2]
return torch.cat([x[:,:,i,:] for i in range(s)], dim=-1)
'''
I wanted to make a test by "make test' as described in CONTRIBUTING.md
but I couldn't because i faced some make error.
This is my first open source contribution. If i forgot something, please let me know, so I can fix and follow up. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2594/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2594/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2594",
"html_url": "https://github.com/huggingface/transformers/pull/2594",
"diff_url": "https://github.com/huggingface/transformers/pull/2594.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2594.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2593 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2593/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2593/comments | https://api.github.com/repos/huggingface/transformers/issues/2593/events | https://github.com/huggingface/transformers/pull/2593 | 552,379,217 | MDExOlB1bGxSZXF1ZXN0MzY0ODk4OTM1 | 2,593 | Added custom model dir to PPLM train | {
"login": "ashirviskas",
"id": 11985242,
"node_id": "MDQ6VXNlcjExOTg1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/11985242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashirviskas",
"html_url": "https://github.com/ashirviskas",
"followers_url": "https://api.github.com/users/ashirviskas/followers",
"following_url": "https://api.github.com/users/ashirviskas/following{/other_user}",
"gists_url": "https://api.github.com/users/ashirviskas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashirviskas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashirviskas/subscriptions",
"organizations_url": "https://api.github.com/users/ashirviskas/orgs",
"repos_url": "https://api.github.com/users/ashirviskas/repos",
"events_url": "https://api.github.com/users/ashirviskas/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashirviskas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | Just an option to save the model to other than the working directory.
Default functionality hasn't changed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2593/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2593/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2593",
"html_url": "https://github.com/huggingface/transformers/pull/2593",
"diff_url": "https://github.com/huggingface/transformers/pull/2593.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2593.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2592 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2592/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2592/comments | https://api.github.com/repos/huggingface/transformers/issues/2592/events | https://github.com/huggingface/transformers/issues/2592 | 552,243,924 | MDU6SXNzdWU1NTIyNDM5MjQ= | 2,592 | RuntimeError: The expanded size of the tensor (449) must match the existing size (2) at non-singleton dimension 2. Target sizes: [4, 2, 449]. Tensor sizes: [1, 2] while using ALBERT | {
"login": "hasnain2808",
"id": 28212972,
"node_id": "MDQ6VXNlcjI4MjEyOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/28212972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasnain2808",
"html_url": "https://github.com/hasnain2808",
"followers_url": "https://api.github.com/users/hasnain2808/followers",
"following_url": "https://api.github.com/users/hasnain2808/following{/other_user}",
"gists_url": "https://api.github.com/users/hasnain2808/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasnain2808/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasnain2808/subscriptions",
"organizations_url": "https://api.github.com/users/hasnain2808/orgs",
"repos_url": "https://api.github.com/users/hasnain2808/repos",
"events_url": "https://api.github.com/users/hasnain2808/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasnain2808/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"What is the code that you are executing that leads to this error? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"How did you fix the problem?"
] | 1,579 | 1,620 | 1,587 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
I wanted to use ALBERT with a double head
as we have one for openaigpt with the name OpenAIGPTDoubleHeadsModel
I am doing it with taking inspiration from OpenAIGPTDoubleHeadsModel
but I am getting this error
` File "train.py", line 266, in <module>
train()
File "train.py", line 258, in train
trainer.run(train_loader, max_epochs=args.n_epochs)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/ignite/engine/engine.py", line 446, in run
self._handle_exception(e)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/ignite/engine/engine.py", line 410, in _handle_exception
raise e
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/ignite/engine/engine.py", line 433, in run
hours, mins, secs = self._run_once_on_dataset()
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/ignite/engine/engine.py", line 399, in _run_once_on_dataset
self._handle_exception(e)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/ignite/engine/engine.py", line 410, in _handle_exception
raise e
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/ignite/engine/engine.py", line 391, in _run_once_on_dataset
self.state.output = self._process_function(self, batch)
File "train.py", line 180, in update
mc_labels=mc_labels, lm_labels=lm_labels
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/transformers/modeling_albert.py", line 956, in forward
inputs_embeds=inputs_embeds)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/transformers/modeling_albert.py", line 499, in forward
inputs_embeds=inputs_embeds)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/moha/venv/huggtrans/lib/python3.6/site-packages/transformers/modeling_bert.py", line 171, in forward
position_ids = position_ids.unsqueeze(0).expand(input_shape)
RuntimeError: The expanded size of the tensor (449) must match the existing size (2) at non-singleton dimension 2. Target sizes: [4, 2, 449]. Tensor sizes: [1, 2]`
the code block for AlbertDoubleHeadsModel
`class AlbertDoubleHeadsModel(AlbertPreTrainedModel):`
` def __init__(self, config):`
` super(AlbertDoubleHeadsModel, self).__init__(config)`
self.albert = AlbertModel(config)
self.lm_head = nn.Linear(config.embedding_size, config.vocab_size, bias=False)
self.multiple_choice_head = SequenceSummary(config)
self.init_weights()
def get_output_embeddings(self):
return self.lm_head
def forward(self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None,
mc_token_ids=None, lm_labels=None, mc_labels=None):
transformer_outputs = self.albert(input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds)
hidden_states = transformer_outputs[0]
lm_logits = self.lm_head(hidden_states)
mc_logits = self.multiple_choice_head(hidden_states, mc_token_ids).squeeze(-1)
outputs = (lm_logits, mc_logits) + transformer_outputs[1:]
if mc_labels is not None:
loss_fct = CrossEntropyLoss()
loss = loss_fct(mc_logits.view(-1, mc_logits.size(-1)),
mc_labels.view(-1))
outputs = (loss,) + outputs
if lm_labels is not None:
shift_logits = lm_logits[..., :-1, :].contiguous()
shift_labels = lm_labels[..., 1:].contiguous()
loss_fct = CrossEntropyLoss(ignore_index=-1)
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)),
shift_labels.view(-1))
outputs = (loss,) + outputs
return outputs # (lm loss), (mc loss), lm logits, mc logits, (all hidden_states), (attentions)`
Is there anything that I am missing?
Please do tell me.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2592/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2592/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2591 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2591/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2591/comments | https://api.github.com/repos/huggingface/transformers/issues/2591/events | https://github.com/huggingface/transformers/issues/2591 | 552,161,685 | MDU6SXNzdWU1NTIxNjE2ODU= | 2,591 | What is the f1 score of Squad v2.0 on bert-base? I only got f1 score 74.78. | {
"login": "YJYJLee",
"id": 28900943,
"node_id": "MDQ6VXNlcjI4OTAwOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28900943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YJYJLee",
"html_url": "https://github.com/YJYJLee",
"followers_url": "https://api.github.com/users/YJYJLee/followers",
"following_url": "https://api.github.com/users/YJYJLee/following{/other_user}",
"gists_url": "https://api.github.com/users/YJYJLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YJYJLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YJYJLee/subscriptions",
"organizations_url": "https://api.github.com/users/YJYJLee/orgs",
"repos_url": "https://api.github.com/users/YJYJLee/repos",
"events_url": "https://api.github.com/users/YJYJLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/YJYJLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Your result is fine. This [poster](https://web.stanford.edu/class/cs224n/posters/15848021.pdf) says that they achieved **76.70**. Maybe you can get there as well when you train for 2 more epochs.",
"Thank you for your reply! :)",
"Please close the question if the answer suits your needs.",
"Sorry, I closed it!"
] | 1,579 | 1,579 | 1,579 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
Hello, I am doing some experiment of squad v2.0 on bert-base (NOT bert-large).
According to the BERT paper, bert-large achieves f1 score 81.9 with squad v2.0.
Since I couldn't find the official result for bert-base, I am not sure if I am getting the right f1 score.
Has anyone tried running squad v2.0 on bert base?
I got f1 score **74.78** for squad v2.0 result on bert-base, using below command:
sudo python3 ../../../run_squad.py \
--model_type bert \
--model_name_or_path bert-base-cased \
--do_train \
--do_eval \
--train_file $SQUAD2_DIR/train-v2.0.json \
--predict_file $SQUAD2_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 4 \
--learning_rate 4e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--version_2_with_negative \
--overwrite_output_dir \
--output_dir ../../../bert_base/$TASK_NAME/ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2591/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2591/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2590 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2590/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2590/comments | https://api.github.com/repos/huggingface/transformers/issues/2590/events | https://github.com/huggingface/transformers/issues/2590 | 552,092,601 | MDU6SXNzdWU1NTIwOTI2MDE= | 2,590 | run_glue.py, CoLA : MCC goes to 0, in some hyperparameter cases | {
"login": "drcdr",
"id": 18181131,
"node_id": "MDQ6VXNlcjE4MTgxMTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/18181131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drcdr",
"html_url": "https://github.com/drcdr",
"followers_url": "https://api.github.com/users/drcdr/followers",
"following_url": "https://api.github.com/users/drcdr/following{/other_user}",
"gists_url": "https://api.github.com/users/drcdr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drcdr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drcdr/subscriptions",
"organizations_url": "https://api.github.com/users/drcdr/orgs",
"repos_url": "https://api.github.com/users/drcdr/repos",
"events_url": "https://api.github.com/users/drcdr/events{/privacy}",
"received_events_url": "https://api.github.com/users/drcdr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using: roberta-large
Language I am using the model on: English
The problem arise when using:
* [x] the official example scripts: run_glue.py
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: CoLA
## To Reproduce
Steps to reproduce the behavior:
1. Running the following command is one way to create an mcc of 0. Possible 'non-standard' items here are: LR up to 3e-5, different seeds, warmup, and evaluate-during-training.
```
python run_glue_orig.py --model_type roberta --model_name_or_path roberta-large --task_name CoLA --do_train --do_eval --do_lower_case --evaluate_during_training --logging_steps 50 --save_steps 1000000 --data_dir /home/_DBS/GLUE/CoLA --max_seq_length 128 --per_gpu_eval_batch_size 8 --per_gpu_train_batch_size 8 --learning_rate 3e-5 --num_train_epochs 10 --warmup_steps 641 --output_dir try_fail_1 --seed 3
```
For reference, the args are:
```
Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='/home/_DBS/GLUE/CoLA', do_eval=True, do_lower_case=True, do_train=True, eval_all_checkpoints=False, evaluate_during_training=True, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, learning_rate=3e-05, local_rank=-1, logging_steps=50, max_grad_norm=1.0, max_seq_length=128, max_steps=-1, model_name_or_path='roberta-large', model_type='roberta', no_cuda=False, num_train_epochs=10.0, output_dir='try_fail_1', overwrite_cache=False, overwrite_output_dir=False, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=8, save_steps=1000000, seed=3, server_ip='', server_port='', task_name='CoLA', tokenizer_name='', warmup_steps=641, weight_decay=0.0)
```
## Expected behavior
Non-zero mcc values for CoLA, after warmup, even for LR = 3e-5. (Agree?)
## Environment
* OS: Ubuntu 18.04 Platform Linux-4.15.0-74-generic-x86_64-with-debian-buster-sid
* Python version: Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
* PyTorch version: 1.2.0.dev20190702 py3.6_cuda10.0.130_cudnn7.5.1_0
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU : TitanXP, qty. 1
* Distributed or parallel setup : no
* Any other relevant information:
## Additional context
* I am trying to reproduce the CoLA results for BERT and RoBERTa. I've built a test harness that calls a lightly modified version of run_glue.py, following the approach of paragraph 5.1 [here](https://arxiv.org/pdf/1907.11692.pdf).
* Once I saw the mcc=0 values in my version, I went back and reproduced the problem in the standard run_glue.py version.
* I have not seen this (mcc->0) behavior when running the hyperparameter search on bert-base-cased, or roberta-base. Just on roberta-large.
* The output of the standard run_glue.py is too large to include here, but here is the output from my version, showing what mcc looks like during the run, with the same parameters as above (you need to scroll to the right to see the mcc values):
```
2020-01-19 13:14:10 Ep 0/10 Gstep 50 Step 49: Loss= 0.680712 LR=2.340094e-06 mcc= 0.000000,
2020-01-19 13:14:45 Ep 0/10 Gstep 100 Step 99: Loss= 0.610892 LR=4.680187e-06 mcc= 0.000000,
2020-01-19 13:15:19 Ep 0/10 Gstep 150 Step 149: Loss= 0.627466 LR=7.020281e-06 mcc= 0.000000,
2020-01-19 13:15:54 Ep 0/10 Gstep 200 Step 199: Loss= 0.603764 LR=9.360374e-06 mcc= 0.000000,
2020-01-19 13:16:28 Ep 0/10 Gstep 250 Step 249: Loss= 0.576851 LR=1.170047e-05 mcc= 0.222055,
2020-01-19 13:17:03 Ep 0/10 Gstep 300 Step 299: Loss= 0.581703 LR=1.404056e-05 mcc= 0.046356,
2020-01-19 13:17:37 Ep 0/10 Gstep 350 Step 349: Loss= 0.580515 LR=1.638066e-05 mcc= 0.176324,
2020-01-19 13:18:12 Ep 0/10 Gstep 400 Step 399: Loss= 0.499536 LR=1.872075e-05 mcc= 0.452767,
2020-01-19 13:18:46 Ep 0/10 Gstep 450 Step 449: Loss= 0.586593 LR=2.106084e-05 mcc= 0.459277,
2020-01-19 13:19:21 Ep 0/10 Gstep 500 Step 499: Loss= 0.619785 LR=2.340094e-05 mcc= 0.465695,
2020-01-19 13:19:55 Ep 0/10 Gstep 550 Step 549: Loss= 0.554341 LR=2.574103e-05 mcc= 0.480823,
2020-01-19 13:20:29 Ep 0/10 Gstep 600 Step 599: Loss= 0.573441 LR=2.808112e-05 mcc= 0.272740,
2020-01-19 13:21:04 Ep 0/10 Gstep 650 Step 649: Loss= 0.581185 LR=2.997313e-05 mcc= 0.347692,
2020-01-19 13:21:38 Ep 0/10 Gstep 700 Step 699: Loss= 0.662212 LR=2.982386e-05 mcc= 0.000000,
2020-01-19 13:22:13 Ep 0/10 Gstep 750 Step 749: Loss= 0.601005 LR=2.967459e-05 mcc= 0.000000,
2020-01-19 13:22:47 Ep 0/10 Gstep 800 Step 799: Loss= 0.648288 LR=2.952533e-05 mcc= 0.000000,
2020-01-19 13:23:22 Ep 0/10 Gstep 850 Step 849: Loss= 0.637533 LR=2.937606e-05 mcc= 0.000000,
2020-01-19 13:23:56 Ep 0/10 Gstep 900 Step 899: Loss= 0.633061 LR=2.922679e-05 mcc= 0.000000,
2020-01-19 13:24:30 Ep 0/10 Gstep 950 Step 949: Loss= 0.647681 LR=2.907752e-05 mcc= 0.000000,
2020-01-19 13:25:05 Ep 0/10 Gstep 1000 Step 999: Loss= 0.608788 LR=2.892825e-05 mcc= 0.000000,
2020-01-19 13:25:39 Ep 0/10 Gstep 1050 Step 1049: Loss= 0.620082 LR=2.877898e-05 mcc= 0.000000,
2020-01-19 13:26:14 Ep 1/10 Gstep 1100 Step 30: Loss= 0.604326 LR=2.862971e-05 mcc= 0.000000,
2020-01-19 13:26:48 Ep 1/10 Gstep 1150 Step 80: Loss= 0.599047 LR=2.848045e-05 mcc= 0.000000,
2020-01-19 13:27:22 Ep 1/10 Gstep 1200 Step 130: Loss= 0.635962 LR=2.833118e-05 mcc= 0.000000,
2020-01-19 13:27:57 Ep 1/10 Gstep 1250 Step 180: Loss= 0.576796 LR=2.818191e-05 mcc= 0.000000,
2020-01-19 13:28:31 Ep 1/10 Gstep 1300 Step 230: Loss= 0.627478 LR=2.803264e-05 mcc= 0.000000,
2020-01-19 13:29:06 Ep 1/10 Gstep 1350 Step 280: Loss= 0.584176 LR=2.788337e-05 mcc= 0.000000,
2020-01-19 13:29:40 Ep 1/10 Gstep 1400 Step 330: Loss= 0.604549 LR=2.773410e-05 mcc= 0.000000,
2020-01-19 13:30:14 Ep 1/10 Gstep 1450 Step 380: Loss= 0.619358 LR=2.758483e-05 mcc= 0.000000,
2020-01-19 13:30:49 Ep 1/10 Gstep 1500 Step 430: Loss= 0.582581 LR=2.743557e-05 mcc= 0.000000,
2020-01-19 13:31:23 Ep 1/10 Gstep 1550 Step 480: Loss= 0.599344 LR=2.728630e-05 mcc= 0.000000,
2020-01-19 13:31:58 Ep 1/10 Gstep 1600 Step 530: Loss= 0.628334 LR=2.713703e-05 mcc= 0.000000,
2020-01-19 13:32:32 Ep 1/10 Gstep 1650 Step 580: Loss= 0.619170 LR=2.698776e-05 mcc= 0.000000,
2020-01-19 13:33:07 Ep 1/10 Gstep 1700 Step 630: Loss= 0.647814 LR=2.683849e-05 mcc= 0.000000,
2020-01-19 13:33:41 Ep 1/10 Gstep 1750 Step 680: Loss= 0.573997 LR=2.668922e-05 mcc= 0.000000,
2020-01-19 13:34:16 Ep 1/10 Gstep 1800 Step 730: Loss= 0.591928 LR=2.653995e-05 mcc= 0.000000,
2020-01-19 13:34:50 Ep 1/10 Gstep 1850 Step 780: Loss= 0.636912 LR=2.639069e-05 mcc= 0.000000,
2020-01-19 13:35:25 Ep 1/10 Gstep 1900 Step 830: Loss= 0.647883 LR=2.624142e-05 mcc= 0.000000,
2020-01-19 13:35:59 Ep 1/10 Gstep 1950 Step 880: Loss= 0.612844 LR=2.609215e-05 mcc= 0.000000,
2020-01-19 13:36:34 Ep 1/10 Gstep 2000 Step 930: Loss= 0.661776 LR=2.594288e-05 mcc= 0.000000,
2020-01-19 13:37:08 Ep 1/10 Gstep 2050 Step 980: Loss= 0.648593 LR=2.579361e-05 mcc= 0.000000,
2020-01-19 13:37:42 Ep 1/10 Gstep 2100 Step 1030: Loss= 0.628153 LR=2.564434e-05 mcc= 0.000000,
2020-01-19 13:38:17 Ep 2/10 Gstep 2150 Step 11: Loss= 0.589748 LR=2.549507e-05 mcc= 0.000000,
2020-01-19 13:38:51 Ep 2/10 Gstep 2200 Step 61: Loss= 0.583273 LR=2.534581e-05 mcc= 0.000000,
2020-01-19 13:39:26 Ep 2/10 Gstep 2250 Step 111: Loss= 0.631948 LR=2.519654e-05 mcc= 0.000000,
2020-01-19 13:40:00 Ep 2/10 Gstep 2300 Step 161: Loss= 0.628461 LR=2.504727e-05 mcc= 0.000000,
2020-01-19 13:40:35 Ep 2/10 Gstep 2350 Step 211: Loss= 0.644976 LR=2.489800e-05 mcc= 0.000000,
2020-01-19 13:41:09 Ep 2/10 Gstep 2400 Step 261: Loss= 0.612494 LR=2.474873e-05 mcc= 0.000000,
2020-01-19 13:41:44 Ep 2/10 Gstep 2450 Step 311: Loss= 0.628107 LR=2.459946e-05 mcc= 0.000000,
2020-01-19 13:42:18 Ep 2/10 Gstep 2500 Step 361: Loss= 0.663605 LR=2.445019e-05 mcc= 0.000000,
2020-01-19 13:42:52 Ep 2/10 Gstep 2550 Step 411: Loss= 0.586811 LR=2.430093e-05 mcc= 0.000000,
2020-01-19 13:43:27 Ep 2/10 Gstep 2600 Step 461: Loss= 0.620628 LR=2.415166e-05 mcc= 0.000000,
2020-01-19 13:44:01 Ep 2/10 Gstep 2650 Step 511: Loss= 0.608283 LR=2.400239e-05 mcc= 0.000000,
2020-01-19 13:44:36 Ep 2/10 Gstep 2700 Step 561: Loss= 0.620366 LR=2.385312e-05 mcc= 0.000000,
2020-01-19 13:45:10 Ep 2/10 Gstep 2750 Step 611: Loss= 0.631265 LR=2.370385e-05 mcc= 0.000000,
2020-01-19 13:45:44 Ep 2/10 Gstep 2800 Step 661: Loss= 0.636871 LR=2.355458e-05 mcc= 0.000000,
2020-01-19 13:46:19 Ep 2/10 Gstep 2850 Step 711: Loss= 0.639280 LR=2.340531e-05 mcc= 0.000000,
2020-01-19 13:46:53 Ep 2/10 Gstep 2900 Step 761: Loss= 0.603878 LR=2.325605e-05 mcc= 0.000000,
2020-01-19 13:47:28 Ep 2/10 Gstep 2950 Step 811: Loss= 0.647503 LR=2.310678e-05 mcc= 0.000000,
2020-01-19 13:48:02 Ep 2/10 Gstep 3000 Step 861: Loss= 0.580940 LR=2.295751e-05 mcc= 0.000000,
2020-01-19 13:48:37 Ep 2/10 Gstep 3050 Step 911: Loss= 0.612520 LR=2.280824e-05 mcc= 0.000000,
2020-01-19 13:49:11 Ep 2/10 Gstep 3100 Step 961: Loss= 0.607909 LR=2.265897e-05 mcc= 0.000000,
2020-01-19 13:49:46 Ep 2/10 Gstep 3150 Step 1011: Loss= 0.596036 LR=2.250970e-05 mcc= 0.000000,
2020-01-19 13:50:20 Ep 2/10 Gstep 3200 Step 1061: Loss= 0.597132 LR=2.236043e-05 mcc= 0.000000,
2020-01-19 13:50:55 Ep 3/10 Gstep 3250 Step 42: Loss= 0.611773 LR=2.221117e-05 mcc= 0.000000,
2020-01-19 13:51:29 Ep 3/10 Gstep 3300 Step 92: Loss= 0.605382 LR=2.206190e-05 mcc= 0.000000,
2020-01-19 13:52:03 Ep 3/10 Gstep 3350 Step 142: Loss= 0.619225 LR=2.191263e-05 mcc= 0.000000,
2020-01-19 13:52:38 Ep 3/10 Gstep 3400 Step 192: Loss= 0.628465 LR=2.176336e-05 mcc= 0.000000,
2020-01-19 13:53:12 Ep 3/10 Gstep 3450 Step 242: Loss= 0.626908 LR=2.161409e-05 mcc= 0.000000,
2020-01-19 13:53:47 Ep 3/10 Gstep 3500 Step 292: Loss= 0.636135 LR=2.146482e-05 mcc= 0.000000,
2020-01-19 13:54:21 Ep 3/10 Gstep 3550 Step 342: Loss= 0.624727 LR=2.131555e-05 mcc= 0.000000,
2020-01-19 13:54:55 Ep 3/10 Gstep 3600 Step 392: Loss= 0.620360 LR=2.116629e-05 mcc= 0.000000,
2020-01-19 13:55:30 Ep 3/10 Gstep 3650 Step 442: Loss= 0.578948 LR=2.101702e-05 mcc= 0.000000,
2020-01-19 13:56:04 Ep 3/10 Gstep 3700 Step 492: Loss= 0.644757 LR=2.086775e-05 mcc= 0.000000,
2020-01-19 13:56:39 Ep 3/10 Gstep 3750 Step 542: Loss= 0.599062 LR=2.071848e-05 mcc= 0.000000,
2020-01-19 13:57:13 Ep 3/10 Gstep 3800 Step 592: Loss= 0.623814 LR=2.056921e-05 mcc= 0.000000,
2020-01-19 13:57:48 Ep 3/10 Gstep 3850 Step 642: Loss= 0.607594 LR=2.041994e-05 mcc= 0.000000,
2020-01-19 13:58:22 Ep 3/10 Gstep 3900 Step 692: Loss= 0.636492 LR=2.027067e-05 mcc= 0.000000,
2020-01-19 13:58:57 Ep 3/10 Gstep 3950 Step 742: Loss= 0.596911 LR=2.012141e-05 mcc= 0.000000,
2020-01-19 13:59:31 Ep 3/10 Gstep 4000 Step 792: Loss= 0.615585 LR=1.997214e-05 mcc= 0.000000,
2020-01-19 14:00:06 Ep 3/10 Gstep 4050 Step 842: Loss= 0.583996 LR=1.982287e-05 mcc= 0.000000,
2020-01-19 14:00:40 Ep 3/10 Gstep 4100 Step 892: Loss= 0.616359 LR=1.967360e-05 mcc= 0.000000,
2020-01-19 14:01:15 Ep 3/10 Gstep 4150 Step 942: Loss= 0.610478 LR=1.952433e-05 mcc= 0.000000,
2020-01-19 14:01:49 Ep 3/10 Gstep 4200 Step 992: Loss= 0.604057 LR=1.937506e-05 mcc= 0.000000,
2020-01-19 14:02:23 Ep 3/10 Gstep 4250 Step 1042: Loss= 0.646733 LR=1.922579e-05 mcc= 0.000000,
2020-01-19 14:02:58 Ep 4/10 Gstep 4300 Step 23: Loss= 0.603115 LR=1.907653e-05 mcc= 0.000000,
2020-01-19 14:03:32 Ep 4/10 Gstep 4350 Step 73: Loss= 0.569349 LR=1.892726e-05 mcc= 0.000000,
2020-01-19 14:04:07 Ep 4/10 Gstep 4400 Step 123: Loss= 0.603698 LR=1.877799e-05 mcc= 0.000000,
2020-01-19 14:04:41 Ep 4/10 Gstep 4450 Step 173: Loss= 0.616832 LR=1.862872e-05 mcc= 0.000000,
2020-01-19 14:05:15 Ep 4/10 Gstep 4500 Step 223: Loss= 0.638386 LR=1.847945e-05 mcc= 0.000000,
2020-01-19 14:05:50 Ep 4/10 Gstep 4550 Step 273: Loss= 0.587255 LR=1.833018e-05 mcc= 0.000000,
2020-01-19 14:06:24 Ep 4/10 Gstep 4600 Step 323: Loss= 0.616687 LR=1.818091e-05 mcc= 0.000000,
2020-01-19 14:06:59 Ep 4/10 Gstep 4650 Step 373: Loss= 0.593398 LR=1.803164e-05 mcc= 0.000000,
2020-01-19 14:07:33 Ep 4/10 Gstep 4700 Step 423: Loss= 0.612735 LR=1.788238e-05 mcc= 0.000000,
2020-01-19 14:08:07 Ep 4/10 Gstep 4750 Step 473: Loss= 0.612634 LR=1.773311e-05 mcc= 0.000000,
2020-01-19 14:08:42 Ep 4/10 Gstep 4800 Step 523: Loss= 0.597561 LR=1.758384e-05 mcc= 0.000000,
2020-01-19 14:09:16 Ep 4/10 Gstep 4850 Step 573: Loss= 0.630737 LR=1.743457e-05 mcc= 0.000000,
2020-01-19 14:09:51 Ep 4/10 Gstep 4900 Step 623: Loss= 0.639448 LR=1.728530e-05 mcc= 0.000000,
2020-01-19 14:10:25 Ep 4/10 Gstep 4950 Step 673: Loss= 0.607357 LR=1.713603e-05 mcc= 0.000000,
2020-01-19 14:11:00 Ep 4/10 Gstep 5000 Step 723: Loss= 0.601610 LR=1.698676e-05 mcc= 0.000000,
2020-01-19 14:11:34 Ep 4/10 Gstep 5050 Step 773: Loss= 0.604433 LR=1.683750e-05 mcc= 0.000000,
2020-01-19 14:12:08 Ep 4/10 Gstep 5100 Step 823: Loss= 0.643115 LR=1.668823e-05 mcc= 0.000000,
2020-01-19 14:12:43 Ep 4/10 Gstep 5150 Step 873: Loss= 0.639950 LR=1.653896e-05 mcc= 0.000000,
2020-01-19 14:13:17 Ep 4/10 Gstep 5200 Step 923: Loss= 0.638993 LR=1.638969e-05 mcc= 0.000000,
2020-01-19 14:13:51 Ep 4/10 Gstep 5250 Step 973: Loss= 0.633510 LR=1.624042e-05 mcc= 0.000000,
2020-01-19 14:14:26 Ep 4/10 Gstep 5300 Step 1023: Loss= 0.581198 LR=1.609115e-05 mcc= 0.000000,
2020-01-19 14:15:00 Ep 5/10 Gstep 5350 Step 4: Loss= 0.592722 LR=1.594188e-05 mcc= 0.000000,
2020-01-19 14:15:34 Ep 5/10 Gstep 5400 Step 54: Loss= 0.614371 LR=1.579262e-05 mcc= 0.000000,
2020-01-19 14:16:09 Ep 5/10 Gstep 5450 Step 104: Loss= 0.607973 LR=1.564335e-05 mcc= 0.000000,
2020-01-19 14:16:43 Ep 5/10 Gstep 5500 Step 154: Loss= 0.605945 LR=1.549408e-05 mcc= 0.000000,
2020-01-19 14:17:18 Ep 5/10 Gstep 5550 Step 204: Loss= 0.620083 LR=1.534481e-05 mcc= 0.000000,
2020-01-19 14:17:52 Ep 5/10 Gstep 5600 Step 254: Loss= 0.625456 LR=1.519554e-05 mcc= 0.000000,
2020-01-19 14:18:27 Ep 5/10 Gstep 5650 Step 304: Loss= 0.633456 LR=1.504627e-05 mcc= 0.000000,
2020-01-19 14:19:01 Ep 5/10 Gstep 5700 Step 354: Loss= 0.608953 LR=1.489700e-05 mcc= 0.000000,
2020-01-19 14:19:35 Ep 5/10 Gstep 5750 Step 404: Loss= 0.590640 LR=1.474774e-05 mcc= 0.000000,
2020-01-19 14:20:10 Ep 5/10 Gstep 5800 Step 454: Loss= 0.608632 LR=1.459847e-05 mcc= 0.000000,
2020-01-19 14:20:44 Ep 5/10 Gstep 5850 Step 504: Loss= 0.615661 LR=1.444920e-05 mcc= 0.000000,
2020-01-19 14:21:19 Ep 5/10 Gstep 5900 Step 554: Loss= 0.602201 LR=1.429993e-05 mcc= 0.000000,
2020-01-19 14:21:53 Ep 5/10 Gstep 5950 Step 604: Loss= 0.593200 LR=1.415066e-05 mcc= 0.000000,
2020-01-19 14:22:27 Ep 5/10 Gstep 6000 Step 654: Loss= 0.623690 LR=1.400139e-05 mcc= 0.000000,
2020-01-19 14:23:02 Ep 5/10 Gstep 6050 Step 704: Loss= 0.608412 LR=1.385212e-05 mcc= 0.000000,
2020-01-19 14:23:36 Ep 5/10 Gstep 6100 Step 754: Loss= 0.637673 LR=1.370286e-05 mcc= 0.000000,
2020-01-19 14:24:11 Ep 5/10 Gstep 6150 Step 804: Loss= 0.606503 LR=1.355359e-05 mcc= 0.000000,
2020-01-19 14:24:45 Ep 5/10 Gstep 6200 Step 854: Loss= 0.575204 LR=1.340432e-05 mcc= 0.000000,
2020-01-19 14:25:19 Ep 5/10 Gstep 6250 Step 904: Loss= 0.619873 LR=1.325505e-05 mcc= 0.000000,
2020-01-19 14:25:54 Ep 5/10 Gstep 6300 Step 954: Loss= 0.587598 LR=1.310578e-05 mcc= 0.000000,
2020-01-19 14:26:28 Ep 5/10 Gstep 6350 Step 1004: Loss= 0.637348 LR=1.295651e-05 mcc= 0.000000,
2020-01-19 14:27:03 Ep 5/10 Gstep 6400 Step 1054: Loss= 0.613608 LR=1.280724e-05 mcc= 0.000000,
2020-01-19 14:27:37 Ep 6/10 Gstep 6450 Step 35: Loss= 0.631628 LR=1.265798e-05 mcc= 0.000000,
2020-01-19 14:28:12 Ep 6/10 Gstep 6500 Step 85: Loss= 0.640207 LR=1.250871e-05 mcc= 0.000000,
2020-01-19 14:28:46 Ep 6/10 Gstep 6550 Step 135: Loss= 0.620651 LR=1.235944e-05 mcc= 0.000000,
2020-01-19 14:29:20 Ep 6/10 Gstep 6600 Step 185: Loss= 0.620827 LR=1.221017e-05 mcc= 0.000000,
2020-01-19 14:29:55 Ep 6/10 Gstep 6650 Step 235: Loss= 0.600443 LR=1.206090e-05 mcc= 0.000000,
2020-01-19 14:30:29 Ep 6/10 Gstep 6700 Step 285: Loss= 0.617364 LR=1.191163e-05 mcc= 0.000000,
2020-01-19 14:31:04 Ep 6/10 Gstep 6750 Step 335: Loss= 0.640479 LR=1.176236e-05 mcc= 0.000000,
2020-01-19 14:31:38 Ep 6/10 Gstep 6800 Step 385: Loss= 0.636902 LR=1.161310e-05 mcc= 0.000000,
2020-01-19 14:32:12 Ep 6/10 Gstep 6850 Step 435: Loss= 0.596365 LR=1.146383e-05 mcc= 0.000000,
2020-01-19 14:32:47 Ep 6/10 Gstep 6900 Step 485: Loss= 0.608958 LR=1.131456e-05 mcc= 0.000000,
2020-01-19 14:33:21 Ep 6/10 Gstep 6950 Step 535: Loss= 0.619341 LR=1.116529e-05 mcc= 0.000000,
2020-01-19 14:33:56 Ep 6/10 Gstep 7000 Step 585: Loss= 0.626361 LR=1.101602e-05 mcc= 0.000000,
2020-01-19 14:34:30 Ep 6/10 Gstep 7050 Step 635: Loss= 0.613354 LR=1.086675e-05 mcc= 0.000000,
2020-01-19 14:35:05 Ep 6/10 Gstep 7100 Step 685: Loss= 0.612054 LR=1.071748e-05 mcc= 0.000000,
2020-01-19 14:35:39 Ep 6/10 Gstep 7150 Step 735: Loss= 0.613521 LR=1.056822e-05 mcc= 0.000000,
2020-01-19 14:36:13 Ep 6/10 Gstep 7200 Step 785: Loss= 0.577611 LR=1.041895e-05 mcc= 0.000000,
2020-01-19 14:36:48 Ep 6/10 Gstep 7250 Step 835: Loss= 0.605257 LR=1.026968e-05 mcc= 0.000000,
2020-01-19 14:37:22 Ep 6/10 Gstep 7300 Step 885: Loss= 0.611978 LR=1.012041e-05 mcc= 0.000000,
2020-01-19 14:37:57 Ep 6/10 Gstep 7350 Step 935: Loss= 0.595855 LR=9.971141e-06 mcc= 0.000000,
2020-01-19 14:38:31 Ep 6/10 Gstep 7400 Step 985: Loss= 0.585623 LR=9.821873e-06 mcc= 0.000000,
2020-01-19 14:39:05 Ep 6/10 Gstep 7450 Step 1035: Loss= 0.590831 LR=9.672604e-06 mcc= 0.000000,
2020-01-19 14:39:40 Ep 7/10 Gstep 7500 Step 16: Loss= 0.621975 LR=9.523336e-06 mcc= 0.000000,
2020-01-19 14:40:14 Ep 7/10 Gstep 7550 Step 66: Loss= 0.602145 LR=9.374067e-06 mcc= 0.000000,
2020-01-19 14:40:49 Ep 7/10 Gstep 7600 Step 116: Loss= 0.620748 LR=9.224798e-06 mcc= 0.000000,
2020-01-19 14:41:23 Ep 7/10 Gstep 7650 Step 166: Loss= 0.602158 LR=9.075530e-06 mcc= 0.000000,
2020-01-19 14:41:58 Ep 7/10 Gstep 7700 Step 216: Loss= 0.573956 LR=8.926261e-06 mcc= 0.000000,
2020-01-19 14:42:32 Ep 7/10 Gstep 7750 Step 266: Loss= 0.585606 LR=8.776993e-06 mcc= 0.000000,
2020-01-19 14:43:06 Ep 7/10 Gstep 7800 Step 316: Loss= 0.585316 LR=8.627724e-06 mcc= 0.000000,
2020-01-19 14:43:41 Ep 7/10 Gstep 7850 Step 366: Loss= 0.643286 LR=8.478456e-06 mcc= 0.000000,
2020-01-19 14:44:15 Ep 7/10 Gstep 7900 Step 416: Loss= 0.607292 LR=8.329187e-06 mcc= 0.000000,
2020-01-19 14:44:50 Ep 7/10 Gstep 7950 Step 466: Loss= 0.599829 LR=8.179918e-06 mcc= 0.000000,
2020-01-19 14:45:24 Ep 7/10 Gstep 8000 Step 516: Loss= 0.651290 LR=8.030650e-06 mcc= 0.000000,
2020-01-19 14:45:58 Ep 7/10 Gstep 8050 Step 566: Loss= 0.631367 LR=7.881381e-06 mcc= 0.000000,
2020-01-19 14:46:33 Ep 7/10 Gstep 8100 Step 616: Loss= 0.602171 LR=7.732113e-06 mcc= 0.000000,
2020-01-19 14:47:07 Ep 7/10 Gstep 8150 Step 666: Loss= 0.649055 LR=7.582844e-06 mcc= 0.000000,
2020-01-19 14:47:42 Ep 7/10 Gstep 8200 Step 716: Loss= 0.588507 LR=7.433575e-06 mcc= 0.000000,
2020-01-19 14:48:16 Ep 7/10 Gstep 8250 Step 766: Loss= 0.649030 LR=7.284307e-06 mcc= 0.000000,
2020-01-19 14:48:50 Ep 7/10 Gstep 8300 Step 816: Loss= 0.622789 LR=7.135038e-06 mcc= 0.000000,
2020-01-19 14:49:25 Ep 7/10 Gstep 8350 Step 866: Loss= 0.586203 LR=6.985770e-06 mcc= 0.000000,
2020-01-19 14:49:59 Ep 7/10 Gstep 8400 Step 916: Loss= 0.597735 LR=6.836501e-06 mcc= 0.000000,
2020-01-19 14:50:34 Ep 7/10 Gstep 8450 Step 966: Loss= 0.636263 LR=6.687233e-06 mcc= 0.000000,
2020-01-19 14:51:08 Ep 7/10 Gstep 8500 Step 1016: Loss= 0.612294 LR=6.537964e-06 mcc= 0.000000,
2020-01-19 14:51:42 Ep 7/10 Gstep 8550 Step 1066: Loss= 0.576644 LR=6.388695e-06 mcc= 0.000000,
2020-01-19 14:52:17 Ep 8/10 Gstep 8600 Step 47: Loss= 0.627220 LR=6.239427e-06 mcc= 0.000000,
2020-01-19 14:52:51 Ep 8/10 Gstep 8650 Step 97: Loss= 0.613375 LR=6.090158e-06 mcc= 0.000000,
2020-01-19 14:53:26 Ep 8/10 Gstep 8700 Step 147: Loss= 0.555401 LR=5.940890e-06 mcc= 0.000000,
2020-01-19 14:54:00 Ep 8/10 Gstep 8750 Step 197: Loss= 0.593526 LR=5.791621e-06 mcc= 0.000000,
2020-01-19 14:54:34 Ep 8/10 Gstep 8800 Step 247: Loss= 0.659518 LR=5.642352e-06 mcc= 0.000000,
2020-01-19 14:55:09 Ep 8/10 Gstep 8850 Step 297: Loss= 0.626100 LR=5.493084e-06 mcc= 0.000000,
2020-01-19 14:55:43 Ep 8/10 Gstep 8900 Step 347: Loss= 0.634845 LR=5.343815e-06 mcc= 0.000000,
2020-01-19 14:56:18 Ep 8/10 Gstep 8950 Step 397: Loss= 0.608417 LR=5.194547e-06 mcc= 0.000000,
2020-01-19 14:56:52 Ep 8/10 Gstep 9000 Step 447: Loss= 0.631753 LR=5.045278e-06 mcc= 0.000000,
2020-01-19 14:57:27 Ep 8/10 Gstep 9050 Step 497: Loss= 0.596399 LR=4.896010e-06 mcc= 0.000000,
2020-01-19 14:58:01 Ep 8/10 Gstep 9100 Step 547: Loss= 0.612060 LR=4.746741e-06 mcc= 0.000000,
2020-01-19 14:58:35 Ep 8/10 Gstep 9150 Step 597: Loss= 0.612795 LR=4.597472e-06 mcc= 0.000000,
2020-01-19 14:59:10 Ep 8/10 Gstep 9200 Step 647: Loss= 0.615351 LR=4.448204e-06 mcc= 0.000000,
2020-01-19 14:59:44 Ep 8/10 Gstep 9250 Step 697: Loss= 0.643801 LR=4.298935e-06 mcc= 0.000000,
2020-01-19 15:00:19 Ep 8/10 Gstep 9300 Step 747: Loss= 0.598972 LR=4.149667e-06 mcc= 0.000000,
2020-01-19 15:00:53 Ep 8/10 Gstep 9350 Step 797: Loss= 0.624094 LR=4.000398e-06 mcc= 0.000000,
2020-01-19 15:01:28 Ep 8/10 Gstep 9400 Step 847: Loss= 0.628975 LR=3.851129e-06 mcc= 0.000000,
2020-01-19 15:02:02 Ep 8/10 Gstep 9450 Step 897: Loss= 0.589227 LR=3.701861e-06 mcc= 0.000000,
2020-01-19 15:02:36 Ep 8/10 Gstep 9500 Step 947: Loss= 0.609399 LR=3.552592e-06 mcc= 0.000000,
2020-01-19 15:03:11 Ep 8/10 Gstep 9550 Step 997: Loss= 0.599416 LR=3.403324e-06 mcc= 0.000000,
2020-01-19 15:03:45 Ep 8/10 Gstep 9600 Step 1047: Loss= 0.583097 LR=3.254055e-06 mcc= 0.000000,
2020-01-19 15:04:20 Ep 9/10 Gstep 9650 Step 28: Loss= 0.609004 LR=3.104787e-06 mcc= 0.000000,
2020-01-19 15:04:54 Ep 9/10 Gstep 9700 Step 78: Loss= 0.599495 LR=2.955518e-06 mcc= 0.000000,
2020-01-19 15:05:28 Ep 9/10 Gstep 9750 Step 128: Loss= 0.627960 LR=2.806249e-06 mcc= 0.000000,
2020-01-19 15:06:03 Ep 9/10 Gstep 9800 Step 178: Loss= 0.641110 LR=2.656981e-06 mcc= 0.000000,
2020-01-19 15:06:37 Ep 9/10 Gstep 9850 Step 228: Loss= 0.594384 LR=2.507712e-06 mcc= 0.000000,
2020-01-19 15:07:12 Ep 9/10 Gstep 9900 Step 278: Loss= 0.639790 LR=2.358444e-06 mcc= 0.000000,
2020-01-19 15:07:46 Ep 9/10 Gstep 9950 Step 328: Loss= 0.606976 LR=2.209175e-06 mcc= 0.000000,
2020-01-19 15:08:21 Ep 9/10 Gstep 10000 Step 378: Loss= 0.591287 LR=2.059906e-06 mcc= 0.000000,
2020-01-19 15:08:55 Ep 9/10 Gstep 10050 Step 428: Loss= 0.591274 LR=1.910638e-06 mcc= 0.000000,
2020-01-19 15:09:29 Ep 9/10 Gstep 10100 Step 478: Loss= 0.611863 LR=1.761369e-06 mcc= 0.000000,
2020-01-19 15:10:04 Ep 9/10 Gstep 10150 Step 528: Loss= 0.623777 LR=1.612101e-06 mcc= 0.000000,
2020-01-19 15:10:38 Ep 9/10 Gstep 10200 Step 578: Loss= 0.631314 LR=1.462832e-06 mcc= 0.000000,
2020-01-19 15:11:13 Ep 9/10 Gstep 10250 Step 628: Loss= 0.641188 LR=1.313564e-06 mcc= 0.000000,
2020-01-19 15:11:47 Ep 9/10 Gstep 10300 Step 678: Loss= 0.634291 LR=1.164295e-06 mcc= 0.000000,
2020-01-19 15:12:22 Ep 9/10 Gstep 10350 Step 728: Loss= 0.609055 LR=1.015026e-06 mcc= 0.000000,
2020-01-19 15:12:56 Ep 9/10 Gstep 10400 Step 778: Loss= 0.609741 LR=8.657578e-07 mcc= 0.000000,
2020-01-19 15:13:30 Ep 9/10 Gstep 10450 Step 828: Loss= 0.617953 LR=7.164892e-07 mcc= 0.000000,
2020-01-19 15:14:05 Ep 9/10 Gstep 10500 Step 878: Loss= 0.628554 LR=5.672206e-07 mcc= 0.000000,
2020-01-19 15:14:39 Ep 9/10 Gstep 10550 Step 928: Loss= 0.567698 LR=4.179520e-07 mcc= 0.000000,
2020-01-19 15:15:14 Ep 9/10 Gstep 10600 Step 978: Loss= 0.603719 LR=2.686835e-07 mcc= 0.000000,
2020-01-19 15:15:48 Ep 9/10 Gstep 10650 Step 1028: Loss= 0.587463 LR=1.194149e-07 mcc= 0.000000,
```
* Finally, here is what the results look like over a lot of hyperparameter choices, for roberta-large. See the 3rd column from the right, 'last_mcc', and notice that several (but not all) finish with mcc = 0.
* The command for run_glue.py I included at the top corresponds with the last line in this table. Note that mcc finishes = 0 for all seeds for LR=3e-5, but changing to LR=2e-5, and things are better. (But: see line 7)
```
# bs LR seed best_mcc warmup best_mcc_step last_mcc last_mcc_step last_loss
0 12.0 0.00001 1.0 0.681680 427.0 3600.0 0.623916 7100.0 0.066959
1 12.0 0.00001 2.0 0.678343 427.0 1800.0 0.643239 7100.0 0.028075
2 12.0 0.00001 3.0 0.683793 427.0 3550.0 0.648869 7100.0 0.069469
3 12.0 0.00002 1.0 0.680998 427.0 5450.0 0.635618 7100.0 0.021568
4 12.0 0.00002 2.0 0.651091 427.0 4350.0 0.631009 7100.0 0.042258
5 12.0 0.00002 3.0 0.661668 427.0 4500.0 0.645933 7100.0 0.091654
6 12.0 0.00003 1.0 0.650776 427.0 5150.0 0.598263 7100.0 0.056770
7 12.0 0.00003 2.0 0.403664 427.0 300.0 0.000000 7100.0 0.641559
8 12.0 0.00003 3.0 0.637731 427.0 1800.0 0.597579 7100.0 0.078028
9 8.0 0.00001 1.0 0.667611 641.0 7400.0 0.640759 10650.0 0.046092
10 8.0 0.00001 2.0 0.685714 641.0 4050.0 0.639093 10650.0 0.035902
11 8.0 0.00001 3.0 0.691234 641.0 2650.0 0.625711 10650.0 0.016033
12 8.0 0.00002 1.0 0.670903 641.0 6100.0 0.655837 10650.0 0.071015
13 8.0 0.00002 2.0 0.660777 641.0 8700.0 0.623216 10650.0 0.183001
14 8.0 0.00002 3.0 0.633514 641.0 8450.0 0.585573 10650.0 0.061994
15 8.0 0.00003 1.0 0.521248 641.0 500.0 0.000000 10650.0 0.622017
16 8.0 0.00003 2.0 0.494827 641.0 600.0 0.000000 10650.0 0.614100
17 8.0 0.00003 3.0 0.480823 641.0 550.0 0.000000 10650.0 0.587463
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2590/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2590/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2589 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2589/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2589/comments | https://api.github.com/repos/huggingface/transformers/issues/2589/events | https://github.com/huggingface/transformers/issues/2589 | 551,969,213 | MDU6SXNzdWU1NTE5NjkyMTM= | 2,589 | run_lm_finetuning.py regenerates examples cache when restored from a checkpoint, is this intended? | {
"login": "mike-athene",
"id": 2640536,
"node_id": "MDQ6VXNlcjI2NDA1MzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2640536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mike-athene",
"html_url": "https://github.com/mike-athene",
"followers_url": "https://api.github.com/users/mike-athene/followers",
"following_url": "https://api.github.com/users/mike-athene/following{/other_user}",
"gists_url": "https://api.github.com/users/mike-athene/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mike-athene/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mike-athene/subscriptions",
"organizations_url": "https://api.github.com/users/mike-athene/orgs",
"repos_url": "https://api.github.com/users/mike-athene/repos",
"events_url": "https://api.github.com/users/mike-athene/events{/privacy}",
"received_events_url": "https://api.github.com/users/mike-athene/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also use --save_total_limit=10 which triggered another exception in checkpoint deletion code as it tried to delete the cache file like it was a folder\r\n\r\n```\r\nDeleting older checkpoint [./output\\checkpoint-200_cached_lm_512_dataset.txt] due to args.save_total_limit\r\nTraceback (most recent call last):\r\n File \".\\transformers\\examples\\run_lm_finetuning.py\", line 721, in <module>\r\n main()\r\n File \".\\transformers\\examples\\run_lm_finetuning.py\", line 671, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \".\\transformers\\examples\\run_lm_finetuning.py\", line 360, in train\r\n _rotate_checkpoints(args, checkpoint_prefix)\r\n File \".\\transformers\\examples\\run_lm_finetuning.py\", line 169, in _rotate_checkpoints\r\n shutil.rmtree(checkpoint)\r\n File \"...\\lib\\shutil.py\", line 500, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"...\\lib\\shutil.py\", line 382, in _rmtree_unsafe\r\n onerror(os.listdir, path, sys.exc_info())\r\n File \"...\\lib\\shutil.py\", line 380, in _rmtree_unsafe\r\n names = os.listdir(path)\r\nNotADirectoryError: [WinError 267] The directory name is invalid: './output\\\\checkpoint-200_cached_lm_512_dataset.txt'\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
Hello,
I am finetuning a gpt2-medium model on a large (800mb+) input via run_lm_finetuning.py on a windown/conda env with recent git checkout of transformers and apex installed (--per_gpu_train_batch_size=1 --fp16 --fp16_opt_level O2 --gradient_accumulation_steps=10 --block_size=512).
On initiall run a cache file is created next to the input file using the model type as filename prefix, i.e. gpt2-medium_cached_lm_512_dataset.txt
This WAI and if I restart the finetuning process the cache is picked up, but when I try to resume by restoring a checkpoint (--model_name_or_path=FULL_PATH\output\checkpoint-200\) the cache is regenerated in the /output folder under a name of checkpoint-200_cached_lm_512_dataset.txt.
(also providing a relative path to model_name_or_path does not seem to work[1])
- Tracing the cache generation code it looks like it depends on the tokenizer which is restored from the checkpoint, but I am not sure if any state from the checkpoint is actually affecting the cache generation. Both files have the same checksum, so it does seem like unnecessary work?
- If not, Is it possible to store current version of the cache alongside other checkpoint data for faster resume?
- Am I restoring the model incorrectly? Even after cache regeneration the models OOMs, but starting from scratch I can finetune way past the checkpoint state. [2]
Thank you!
[1]
```
Saving features into cached file C:\data\.\output\checkpoint-200\_cached_lm_512_dataset.txt
Traceback (most recent call last):
File ".\transformers\examples\run_lm_finetuning.py", line 721, in <module>
main()
File ".\transformers\examples\run_lm_finetuning.py", line 666, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)
File ".\transformers\examples\run_lm_finetuning.py", line 130, in load_and_cache_examples
block_size=args.block_size,
File ".\transformers\examples\run_lm_finetuning.py", line 115, in __init__
with open(cached_features_file, "wb") as handle:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\data\\.\\output\\checkpoint-200\\_cached_lm_512_dataset.txt'
```
[2]
```
File ".\transformers\examples\run_lm_finetuning.py", line 721, in <module>
main()
File ".\transformers\examples\run_lm_finetuning.py", line 671, in main
global_step, tr_loss = train(args, train_dataset, model, tokenizer)
File ".\transformers\examples\run_lm_finetuning.py", line 316, in train
with amp.scale_loss(loss, optimizer) as scaled_loss:
File "...\lib\contextlib.py", line 81, in __enter__
return next(self.gen)
File "...\apex\amp\handle.py", line 111, in scale_loss
optimizer._prepare_amp_backward()
File "...\apex\amp\_process_optimizer.py", line 137, in prepare_backward_with_master_weights
self._amp_lazy_init()
File "...\apex\amp\_process_optimizer.py", line 309, in _amp_lazy_init
self._lazy_init_maybe_master_weights()
File "...\apex\amp\_process_optimizer.py", line 90, in lazy_init_with_master_weights
self.load_state_dict(self.state_dict())
File "...\torch\optim\optimizer.py", line 147, in load_state_dict
state[param] = cast(param, v)
File "...\torch\optim\optimizer.py", line 134, in cast
return {k: cast(param, v) for k, v in value.items()}
File "...\torch\optim\optimizer.py", line 134, in <dictcomp>
return {k: cast(param, v) for k, v in value.items()}
File "...\torch\optim\optimizer.py", line 130, in cast
value = value.to(param.dtype)
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 11.00 GiB total capacity; 8.61 GiB already allocated; 10.74 MiB free; 8.69 GiB reserved in total by PyTorch)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2589/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2589/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2588 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2588/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2588/comments | https://api.github.com/repos/huggingface/transformers/issues/2588/events | https://github.com/huggingface/transformers/issues/2588 | 551,929,414 | MDU6SXNzdWU1NTE5Mjk0MTQ= | 2,588 | how can i download the model manually? | {
"login": "shange1996",
"id": 49185852,
"node_id": "MDQ6VXNlcjQ5MTg1ODUy",
"avatar_url": "https://avatars.githubusercontent.com/u/49185852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shange1996",
"html_url": "https://github.com/shange1996",
"followers_url": "https://api.github.com/users/shange1996/followers",
"following_url": "https://api.github.com/users/shange1996/following{/other_user}",
"gists_url": "https://api.github.com/users/shange1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shange1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shange1996/subscriptions",
"organizations_url": "https://api.github.com/users/shange1996/orgs",
"repos_url": "https://api.github.com/users/shange1996/repos",
"events_url": "https://api.github.com/users/shange1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/shange1996/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"- xlnet-base-cased : https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-config.json\r\n- xlnet-large-cased : https://s3.amazonaws.com/models.huggingface.co/bert/xlnet-large-cased-config.json\r\n\r\nI got it from [here](https://huggingface.co/transformers/_modules/transformers/configuration_xlnet.html#XLNetConfig) \r\nGo to the source of that model in hugging face repo and there you can find the links set in config.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I used [this link ](https://huggingface.co/jplu/tf-xlm-roberta-large#tensorflow-xlm-roberta)to download the tf-xlm-roberta-base. But such information is not available for every model.\r\n\r\nAs @nauman-chaudhary indicated some are available [here](https://huggingface.co/transformers/_modules/transformers/configuration_xlnet.html#XLNetConfig).."
] | 1,579 | 1,592 | 1,585 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
I want to download the model manually because of my network. But now I can only find the download address of bert. Where is the address of all models? Such as XLNETγ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2588/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2588/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2587 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2587/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2587/comments | https://api.github.com/repos/huggingface/transformers/issues/2587/events | https://github.com/huggingface/transformers/issues/2587 | 551,915,182 | MDU6SXNzdWU1NTE5MTUxODI= | 2,587 | The accuracy of XLNet | {
"login": "chlorane",
"id": 39242468,
"node_id": "MDQ6VXNlcjM5MjQyNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/39242468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chlorane",
"html_url": "https://github.com/chlorane",
"followers_url": "https://api.github.com/users/chlorane/followers",
"following_url": "https://api.github.com/users/chlorane/following{/other_user}",
"gists_url": "https://api.github.com/users/chlorane/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chlorane/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chlorane/subscriptions",
"organizations_url": "https://api.github.com/users/chlorane/orgs",
"repos_url": "https://api.github.com/users/chlorane/repos",
"events_url": "https://api.github.com/users/chlorane/events{/privacy}",
"received_events_url": "https://api.github.com/users/chlorane/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are you sure that you are using the right models and not just `BertModel`? You also have to change the tokenizer completely.\r\n\r\nInstead of something like\r\n\r\n```python\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n```\r\n\r\nYou should use something like this:\r\n\r\n```python\r\nmodel = XLNetModel.from_pretrained('xlnet-base-cased')\r\ntokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')\r\n```\r\n\r\nFor tokenisation, I recommend that you use the `encode` method so that special tokens are added correctly and automatically.\r\n\r\n```python\r\ninput_ids = tokenizer.encode(text, return_tensors='pt')\r\n```\r\n\r\nIt is normal that not all models behave the same way. You can't just use the same hyperparameters and expect the same or bettter results. Try longer finetuning, other learning rate, stuff like that.\r\n\r\nFinally, you may wish to update your PyTorch version if possible. We're already at 1.4.\r\n\r\n",
"> \r\n> \r\n> Are you sure that you are using the right models and not just `BertModel`? You also have to change the tokenizer completely.\r\n> \r\n> Instead of something like\r\n> \r\n> ```python\r\n> model = BertModel.from_pretrained('bert-base-uncased')\r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> ```\r\n> \r\n> You should use something like this:\r\n> \r\n> ```python\r\n> model = XLNetModel.from_pretrained('xlnet-base-cased')\r\n> tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')\r\n> ```\r\n> \r\n> For tokenisation, I recommend that you use the `encode` method so that special tokens are added correctly and automatically.\r\n> \r\n> ```python\r\n> input_ids = tokenizer.encode(text, return_tensors='pt')\r\n> ```\r\n> \r\n> It is normal that not all models behave the same way. You can't just use the same hyperparameters and expect the same or bettter results. Try longer finetuning, other learning rate, stuff like that.\r\n> \r\n> Finally, you may wish to update your PyTorch version if possible. We're already at 1.4.\r\n\r\nYes, I have made these modifications, but the acc is still low. After training for an epoch, it is 28% still.\r\nI'm using:\r\n```\r\n MODEL_CLASSES = {\"bert\": (BertConfig, BertForMultipleChoice, BertTokenizer),\"xlnet\": (XLNetConfig, XLNetForMultipleChoice, XLNetTokenizer),\"roberta\": (RobertaConfig, RobertaForMultipleChoice, RobertaTokenizer)}\r\n ALL_MODELS = sum((tuple(conf.pretrained_config_archive_map.keys()) for conf in (BertConfig, XLNetConfig, RobertaConfig)), ())\r\n config_class, model_class, tokenizer_class = MODEL_CLASSES[\"xlnet\"]\r\n transformer_tokenizer = tokenizer_class.from_pretrained(\"xlnet-base-cased\", do_lower_case=True, cache_dir=None)\r\n```\r\n\r\nand\r\n\r\n```\r\n text_a=context_tokens_choice\r\n text_b=ending_tokens\r\n inputs=transformer_tokenizer.encode_plus(text_a, text_b, add_special_tokens=True, max_length=max_seq_length)\r\n input_ids, token_type_ids = inputs[\"input_ids\"], inputs[\"token_type_ids\"]\r\n attention_mask = [1 if mask_padding_with_zero else 0] * len(input_ids)\r\n padding_length = max_seq_length - len(input_ids)\r\n if pad_on_left:\r\n input_ids = ([pad_token] * padding_length) + input_ids\r\n segment_ids = ([0 if mask_padding_with_zero else 1] * padding_length) + attention_mask\r\n input_mask = ([pad_token_segment_id] * padding_length) + token_type_ids\r\n else:\r\n input_ids = input_ids + ([pad_token] * padding_length)\r\n segment_ids = attention_mask + ([0 if mask_padding_with_zero else 1] * padding_length)\r\n input_mask = token_type_ids + ([pad_token_segment_id] * padding_length) \r\n``` \r\n\r\nand\r\n\r\n` self.sub_model=model_class.from_pretrained(\"xlnet_base-cased\",from_tf=bool(\".ckpt\" in \"xlnet-base-cased\"),config=config,cache_dir=None)`\r\n\r\n`out=self.sub_model(all_input_ids,all_segment_ids,all_input_mask)`\r\n\r\nI thought it is because segment_ids and input_mask are in reversed order, but after switching them, the acc is still low.\r\n\r\nI tried RoBERTa, the acc tends to be normal.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, @chlorane ,I got the same problem,and did you solve it???"
] | 1,579 | 1,589 | 1,585 | NONE | null | ## π Migration
<!-- Important information -->
Model I am using (Bert, XLNet....):
XLNet
Language I am using the model on (English, Chinese....):
English
The problem arise when using:
[ ] the official example scripts: (give details)
[ * ] my own modified scripts: (give details)
I use my own scripts under this library
The tasks I am working on is:
[ ] an official GLUE/SQUaD task: (give the name)
[ * ] my own task or dataset: (give details)
TVQA dataset, for question answering task
Details of the issue:
I was using pytorch-pretrained-BERT library on my scripts. Last week, I switched to the "transformer" library. After that, I tried to use BERT with "bert-base-cased" (do_lower_case=true) model, the valid acc is around 64% after 800 it (batch size=8). However, when I modify the model type to "xlnet" and the model with "xlnet-base-cased" (I have set left padding as true and pad_token_segment_id=4. The other parts are kept the same as using BERT), the valid acc is only about 28% after 800 it (batch size=8). When more iterations are made, the valid acc will drop to about 22%. I think this is quite strange.
## Environment
* OS: Windows 10
* Python version: 3.6
* PyTorch version: 1.0
* PyTorch Transformers version (or branch): The newest? (Downloaded last Monday)
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information:
## Checklist
- [ * ] I have read the migration guide in the readme.
- [ * ] I checked if a related official extension example runs on my machine.
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2587/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2587/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2586 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2586/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2586/comments | https://api.github.com/repos/huggingface/transformers/issues/2586/events | https://github.com/huggingface/transformers/issues/2586 | 551,896,193 | MDU6SXNzdWU1NTE4OTYxOTM= | 2,586 | PyTorch 1.2 has released API 'torch.nn.Transformer'οΌso it's better to modify the source code with the official python API | {
"login": "daydayfun",
"id": 39835967,
"node_id": "MDQ6VXNlcjM5ODM1OTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/39835967?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daydayfun",
"html_url": "https://github.com/daydayfun",
"followers_url": "https://api.github.com/users/daydayfun/followers",
"following_url": "https://api.github.com/users/daydayfun/following{/other_user}",
"gists_url": "https://api.github.com/users/daydayfun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daydayfun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daydayfun/subscriptions",
"organizations_url": "https://api.github.com/users/daydayfun/orgs",
"repos_url": "https://api.github.com/users/daydayfun/repos",
"events_url": "https://api.github.com/users/daydayfun/events{/privacy}",
"received_events_url": "https://api.github.com/users/daydayfun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This has been suggested a while back when this was first introduced (we're at 1.4 now). This is possibly impractical to do since it is likely that many people are still on 1.0<=x<1.2. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## π Feature
<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->
## Motivation
It's better to Modify modeling_bert.py with official API 'torch.nn.Transformer' of PyTorch 1.2
## Additional context
https://pytorch.org/docs/stable/nn.html?highlight=transformer#torch.nn.Transformer
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2586/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2586/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2585 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2585/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2585/comments | https://api.github.com/repos/huggingface/transformers/issues/2585/events | https://github.com/huggingface/transformers/issues/2585 | 551,869,598 | MDU6SXNzdWU1NTE4Njk1OTg= | 2,585 | Attibute ErrorοΌβNoneTypeβ object has no attribute 'seek' and OSError | {
"login": "MrChenJianhui",
"id": 45336867,
"node_id": "MDQ6VXNlcjQ1MzM2ODY3",
"avatar_url": "https://avatars.githubusercontent.com/u/45336867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MrChenJianhui",
"html_url": "https://github.com/MrChenJianhui",
"followers_url": "https://api.github.com/users/MrChenJianhui/followers",
"following_url": "https://api.github.com/users/MrChenJianhui/following{/other_user}",
"gists_url": "https://api.github.com/users/MrChenJianhui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MrChenJianhui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MrChenJianhui/subscriptions",
"organizations_url": "https://api.github.com/users/MrChenJianhui/orgs",
"repos_url": "https://api.github.com/users/MrChenJianhui/repos",
"events_url": "https://api.github.com/users/MrChenJianhui/events{/privacy}",
"received_events_url": "https://api.github.com/users/MrChenJianhui/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"python version is 3.7.3",
"Hi, you would need to provide more information than that for us to help you. What code made you run into this error? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | My pytorch version is 1.4.0+cpu and tensorflow version is 2.0.0-dev20191002.
/torch/serialization.py,line 289,in_check_seekable
βNoneTypeβ object has no attribute 'seek'
You can only torch.load from a file that is seekable.Please pre_load the data into a buffer like io.BytesIO and try to load from it instead.
But how should I do to sovle the quetions?
Another question.
OSError:Unable to load weights from pytorch checkpoint file.If you tried to load a PyTorch model from a TF 2.0 checkpoint,please set from_tf=true. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2585/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2585/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2584 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2584/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2584/comments | https://api.github.com/repos/huggingface/transformers/issues/2584/events | https://github.com/huggingface/transformers/issues/2584 | 551,860,131 | MDU6SXNzdWU1NTE4NjAxMzE= | 2,584 | what's the structure of the model saved after fine-tuning ? | {
"login": "JiangYanting",
"id": 44471391,
"node_id": "MDQ6VXNlcjQ0NDcxMzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44471391?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JiangYanting",
"html_url": "https://github.com/JiangYanting",
"followers_url": "https://api.github.com/users/JiangYanting/followers",
"following_url": "https://api.github.com/users/JiangYanting/following{/other_user}",
"gists_url": "https://api.github.com/users/JiangYanting/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JiangYanting/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JiangYanting/subscriptions",
"organizations_url": "https://api.github.com/users/JiangYanting/orgs",
"repos_url": "https://api.github.com/users/JiangYanting/repos",
"events_url": "https://api.github.com/users/JiangYanting/events{/privacy}",
"received_events_url": "https://api.github.com/users/JiangYanting/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm afraid I don't understand your question. The pretrained model is an architecture whose weights have already been trained on some task (typically (M)LM and NSP/SOP). When you finetune the model, the architecture stays exactly the same but the weights are finetuned to best fit your task.",
"@BramVanroy I tried βprint(model)βοΌit showed the information of every layer. Thank you so much! "
] | 1,579 | 1,579 | 1,579 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
Hello ! I'm wondering what is the structure of the model saved after fine-tuning. For example, after the sequence classification fine-tuning , how to show the layers information of newly-formed model ? Is new model's sentence vector different from the original one which is extracted from pre-trained model ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2584/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2584/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2583 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2583/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2583/comments | https://api.github.com/repos/huggingface/transformers/issues/2583/events | https://github.com/huggingface/transformers/issues/2583 | 551,849,821 | MDU6SXNzdWU1NTE4NDk4MjE= | 2,583 | How to start a server and client to get feature vectors | {
"login": "duan348733684",
"id": 26431015,
"node_id": "MDQ6VXNlcjI2NDMxMDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/26431015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duan348733684",
"html_url": "https://github.com/duan348733684",
"followers_url": "https://api.github.com/users/duan348733684/followers",
"following_url": "https://api.github.com/users/duan348733684/following{/other_user}",
"gists_url": "https://api.github.com/users/duan348733684/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duan348733684/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duan348733684/subscriptions",
"organizations_url": "https://api.github.com/users/duan348733684/orgs",
"repos_url": "https://api.github.com/users/duan348733684/repos",
"events_url": "https://api.github.com/users/duan348733684/events{/privacy}",
"received_events_url": "https://api.github.com/users/duan348733684/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
How to start a server and client to get feature vectorsοΌor Which part of the code should I study in https://github.com/huggingface/transformers.git.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2583/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2583/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2582 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2582/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2582/comments | https://api.github.com/repos/huggingface/transformers/issues/2582/events | https://github.com/huggingface/transformers/issues/2582 | 551,832,180 | MDU6SXNzdWU1NTE4MzIxODA= | 2,582 | XLM-Roberta checkpoint redundant weight | {
"login": "NProkoptsev",
"id": 13507236,
"node_id": "MDQ6VXNlcjEzNTA3MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/13507236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NProkoptsev",
"html_url": "https://github.com/NProkoptsev",
"followers_url": "https://api.github.com/users/NProkoptsev/followers",
"following_url": "https://api.github.com/users/NProkoptsev/following{/other_user}",
"gists_url": "https://api.github.com/users/NProkoptsev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NProkoptsev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NProkoptsev/subscriptions",
"organizations_url": "https://api.github.com/users/NProkoptsev/orgs",
"repos_url": "https://api.github.com/users/NProkoptsev/repos",
"events_url": "https://api.github.com/users/NProkoptsev/events{/privacy}",
"received_events_url": "https://api.github.com/users/NProkoptsev/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,579 | 1,579 | NONE | null | deleted | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2582/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2582/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2581 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2581/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2581/comments | https://api.github.com/repos/huggingface/transformers/issues/2581/events | https://github.com/huggingface/transformers/issues/2581 | 551,823,128 | MDU6SXNzdWU1NTE4MjMxMjg= | 2,581 | Invalid argument: assertion failed: [Condition x == y did not hold element-wise:] [x (loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/Shape_1:0) = ] [32 1] [y (loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/strided_slice:0) = ] [32 128] | {
"login": "noambaron1989",
"id": 40001659,
"node_id": "MDQ6VXNlcjQwMDAxNjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/40001659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noambaron1989",
"html_url": "https://github.com/noambaron1989",
"followers_url": "https://api.github.com/users/noambaron1989/followers",
"following_url": "https://api.github.com/users/noambaron1989/following{/other_user}",
"gists_url": "https://api.github.com/users/noambaron1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noambaron1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noambaron1989/subscriptions",
"organizations_url": "https://api.github.com/users/noambaron1989/orgs",
"repos_url": "https://api.github.com/users/noambaron1989/repos",
"events_url": "https://api.github.com/users/noambaron1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/noambaron1989/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, GLUE is a sequence classification task, not a token classification task. The model you're using classifies tokens instead of entires sequences, and therefore has a different output than what is expected by the GLUE task.\r\n\r\nChange this line:\r\n\r\n```py\r\nmodel = TFBertForTokenClassification.from_pretrained('bert-base-uncased')\r\n```\r\n\r\nto this:\r\n\r\n```py\r\nmodel = TFBertForSequenceClassification.from_pretrained('bert-base-uncased')\r\n```\r\n\r\nfor it to work.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | I'm trying to run TFBertForTokenClassification with tensorflow_datasets.load('glue/sst2'):
```py
import tensorflow as tf
import tensorflow_datasets
from transformers import *
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertForTokenClassification.from_pretrained('bert-base-uncased')
data = tensorflow_datasets.load('glue/sst2')
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='sst-2')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='sst-2')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model.compile(optimizer=optimizer, loss=loss, metrics=[metric])
history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
validation_data=valid_dataset, validation_steps=7)
```
while model.fit I get this error:
```
Train for 115 steps, validate for 7 steps
Epoch 1/2
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_token_classification_1/bert/pooler/dense/kernel:0', 'tf_bert_for_token_classification_1/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_token_classification_1/bert/pooler/dense/kernel:0', 'tf_bert_for_token_classification_1/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_token_classification_1/bert/pooler/dense/kernel:0', 'tf_bert_for_token_classification_1/bert/pooler/dense/bias:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['tf_bert_for_token_classification_1/bert/pooler/dense/kernel:0', 'tf_bert_for_token_classification_1/bert/pooler/dense/bias:0'] when minimizing the loss.
1/115 [..............................] - ETA: 25:00
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-15-f52b3b390355> in <module>()
1 history = model.fit(train_dataset, epochs=2, steps_per_epoch=115,
----> 2 validation_data=valid_dataset, validation_steps=7)
11 frames
/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: assertion failed: [Condition x == y did not hold element-wise:] [x (loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/Shape_1:0) = ] [32 1] [y (loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/strided_slice:0) = ] [32 128]
[[node loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert (defined at <ipython-input-15-f52b3b390355>:2) ]]
[[Reshape_824/_584]]
(1) Invalid argument: assertion failed: [Condition x == y did not hold element-wise:] [x (loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/Shape_1:0) = ] [32 1] [y (loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/strided_slice:0) = ] [32 128]
[[node loss/output_1_loss/SparseSoftmaxCrossEntropyWithLogits/assert_equal_1/Assert/Assert (defined at <ipython-input-15-f52b3b390355>:2) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_477134]
Function call stack:
distributed_function -> distributed_function
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2581/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2580 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2580/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2580/comments | https://api.github.com/repos/huggingface/transformers/issues/2580/events | https://github.com/huggingface/transformers/issues/2580 | 551,802,588 | MDU6SXNzdWU1NTE4MDI1ODg= | 2,580 | glue_convert_examples_to_features in glue.py runs to errors | {
"login": "nargesam",
"id": 24642904,
"node_id": "MDQ6VXNlcjI0NjQyOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/24642904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nargesam",
"html_url": "https://github.com/nargesam",
"followers_url": "https://api.github.com/users/nargesam/followers",
"following_url": "https://api.github.com/users/nargesam/following{/other_user}",
"gists_url": "https://api.github.com/users/nargesam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nargesam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nargesam/subscriptions",
"organizations_url": "https://api.github.com/users/nargesam/orgs",
"repos_url": "https://api.github.com/users/nargesam/repos",
"events_url": "https://api.github.com/users/nargesam/events{/privacy}",
"received_events_url": "https://api.github.com/users/nargesam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# Load dataset, tokenizer, model from pretrained model/vocabulary\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\nmodel = TFBertForSequenceClassification.from_pretrained('bert-base-cased', force_download=True)\r\ndata = tensorflow_datasets.load('glue/mrpc')\r\n\r\nprint(\"checkpoint on data\")\r\n# Prepare dataset for GLUE as a tf.data.Dataset instance\r\n\r\nprint(type(data['train']))\r\ntrain_data = data['train']\r\ntrain_dataset = glue_convert_examples_to_features(train_data, tokenizer, max_length=128 , task='mrpc')\r\nvalid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')\r\ntrain_dataset = train_dataset.shuffle(100).batch(32).repeat(2)\r\nvalid_dataset = valid_dataset.batch(64)\r\n\r\n\r\nThis is the whole code I am running!! \r\nthis is the error: \r\n\r\n\r\ntrain_dataset = glue_convert_examples_to_features(train_data, tokenizer, max_length=128 , task='mrpc')\r\n File \"./transformers/data/processors/glue.py\", line 84, in glue_convert_examples_to_features\r\n logger.info(\"Writing example %d/%d\" % (ex_index, len(examples)))\r\nTypeError: object of type '_OptionsDataset' has no len()",
"Hi! Indeed, it seems there was an error in the code. It was fixed by @neonbjb in https://github.com/huggingface/transformers/pull/2564.\r\n\r\nCould you install from source and let me know if it fixes your issue? `pip install git+https://github.com/huggingface/transformers`. Thank you.",
"Yes, the issue is resolved. Thank you!! "
] | 1,579 | 1,579 | 1,579 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using: Bert
Language I am using the model on (English)
The problem arise when using:
* [x] the official example scripts: (give details):
I have a venv running with TF2.0 and transformers, and I am running mrpc dataset with BERT. Here's the code:
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
valid_dataset = glue_convert_examples_to_features(data['validation'], tokenizer, max_length=128, task='mrpc')
train_dataset = train_dataset.shuffle(100).batch(32).repeat(2)
valid_dataset = valid_dataset.batch(64)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
## To Reproduce
Steps to reproduce the behavior:
I am using the official code on hugging face for BERT and sequence calssification and it is not working...
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertForSequenceClassification.from_pretrained('bert-base-cased', force_download=True)
data = tensorflow_datasets.load('glue/mrpc')
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
Error message:
File "hugf-bert.py", line 20, in <module>
train_dataset = glue_convert_examples_to_features(data['train'], tokenizer, max_length=128, task='mrpc')
File "/Users/ns5kn/Documents/insight/transformers/src/transformers/data/processors/glue.py", line 84, in glue_convert_examples_to_features
logger.info("Writing example %d/%d" % (ex_index, len(examples)))
TypeError: object of type '_OptionsDataset' has no len()
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Environment
* OS: mac 10.14
* Python version: 3.6
* Tensorflow version: 2.0
* PyTorch Transformers version (or branch):
* Using GPU ?
* Distributed or parallel setup ?
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2580/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2579 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2579/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2579/comments | https://api.github.com/repos/huggingface/transformers/issues/2579/events | https://github.com/huggingface/transformers/pull/2579 | 551,768,277 | MDExOlB1bGxSZXF1ZXN0MzY0NDI1Nzg4 | 2,579 | Added saving to custom dir in PPLM train | {
"login": "ashirviskas",
"id": 11985242,
"node_id": "MDQ6VXNlcjExOTg1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/11985242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashirviskas",
"html_url": "https://github.com/ashirviskas",
"followers_url": "https://api.github.com/users/ashirviskas/followers",
"following_url": "https://api.github.com/users/ashirviskas/following{/other_user}",
"gists_url": "https://api.github.com/users/ashirviskas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashirviskas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashirviskas/subscriptions",
"organizations_url": "https://api.github.com/users/ashirviskas/orgs",
"repos_url": "https://api.github.com/users/ashirviskas/repos",
"events_url": "https://api.github.com/users/ashirviskas/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashirviskas/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The CI was a dependency glitch that was fixed on master since, you can rebase on master if you want it to go away.",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@cb13c8a`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2579 +/- ##\n=========================================\n Coverage ? 74.61% \n=========================================\n Files ? 87 \n Lines ? 14802 \n Branches ? 0 \n=========================================\n Hits ? 11044 \n Misses ? 3758 \n Partials ? 0\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=footer). Last update [cb13c8a...36339e7](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=h1) Report\n> :exclamation: No coverage uploaded for pull request base (`master@cb13c8a`). [Click here to learn what that means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2579 +/- ##\n=========================================\n Coverage ? 74.61% \n=========================================\n Files ? 87 \n Lines ? 14802 \n Branches ? 0 \n=========================================\n Hits ? 11044 \n Misses ? 3758 \n Partials ? 0\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=footer). Last update [cb13c8a...36339e7](https://codecov.io/gh/huggingface/transformers/pull/2579?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"@julien-c I'll just do a new PR."
] | 1,579 | 1,579 | 1,579 | NONE | null | Just an option to save the model to other than the working directory.
Default functionality hasn't changed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2579/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2579",
"html_url": "https://github.com/huggingface/transformers/pull/2579",
"diff_url": "https://github.com/huggingface/transformers/pull/2579.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2579.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2578 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2578/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2578/comments | https://api.github.com/repos/huggingface/transformers/issues/2578/events | https://github.com/huggingface/transformers/issues/2578 | 551,763,691 | MDU6SXNzdWU1NTE3NjM2OTE= | 2,578 | GPT2TokenizerFast object has no attribute 'with_pre_tokenizer' | {
"login": "armheb",
"id": 18460769,
"node_id": "MDQ6VXNlcjE4NDYwNzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/18460769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/armheb",
"html_url": "https://github.com/armheb",
"followers_url": "https://api.github.com/users/armheb/followers",
"following_url": "https://api.github.com/users/armheb/following{/other_user}",
"gists_url": "https://api.github.com/users/armheb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/armheb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/armheb/subscriptions",
"organizations_url": "https://api.github.com/users/armheb/orgs",
"repos_url": "https://api.github.com/users/armheb/repos",
"events_url": "https://api.github.com/users/armheb/events{/privacy}",
"received_events_url": "https://api.github.com/users/armheb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, could you specify your versions of `tokenizers` and `transformers`? I believe you're running on an older version of `transformers`, could you install from source: `pip install git+https://github.com/huggingface/transformers` and let me know if it fixes this issue? Thank you.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,590 | 1,590 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): GPT2
Language I am using the model on (English, Chinese....): English
The problem arise when using:
tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
The tasks I am working on is:
* [ ] my own task or dataset:
## To Reproduce
Steps to reproduce the behavior:
1.tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
2.
3.
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. --><ipython-input-97-b0a0cde738fe> in <module>
----> 1 tokenizer = GPT2TokenizerFast.from_pretrained('gpt2')
/media/bahram/New Volume/Projects/Python_LM/transformers/tokenization_utils.py in from_pretrained(cls, *inputs, **kwargs)
307
308 """
--> 309 return cls._from_pretrained(*inputs, **kwargs)
310
311 @classmethod
/media/bahram/New Volume/Projects/Python_LM/transformers/tokenization_utils.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
459 # Instantiate tokenizer.
460 try:
--> 461 tokenizer = cls(*init_inputs, **init_kwargs)
462 except OSError:
463 raise OSError(
/media/bahram/New Volume/Projects/Python_LM/transformers/tokenization_gpt2.py in __init__(self, vocab_file, merges_file, unk_token, bos_token, eos_token, pad_to_max_length, add_prefix_space, max_length, stride, truncation_strategy, **kwargs)
AttributeError: 'Tokenizer' object has no attribute 'with_pre_tokenizer'
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
AttributeError: 'Tokenizer' object has no attribute 'with_pre_tokenizer'
## Environment
* OS: Ubuntu 18
* Python version: 3.7
* PyTorch version:
* PyTorch Transformers version (or branch):
* Using GPU yes
* Distributed or parallel setup No
* Any other relevant information:
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2578/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2577 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2577/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2577/comments | https://api.github.com/repos/huggingface/transformers/issues/2577/events | https://github.com/huggingface/transformers/issues/2577 | 551,738,014 | MDU6SXNzdWU1NTE3MzgwMTQ= | 2,577 | always occur error:AssertionError | {
"login": "jiangjiaqi6",
"id": 33390819,
"node_id": "MDQ6VXNlcjMzMzkwODE5",
"avatar_url": "https://avatars.githubusercontent.com/u/33390819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangjiaqi6",
"html_url": "https://github.com/jiangjiaqi6",
"followers_url": "https://api.github.com/users/jiangjiaqi6/followers",
"following_url": "https://api.github.com/users/jiangjiaqi6/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangjiaqi6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangjiaqi6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangjiaqi6/subscriptions",
"organizations_url": "https://api.github.com/users/jiangjiaqi6/orgs",
"repos_url": "https://api.github.com/users/jiangjiaqi6/repos",
"events_url": "https://api.github.com/users/jiangjiaqi6/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangjiaqi6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It seems like the paths to your data files are incorrect.\r\n\r\nAre you sure they're not at `./dataset/wiki.train.raw ` (notice the leading `.`)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
I put the wiki.train.raw and the wiki.test.raw in /dataset,
then run the command:
python run_lm_finetuning.py --output_dir=output --model_type=roberta --model_name_or_path=roberta-base --do_train --train_data_file=/dataset/wiki.train.raw --do_eval --eval_data_file=/dataset/wiki.test.raw --mlm
errors:
Traceback (most recent call last):
File "run_lm_finetuning.py", line 717, in <module>
main()
File "run_lm_finetuning.py", line 662, in main
train_dataset = load_and_cache_examples(args, tokenizer, evaluate=False)
File "run_lm_finetuning.py", line 127, in load_and_cache_examples
block_size=args.block_size,
File "run_lm_finetuning.py", line 86, in __init__
assert os.path.isfile(file_path)
AssertionError
then I find the code in run_lm_finetuning.py,
I don't know change file_path in def __init__:
def __init__(self, tokenizer, args, file_path="train", block_size=512):
I hope you can help me out. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2577/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2577/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2576 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2576/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2576/comments | https://api.github.com/repos/huggingface/transformers/issues/2576/events | https://github.com/huggingface/transformers/pull/2576 | 551,724,704 | MDExOlB1bGxSZXF1ZXN0MzY0Mzk0NDUx | 2,576 | fill_mask helper | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2576?src=pr&el=h1) Report\n> Merging [#2576](https://codecov.io/gh/huggingface/transformers/pull/2576?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9d87eafd118739a4c121d69d7cff425264f01e1c?src=pr&el=desc) will **decrease** coverage by `29.56%`.\n> The diff coverage is `6.25%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2576?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2576 +/- ##\n===========================================\n- Coverage 74.51% 44.94% -29.57% \n===========================================\n Files 87 87 \n Lines 14920 14951 +31 \n===========================================\n- Hits 11117 6720 -4397 \n- Misses 3803 8231 +4428\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2576?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `77.1% <0%> (-21.69%)` | :arrow_down: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `0% <0%> (-61.1%)` | :arrow_down: |\n| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `61.51% <11.76%> (-6.95%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `0% <0%> (-100%)` | :arrow_down: |\n| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `0% <0%> (-96%)` | :arrow_down: |\n| [src/transformers/modeling\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `0% <0%> (-95.86%)` | :arrow_down: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `0% <0%> (-94.28%)` | :arrow_down: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `0% <0%> (-87.91%)` | :arrow_down: |\n| [src/transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `0% <0%> (-86.42%)` | :arrow_down: |\n| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/2576/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2576?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2576?src=pr&el=footer). Last update [9d87eaf...55069c7](https://codecov.io/gh/huggingface/transformers/pull/2576?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This feels like a very similar method to [generate](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L567). If we want to add it to the library's internals, don't you think it would make sense to add it directly to the models' internals like `generate`? I feel it would be consistent to mirror it but for mask filling.\r\n\r\nThis way it could be called like this:\r\n\r\n```py\r\nmodel.fill_mask(x)\r\n```\r\n\r\nFurthermore, I think that handling strings is nice, but handling lists of tokens would be better, like `generate` does -> more coherent with `generate`, only handle model data (tokens) and no need to pass a tokenizer to a `modeling_utils` internal method.",
"Ok I'll take a deeper look next week @LysandreJik ",
"Ok, I see what you mean. \r\n\r\nOur desired use case is that we would be able to do this in one line, on a string. It's a usage sample we've been seeing more and more (initially from RoBERTa), as it lets one check that the LM works well, in one line.\r\n\r\nExamples:\r\n- https://github.com/musixmatchresearch/umberto#fairseq-1\r\n- https://camembert-model.fr/\r\n\r\nI see two options:\r\n- define it in its own utils-like file, e.g. `fill_mask.py` or `hub_utils.py` or whatever\r\n- create a `FillMaskPipeline`, as the pipeline already has a tokenizer and a model\r\n\r\nWdyt?",
"The second option would look something like the last commit and would be used like:\r\n\r\n```python\r\nmasked_line = \"Le camembert est <mask> :)\"\r\n\r\nmodel = CamembertForMaskedLM.from_pretrained(\"camembert-base\")\r\ntokenizer = CamembertTokenizer.from_pretrained(\"camembert-base\")\r\n\r\n\r\nfill_mask = FillMaskPipeline(model, tokenizer)\r\n\r\nprint(fill_mask(masked_line))\r\n```",
"I'm hyped by the pipeline option. I believe filling a mask, being the main use-case of an MLM trained model would be a very nice pipeline to have, alongside sequence classification, question answering and named entity recognition.",
"I think you are very right and I wholeheartedly agree with you. <3 <3",
"Looks good to me, you can merge",
"@mfuntowicz As the pipeline exposes its tokenizer, I'm guessing you can already do something like (untested):\r\n\r\n```python\r\nfill_mask = pipeline(\"fill-mask\")\r\nfill_mask(f\"My name is {fill_mask.tokenizer.mask_token}.\")\r\n```",
"Nice one @julien-c!"
] | 1,579 | 1,580 | 1,580 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2576/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2576/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2576",
"html_url": "https://github.com/huggingface/transformers/pull/2576",
"diff_url": "https://github.com/huggingface/transformers/pull/2576.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2576.patch",
"merged_at": 1580426142000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2575 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2575/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2575/comments | https://api.github.com/repos/huggingface/transformers/issues/2575/events | https://github.com/huggingface/transformers/pull/2575 | 551,721,001 | MDExOlB1bGxSZXF1ZXN0MzY0MzkxNjMx | 2,575 | Fix examples/run_tf_ner.py label encoding error #2559 | {
"login": "HuiyingLi",
"id": 1331543,
"node_id": "MDQ6VXNlcjEzMzE1NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1331543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HuiyingLi",
"html_url": "https://github.com/HuiyingLi",
"followers_url": "https://api.github.com/users/HuiyingLi/followers",
"following_url": "https://api.github.com/users/HuiyingLi/following{/other_user}",
"gists_url": "https://api.github.com/users/HuiyingLi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HuiyingLi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HuiyingLi/subscriptions",
"organizations_url": "https://api.github.com/users/HuiyingLi/orgs",
"repos_url": "https://api.github.com/users/HuiyingLi/repos",
"events_url": "https://api.github.com/users/HuiyingLi/events{/privacy}",
"received_events_url": "https://api.github.com/users/HuiyingLi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2575?src=pr&el=h1) Report\n> Merging [#2575](https://codecov.io/gh/huggingface/transformers/pull/2575?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1a8e87be4e2a1b551175bd6f0f749f3d2289010f?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2575?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2575 +/- ##\n=======================================\n Coverage 74.53% 74.53% \n=======================================\n Files 87 87 \n Lines 14819 14819 \n=======================================\n Hits 11046 11046 \n Misses 3773 3773\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2575?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2575?src=pr&el=footer). Last update [1a8e87b...bd3fe2f](https://codecov.io/gh/huggingface/transformers/pull/2575?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | This is an explanation and a proposed fix for #2559
The code set `pad_token_label_id = 0`, and increase the total number of labels `num_labels = len(labels) + 1`, but made no change to the label list. Thus the first label in label list has the same index as pad_token_label_id.
Following instructions in README take GermEval 2014 as an example, for one sentence in test dataset the token `Aachen` is labeled as `B-LOC` (`B-LOC` is the first label in label list), yet because of the collision with pad_token_label_id, both pad tokens and `Aachen` are encoded as 0:

And the test_predictions.txt is also off by one:
```
1951 I-PERpart
bis I-PERpart
1953 I-PERpart
wurde I-PERpart
...
```
The fix adds a placeholder label `[PAD]` at position 0 when loading the datasets and all labels positions are shifted by 1. The resulting encoding for the same sample sentence:

And the test_predictions.txt thus has correct index:
```
1951 O
bis O
1953 O
wurde O
...
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2575/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2575/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2575",
"html_url": "https://github.com/huggingface/transformers/pull/2575",
"diff_url": "https://github.com/huggingface/transformers/pull/2575.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2575.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2574 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2574/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2574/comments | https://api.github.com/repos/huggingface/transformers/issues/2574/events | https://github.com/huggingface/transformers/issues/2574 | 551,715,399 | MDU6SXNzdWU1NTE3MTUzOTk= | 2,574 | is RoBERTa-base.json in s3 wrong? | {
"login": "skyf0cker",
"id": 32331521,
"node_id": "MDQ6VXNlcjMyMzMxNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/32331521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/skyf0cker",
"html_url": "https://github.com/skyf0cker",
"followers_url": "https://api.github.com/users/skyf0cker/followers",
"following_url": "https://api.github.com/users/skyf0cker/following{/other_user}",
"gists_url": "https://api.github.com/users/skyf0cker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/skyf0cker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/skyf0cker/subscriptions",
"organizations_url": "https://api.github.com/users/skyf0cker/orgs",
"repos_url": "https://api.github.com/users/skyf0cker/repos",
"events_url": "https://api.github.com/users/skyf0cker/events{/privacy}",
"received_events_url": "https://api.github.com/users/skyf0cker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes this file is correct."
] | 1,579 | 1,579 | 1,579 | NONE | null | Q:
when i open the json file downloaded from this url:
`https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-mnli-vocab.json`,
i found there is so many wrong strange code in it,like this:
> {"<s>": 0, "<pad>": 1, "</s>": 2, "<unk>": 3, ".": 4, "Δ the": 5, ",": 6, "Δ to": 7, "Δ and": 8, "Δ of": 9, "Δ a": 10, "Δ in": 11, "-": 12, "Δ for": 13, "Δ that": 14, "Δ on": 15, "Δ is": 16, "Γ’Δ’": 17, "'s": 18, "Δ with": 19, "Δ The": 20, "Δ was": 21, "Δ \"": 22, "Δ at": 23, "Δ it": 24, "Δ as": 25, "Δ said": 26, "Δ»": 27, "Δ be": 28, "s": 29, "Δ by": 30, "Δ from": 31, "Δ are": 32, "Δ have": 33, "Δ has": 34, ":": 35, "Δ (": 36, "Δ he": 37, "Δ I": 38, "Δ his": 39, "Δ will": 40, "Δ an": 41, "Δ this": 42, ")": 43, "Δ Γ’Δ’": 44, "Δ not": 45, "ΔΏ": 46, "Δ you": 47, "ΔΎ": 48, "Δ their": 49, "Δ or": 50, "Δ they": 51, "Δ we": 52, "Δ but": 53, "Δ who": 54, "Δ more": 55, "Δ had": 56, "Δ been": 57, "Δ were": 58, "Δ about": 59, ",\"": 60, "Δ which": 61, "Δ up": 62, "Δ its": 63, "Δ can": 64, "Δ one": 65, "Δ out": 66, "Δ also": 67, "Δ $": 68, "Δ her": 69, "Δ all": 70, "Δ after": 71, ".\"": 72, "/": 73, "Δ would": 74, "'t": 75, "Δ year": 76, "Δ when": 77, "Δ first": 78, "Δ she": 79, "Δ two": 80, "Δ over": 81, "Δ people": 82, "Δ A": 83, "Δ our": 84, "Δ It": 85, "Δ time": 86, "Δ than": 87, "Δ into": 88, "Δ there": 89, "t": 90, "Δ He": 91, "Δ new": 92, "Δ Γ’Δ’ΔΆ": 93, "Δ last": 94, "Δ just": 95, "Δ In": 96, "Δ other": 97, "Δ so": 98, "Δ what": 99, "I": 100, "Δ like": 101, "a": 102, "Δ some": 103, "S": 104, "ΓΒ«": 105, "Δ them": 106, "Δ years": 107, "'": 108, "Δ do": 109, "Δ your": 110, "Δ -": 111, "Δ 1": 112, "\"": 113, "Δ if": 114, "Δ could": 115, "?": 116, "Δ no": 117, "i": 118, "m": 119, "Δ get": 120, "Δ U": 121, "Δ now": 122, "Δ him": 123, "Δ back": 124, "Δ But": 125, "Δ Γ’Δ’Δ΅": 126, "Δ my": 127, "Δ '": 128, "Δ only": 129, "Δ three": 130, ";": 131, "Δ 2": 132, "The": 133, "1": 134, "Δ percent": 135, "Δ against": 136, "Δ before": 137, ...
**this really make me confused .
hope for your help,and i will appreciate it!**
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2574/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2573 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2573/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2573/comments | https://api.github.com/repos/huggingface/transformers/issues/2573/events | https://github.com/huggingface/transformers/issues/2573 | 551,709,489 | MDU6SXNzdWU1NTE3MDk0ODk= | 2,573 | Is RoBERTa's pair of sequences tokenizer correct with double </s> | {
"login": "keldLundgaard",
"id": 1884495,
"node_id": "MDQ6VXNlcjE4ODQ0OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1884495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keldLundgaard",
"html_url": "https://github.com/keldLundgaard",
"followers_url": "https://api.github.com/users/keldLundgaard/followers",
"following_url": "https://api.github.com/users/keldLundgaard/following{/other_user}",
"gists_url": "https://api.github.com/users/keldLundgaard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keldLundgaard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keldLundgaard/subscriptions",
"organizations_url": "https://api.github.com/users/keldLundgaard/orgs",
"repos_url": "https://api.github.com/users/keldLundgaard/repos",
"events_url": "https://api.github.com/users/keldLundgaard/events{/privacy}",
"received_events_url": "https://api.github.com/users/keldLundgaard/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this is how RoBERTa was trained. "
] | 1,579 | 1,579 | 1,579 | NONE | null | In RoBERTa's build_input_with_special_tokens, the comment says
```
A RoBERTa sequence has the following format:
single sequence: <s> X </s>
pair of sequences: <s> A </s></s> B </s>
```
I find the double `</s></s>` very peculiar. Can you please verify that it should not be `</s><s>` (as a normal XML tag).
Thank you for the amazing work here. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2573/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2572 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2572/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2572/comments | https://api.github.com/repos/huggingface/transformers/issues/2572/events | https://github.com/huggingface/transformers/issues/2572 | 551,705,738 | MDU6SXNzdWU1NTE3MDU3Mzg= | 2,572 | Bert TPU fine-tuning works on Colab but not in GCP | {
"login": "jswift24",
"id": 1891204,
"node_id": "MDQ6VXNlcjE4OTEyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1891204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jswift24",
"html_url": "https://github.com/jswift24",
"followers_url": "https://api.github.com/users/jswift24/followers",
"following_url": "https://api.github.com/users/jswift24/following{/other_user}",
"gists_url": "https://api.github.com/users/jswift24/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jswift24/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jswift24/subscriptions",
"organizations_url": "https://api.github.com/users/jswift24/orgs",
"repos_url": "https://api.github.com/users/jswift24/repos",
"events_url": "https://api.github.com/users/jswift24/events{/privacy}",
"received_events_url": "https://api.github.com/users/jswift24/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, your error states: \r\n\r\n```\r\nOne possible root cause is the client and server binaries are not built with the same version. Please make sure the operation or function is registered in the binary running in this process.\r\n```\r\n\r\nDo you have the same TensorFlow versions for your TPU and your VM?",
"The tensorflow version for my VM is 2.1.0. As I understand it, older TF versions are not supported by Huggingface. \r\n\r\nHow would I check the tf version on my TPU? Better yet, is there a recommendation or code sample to provision a Huggingface-compatible TPU?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> \r\n> \r\n> The tensorflow version for my VM is 2.1.0. As I understand it, older TF versions are not supported by Huggingface.\r\n> \r\n> How would I check the tf version on my TPU? Better yet, is there a recommendation or code sample to provision a Huggingface-compatible TPU?\r\n\r\nHi,\r\nI am facing the same issue. TF and TPU versions are the same.\r\nDid you manage to have it resolved?\r\nThanks"
] | 1,579 | 1,601 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: (give details)
* [x] my own modified scripts: (give details)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name): SQUAD
* [ ] my own task or dataset: (give details)
## To Reproduce
I'm trying to fine-tune a BERT model on TPU. It works in Colab but fails when I switch to a paid TPU on GCP.
Steps to reproduce the behavior:
Jupyter notebook code is as follows:
```
[1] model = TFBertModel.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
# works
[2] cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
tpu='[My TPU]',
zone='us-central1-a',
project='[My Project]'
)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
tpu_strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)
#Also works. Got a bunch of startup messages from the TPU - all good.
[3] with tpu_strategy.scope():
model = TFBertModel.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
#Generates the error below (long). Same line works in Colab.
```
---------------------------------------------------------------------------
```
NotFoundError Traceback (most recent call last)
<ipython-input-14-2cfc1a238903> in <module>
1 with tpu_strategy.scope():
----> 2 model = TFBertModel.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
~/.local/lib/python3.5/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
309 return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
310
--> 311 ret = model(model.dummy_inputs, training=False) # build the network with dummy inputs
312
313 assert os.path.isfile(resolved_archive_file), "Error retrieving file {}".format(resolved_archive_file)
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
~/.local/lib/python3.5/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, **kwargs)
688
689 def call(self, inputs, **kwargs):
--> 690 outputs = self.bert(inputs, **kwargs)
691 return outputs
692
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
~/.local/lib/python3.5/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, training)
548
549 embedding_output = self.embeddings([input_ids, position_ids, token_type_ids, inputs_embeds], training=training)
--> 550 encoder_outputs = self.encoder([embedding_output, extended_attention_mask, head_mask], training=training)
551
552 sequence_output = encoder_outputs[0]
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
~/.local/lib/python3.5/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, training)
365 all_hidden_states = all_hidden_states + (hidden_states,)
366
--> 367 layer_outputs = layer_module([hidden_states, attention_mask, head_mask[i]], training=training)
368 hidden_states = layer_outputs[0]
369
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
~/.local/lib/python3.5/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, training)
341 hidden_states, attention_mask, head_mask = inputs
342
--> 343 attention_outputs = self.attention([hidden_states, attention_mask, head_mask], training=training)
344 attention_output = attention_outputs[0]
345 intermediate_output = self.intermediate(attention_output)
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
~/.local/lib/python3.5/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, training)
290 input_tensor, attention_mask, head_mask = inputs
291
--> 292 self_outputs = self.self_attention([input_tensor, attention_mask, head_mask], training=training)
293 attention_output = self.dense_output([self_outputs[0], input_tensor], training=training)
294 outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
~/.local/lib/python3.5/site-packages/transformers/modeling_tf_bert.py in call(self, inputs, training)
222
223 batch_size = shape_list(hidden_states)[0]
--> 224 mixed_query_layer = self.query(hidden_states)
225 mixed_key_layer = self.key(hidden_states)
226 mixed_value_layer = self.value(hidden_states)
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
820 with base_layer_utils.autocast_context_manager(
821 self._compute_dtype):
--> 822 outputs = self.call(cast_inputs, *args, **kwargs)
823 self._handle_activity_regularization(inputs, outputs)
824 self._set_mask_metadata(inputs, outputs, input_masks)
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/keras/layers/core.py in call(self, inputs)
1142 outputs = gen_math_ops.mat_mul(inputs, self.kernel)
1143 if self.use_bias:
-> 1144 outputs = nn.bias_add(outputs, self.bias)
1145 if self.activation is not None:
1146 return self.activation(outputs) # pylint: disable=not-callable
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/nn_ops.py in bias_add(value, bias, data_format, name)
2756 else:
2757 return gen_nn_ops.bias_add(
-> 2758 value, bias, data_format=data_format, name=name)
2759
2760
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/gen_nn_ops.py in bias_add(value, bias, data_format, name)
675 try:
676 return bias_add_eager_fallback(
--> 677 value, bias, data_format=data_format, name=name, ctx=_ctx)
678 except _core._SymbolicException:
679 pass # Add nodes to the TensorFlow graph.
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/gen_nn_ops.py in bias_add_eager_fallback(value, bias, data_format, name, ctx)
703 data_format = "NHWC"
704 data_format = _execute.make_str(data_format, "data_format")
--> 705 _attr_T, _inputs_T = _execute.args_to_matching_eager([value, bias], ctx)
706 (value, bias) = _inputs_T
707 _inputs_flat = [value, bias]
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/execute.py in args_to_matching_eager(l, ctx, default_dtype)
265 dtype = ret[-1].dtype
266 else:
--> 267 ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
268
269 # TODO(slebedev): consider removing this as it leaks a Keras concept.
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/eager/execute.py in <listcomp>(.0)
265 dtype = ret[-1].dtype
266 else:
--> 267 ret = [ops.convert_to_tensor(t, dtype, ctx=ctx) for t in l]
268
269 # TODO(slebedev): consider removing this as it leaks a Keras concept.
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1312
1313 if ret is None:
-> 1314 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1315
1316 if ret is NotImplemented:
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/values.py in _tensor_conversion_mirrored(var, dtype, name, as_ref)
1174 # allowing instances of the class to be used as tensors.
1175 def _tensor_conversion_mirrored(var, dtype=None, name=None, as_ref=False):
-> 1176 return var._dense_var_to_tensor(dtype=dtype, name=name, as_ref=as_ref) # pylint: disable=protected-access
1177
1178
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/values.py in _dense_var_to_tensor(self, dtype, name, as_ref)
908 if _enclosing_tpu_context() is None:
909 return super(TPUVariableMixin, self)._dense_var_to_tensor(
--> 910 dtype=dtype, name=name, as_ref=as_ref)
911 # pylint: enable=protected-access
912 elif dtype is not None and dtype != self.dtype:
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/values.py in _dense_var_to_tensor(self, dtype, name, as_ref)
1164 assert not as_ref
1165 return ops.convert_to_tensor(
-> 1166 self.get(), dtype=dtype, name=name, as_ref=as_ref)
1167
1168 def _clone_with_new_values(self, new_values):
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/values.py in get(self, device)
835 def get(self, device=None):
836 if (_enclosing_tpu_context() is None) or (device is not None):
--> 837 return super(TPUVariableMixin, self).get(device=device)
838 else:
839 raise NotImplementedError(
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/values.py in get(self, device)
320 device = distribute_lib.get_update_device()
321 if device is None:
--> 322 return self._get_cross_replica()
323 device = device_util.canonicalize(device)
324 return self._device_map.select_for_device(self._values, device)
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/distribute/values.py in _get_cross_replica(self)
1136 replica_id = self._device_map.replica_for_device(device)
1137 if replica_id is None:
-> 1138 return array_ops.identity(self.primary)
1139 return array_ops.identity(self._values[replica_id])
1140
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/util/dispatch.py in wrapper(*args, **kwargs)
178 """Call target, and fall back on dispatchers if there is a TypeError."""
179 try:
--> 180 return target(*args, **kwargs)
181 except (TypeError, ValueError):
182 # Note: convert_to_eager_tensor currently raises a ValueError, not a
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/array_ops.py in identity(input, name)
265 # variables. Variables have correct handle data when graph building.
266 input = ops.convert_to_tensor(input)
--> 267 ret = gen_array_ops.identity(input, name=name)
268 # Propagate handle data for happier shape inference for resource variables.
269 if hasattr(input, "_handle_data"):
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/ops/gen_array_ops.py in identity(input, name)
3824 pass # Add nodes to the TensorFlow graph.
3825 except _core._NotOkStatusException as e:
-> 3826 _ops.raise_from_not_ok_status(e, name)
3827 # Add nodes to the TensorFlow graph.
3828 _, _, _op, _outputs = _op_def_library._apply_op_helper(
/usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py in raise_from_not_ok_status(e, name)
6604 message = e.message + (" name: " + name if name is not None else "")
6605 # pylint: disable=protected-access
-> 6606 six.raise_from(core._status_to_exception(e.code, message), None)
6607 # pylint: enable=protected-access
6608
/usr/local/lib/python3.5/dist-packages/six.py in raise_from(value, from_value)
NotFoundError: '_MklMatMul' is neither a type of a primitive operation nor a name of a function registered in binary running on n-aa2fcfb7-w-0. One possible root cause is the client and server binaries are not built with the same version. Please make sure the operation or function is registered in the binary running in this process. [Op:Identity]
```
<!-- If you have a code sample, error messages, stack traces, please provide it here as well. -->
## Expected behavior
Would like the model to start...
<!-- A clear and concise description of what you expected to happen. -->
## Environment
GCP AI Notebook. https://console.cloud.google.com/ai-platform/notebooks
* OS:
* Python version: 3.5
* PyTorch version: Tensorflow 2.1.0
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? Using TPU
* Distributed or parallel setup ? Distributed
* Any other relevant information:
## Additional context
Lots of config detail in the code above.
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2572/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2571 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2571/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2571/comments | https://api.github.com/repos/huggingface/transformers/issues/2571/events | https://github.com/huggingface/transformers/issues/2571 | 551,681,286 | MDU6SXNzdWU1NTE2ODEyODY= | 2,571 | Why isn't BERT doing wordpiece tokenization? | {
"login": "jianwolf",
"id": 24360583,
"node_id": "MDQ6VXNlcjI0MzYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/24360583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianwolf",
"html_url": "https://github.com/jianwolf",
"followers_url": "https://api.github.com/users/jianwolf/followers",
"following_url": "https://api.github.com/users/jianwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/jianwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianwolf/subscriptions",
"organizations_url": "https://api.github.com/users/jianwolf/orgs",
"repos_url": "https://api.github.com/users/jianwolf/repos",
"events_url": "https://api.github.com/users/jianwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Even if I do `add_special_tokens=True` when encoding, I get\r\n\r\n```\r\n[CLS] why isn ' t my card working [SEP]\r\n```\r\n\r\nwhich is still not wordpiece tokenization.",
"When using `encode` and `decode` you're performing the full tokenization steps each time:\r\n\r\nencode: tokenizing -> convert tokens to ids\r\ndecode: convert tokens to ids -> detokenizing\r\n \r\nIf you want to see the middle step, you can use the `tokenize` method:\r\n\r\n```py\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\ntext = '''why isn't my card working'''\r\nprint(tokenizer.tokenize(text)) # ['why', 'isn', \"'\", 't', 'my', 'card', 'working']\r\n```\r\n\r\nAll the words are in the vocabulary, but if you use more complex words:\r\n\r\n```py\r\ntokenizer.tokenize(\"Why isn't my text tokenizing\") # ['why', 'isn', \"'\", 't', 'my', 'text', 'token', '##izing']\r\n```\r\n\r\nYou'll see the `##ing` you were looking for.\r\n\r\n",
"Looks great! Thank you!"
] | 1,579 | 1,579 | 1,579 | NONE | null | My code is
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
text = '''why isn't my card working'''
encoded = tokenizer.encode(text, add_special_tokens=False)
text_tokenized = tokenizer.decode(encoded, clean_up_tokenization_spaces=False)
print(text_tokenized)
```
and the output result (the tokenization) is
```
why isn ' t my card working
```
But this isn't the wordpiece nokenization BERT should be using. E.g., `working` should be tokenized as `work ##ing`. Is there anything wrong with my code? And will the fact we are not using wordpiece tokenization decrease BERT's performance?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2571/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2570 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2570/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2570/comments | https://api.github.com/repos/huggingface/transformers/issues/2570/events | https://github.com/huggingface/transformers/pull/2570 | 551,672,846 | MDExOlB1bGxSZXF1ZXN0MzY0MzUzMTY1 | 2,570 | [run_lm_finetuning] Train from scratch | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2570?src=pr&el=h1) Report\n> Merging [#2570](https://codecov.io/gh/huggingface/transformers/pull/2570?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/65a89a89768f5922e51cdc7d49990d731e3f2c03?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2570?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2570 +/- ##\n=======================================\n Coverage 74.61% 74.61% \n=======================================\n Files 87 87 \n Lines 14802 14802 \n=======================================\n Hits 11044 11044 \n Misses 3758 3758\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2570?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2570?src=pr&el=footer). Last update [65a89a8...55939b5](https://codecov.io/gh/huggingface/transformers/pull/2570?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Looks good to me, feel free to merge it when you feel ready @julien-c ",
"Yo! merge this shit up!! ",
"Thanks, A Lot Guys your the best!"
] | 1,579 | 1,579 | 1,579 | MEMBER | null | Ability to train a model from scratch, rather than finetune a pretrained one. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2570/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2570",
"html_url": "https://github.com/huggingface/transformers/pull/2570",
"diff_url": "https://github.com/huggingface/transformers/pull/2570.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2570.patch",
"merged_at": 1579643859000
} |
https://api.github.com/repos/huggingface/transformers/issues/2569 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2569/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2569/comments | https://api.github.com/repos/huggingface/transformers/issues/2569/events | https://github.com/huggingface/transformers/pull/2569 | 551,651,488 | MDExOlB1bGxSZXF1ZXN0MzY0MzM1NTg2 | 2,569 | Add lower bound to tqdm for tqdm.auto | {
"login": "brendan-ai2",
"id": 16342367,
"node_id": "MDQ6VXNlcjE2MzQyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/16342367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brendan-ai2",
"html_url": "https://github.com/brendan-ai2",
"followers_url": "https://api.github.com/users/brendan-ai2/followers",
"following_url": "https://api.github.com/users/brendan-ai2/following{/other_user}",
"gists_url": "https://api.github.com/users/brendan-ai2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brendan-ai2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brendan-ai2/subscriptions",
"organizations_url": "https://api.github.com/users/brendan-ai2/orgs",
"repos_url": "https://api.github.com/users/brendan-ai2/repos",
"events_url": "https://api.github.com/users/brendan-ai2/events{/privacy}",
"received_events_url": "https://api.github.com/users/brendan-ai2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2569?src=pr&el=h1) Report\n> Merging [#2569](https://codecov.io/gh/huggingface/transformers/pull/2569?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/65a89a89768f5922e51cdc7d49990d731e3f2c03?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2569?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2569 +/- ##\n=======================================\n Coverage 74.61% 74.61% \n=======================================\n Files 87 87 \n Lines 14802 14802 \n=======================================\n Hits 11044 11044 \n Misses 3758 3758\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2569?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2569?src=pr&el=footer). Last update [65a89a8...3ef04e1](https://codecov.io/gh/huggingface/transformers/pull/2569?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed",
"Thanks!"
] | 1,579 | 1,579 | 1,579 | CONTRIBUTOR | null | - It appears that `tqdm` only introduced `tqdm.auto` in 4.27.
- See https://github.com/tqdm/tqdm/releases/tag/v4.27.0.
- Without a lower bound I received an error when importing `transformers` in an environment where I already had `tqdm` installed.
- `transformers` version:
```
$ pip list | grep transformers
transformers 2.3.0
```
- repro:
```
$ pip install tqdm==4.23
$ ipython
Python 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 19:16:44)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.4.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import transformers
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-279c49635b32> in <module>()
----> 1 import transformers
~/anaconda3/envs/allennlp/lib/python3.6/site-packages/transformers/__init__.py in <module>()
18
19 # Files and general utilities
---> 20 from .file_utils import (TRANSFORMERS_CACHE, PYTORCH_TRANSFORMERS_CACHE, PYTORCH_PRETRAINED_BERT_CACHE,
21 cached_path, add_start_docstrings, add_end_docstrings,
22 WEIGHTS_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME, CONFIG_NAME, MODEL_CARD_NAME,
~/anaconda3/envs/allennlp/lib/python3.6/site-packages/transformers/file_utils.py in <module>()
22 from botocore.exceptions import ClientError
23 import requests
---> 24 from tqdm.auto import tqdm
25 from contextlib import contextmanager
26 from . import __version__
ModuleNotFoundError: No module named 'tqdm.auto'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2569/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2569/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2569",
"html_url": "https://github.com/huggingface/transformers/pull/2569",
"diff_url": "https://github.com/huggingface/transformers/pull/2569.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2569.patch",
"merged_at": 1579303752000
} |
https://api.github.com/repos/huggingface/transformers/issues/2568 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2568/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2568/comments | https://api.github.com/repos/huggingface/transformers/issues/2568/events | https://github.com/huggingface/transformers/issues/2568 | 551,633,973 | MDU6SXNzdWU1NTE2MzM5NzM= | 2,568 | Finetuning ALBERT using examples/run_lm_finetuning.py | {
"login": "jianwolf",
"id": 24360583,
"node_id": "MDQ6VXNlcjI0MzYwNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/24360583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianwolf",
"html_url": "https://github.com/jianwolf",
"followers_url": "https://api.github.com/users/jianwolf/followers",
"following_url": "https://api.github.com/users/jianwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/jianwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianwolf/subscriptions",
"organizations_url": "https://api.github.com/users/jianwolf/orgs",
"repos_url": "https://api.github.com/users/jianwolf/repos",
"events_url": "https://api.github.com/users/jianwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You're right, ALBERT should work out of the box with the fine tuning script as addressed at #2008 by @thomwolf. It's not too tough to fine-tune ALBERT with the script as reference, and there should also be a PR to add ALBERT and some other language models sometime in the near future",
"Thank you!"
] | 1,579 | 1,579 | 1,579 | NONE | null | ## π Feature
The current run_lm_finetuning.py script seems to not have ALBERT added. We should be able to finetune ALBERT in the same way we do to other models in your library.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2568/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2567 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2567/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2567/comments | https://api.github.com/repos/huggingface/transformers/issues/2567/events | https://github.com/huggingface/transformers/issues/2567 | 551,629,855 | MDU6SXNzdWU1NTE2Mjk4NTU= | 2,567 | Bert perform way worse than simple LSTM+Glove | {
"login": "zhaoxy92",
"id": 21225257,
"node_id": "MDQ6VXNlcjIxMjI1MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/21225257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoxy92",
"html_url": "https://github.com/zhaoxy92",
"followers_url": "https://api.github.com/users/zhaoxy92/followers",
"following_url": "https://api.github.com/users/zhaoxy92/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaoxy92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaoxy92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaoxy92/subscriptions",
"organizations_url": "https://api.github.com/users/zhaoxy92/orgs",
"repos_url": "https://api.github.com/users/zhaoxy92/repos",
"events_url": "https://api.github.com/users/zhaoxy92/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaoxy92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Well, how are you actually using it? Are you actually fine-tuning the model? What's your train loop?",
"I actually just solved the issue. It seems that the code I posted was\ncorrect, but it has to do with where I placed the scheduler.step().\n\nOn Mon, Jan 20, 2020 at 6:57 AM Bram Vanroy <[email protected]>\nwrote:\n\n> Well, how are you actually using it? Are you actually fine-tuning the\n> model? What's your train loop?\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2567?email_source=notifications&email_token=AFB56KPQC3V5YOYAOQN5YRLQ6WGRXA5CNFSM4KIOCMLKYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEJMMIKY#issuecomment-576242731>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AFB56KMFFHCCUDF7SFZNR63Q6WGRXANCNFSM4KIOCMLA>\n> .\n>\n",
"Glad to hear that you solved the issue. Please close the question. "
] | 1,579 | 1,580 | 1,580 | NONE | null | Hi, I am doing a very straightforward entity classification task, but Bert is not giving a good result. I am wondering if there is something wrong with my code.
My task is give a sentence and an entity boundary in that sentence, I predict entity type.
Here is my code to prepare input data. The basic idea is I have a sentence batch `batch_data`, then I use tokenizer to encode that sentence batch to `s_encoded`. Then to pad the encoded ids so that each encoded id list can have same length, I pad them with 0's according the longest sentence in that batch (`max_len`). Because I need to extract feature for the entity, so I use `entity_idx` to keep track of the entity boundary after it's tokenized.
Later, I just use `s_tensor` and `attn_mask_list` as input for BERT, and use `entity_idx` list to extract BERT feature for the entity.
Is there anything wrong with this part? Thank you!
```
s_bat = [x.tokens for x in batch_data]
s_encoded = tokenizer.batch_encode_plus([' '.join(x) for x in s_bat], add_special_tokens=True)
max_len = max([len(x) for x in s_encoded['input_ids']])
start_ids = [x.mention_start for x in batch_data]
ids_list = []
entity_idx = []
attn_mask_list = []
for i in range(len(s_bat)):
ids = s_encoded['input_ids'][i]
ids.extend([0 for _ in range(max_len-len(ids))])
attn = s_encoded['attention_mask'][i]
attn.extend([0 for _ in range(max_len-len(attn))])
ids_list.append(ids)
attn_mask_list.append(attn)
m_start_idx = len(self.tokenizer.encode(' '.join(s_bat[i][:start_ids[i]]))) if start_ids[i]>0 else 0
entity_idx.append([1+m_start_idx, 1+m_start_idx+len(m_encoded['input_ids'][i])]) #consider start special token
s_tensor = torch.LongTensor(ids_list)
attn_mask_list = torch.Tensor(attn_mask_list)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2567/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2567/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2566 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2566/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2566/comments | https://api.github.com/repos/huggingface/transformers/issues/2566/events | https://github.com/huggingface/transformers/issues/2566 | 551,571,037 | MDU6SXNzdWU1NTE1NzEwMzc= | 2,566 | question about tokenizer changes original sequence length | {
"login": "zhaoxy92",
"id": 21225257,
"node_id": "MDQ6VXNlcjIxMjI1MjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/21225257?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaoxy92",
"html_url": "https://github.com/zhaoxy92",
"followers_url": "https://api.github.com/users/zhaoxy92/followers",
"following_url": "https://api.github.com/users/zhaoxy92/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaoxy92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaoxy92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaoxy92/subscriptions",
"organizations_url": "https://api.github.com/users/zhaoxy92/orgs",
"repos_url": "https://api.github.com/users/zhaoxy92/repos",
"events_url": "https://api.github.com/users/zhaoxy92/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaoxy92/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,579 | 1,579 | NONE | null | Hi, I am working on an entity classification task where I know the entity boundary and the context.
when I use tokenizer to encode the entire sequence, some token got split up to word pieces that will change the original length of the sequence. I want to extract the states only for the entity, but since the seq length changed, does that mean I need to recalcuate the boundary? Is there a way to automatically do this? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2566/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2565 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2565/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2565/comments | https://api.github.com/repos/huggingface/transformers/issues/2565/events | https://github.com/huggingface/transformers/issues/2565 | 551,570,174 | MDU6SXNzdWU1NTE1NzAxNzQ= | 2,565 | Optionally convert output of FeatureExtraction pipeline to list | {
"login": "lambdaofgod",
"id": 3647577,
"node_id": "MDQ6VXNlcjM2NDc1Nzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lambdaofgod",
"html_url": "https://github.com/lambdaofgod",
"followers_url": "https://api.github.com/users/lambdaofgod/followers",
"following_url": "https://api.github.com/users/lambdaofgod/following{/other_user}",
"gists_url": "https://api.github.com/users/lambdaofgod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lambdaofgod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lambdaofgod/subscriptions",
"organizations_url": "https://api.github.com/users/lambdaofgod/orgs",
"repos_url": "https://api.github.com/users/lambdaofgod/repos",
"events_url": "https://api.github.com/users/lambdaofgod/events{/privacy}",
"received_events_url": "https://api.github.com/users/lambdaofgod/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
},
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"Hi @lambdaofgod,\r\n\r\n.tolist() call is there as Python's lists are more compatible with CSV/JSON serialisation than numpy array.\r\n\r\nDid you have a chance to get number of how actually slower it is ? If the difference is non negligible then we might have a look to optimise .tolist() only when serialising through JSON/CSV",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,587 | 1,587 | NONE | null | What is the purpose of .tolist in FeatureExtraction pipeline?
Why is it called? Is this because of some kind of compatibility issue?
If someone needs to use __call__ a lot, it only slows it down. I've tried subclassing FeatureExtractionPipeline, but it's very ugly since then I can't just use it with pipelines.pipeline. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2565/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2565/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2564 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2564/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2564/comments | https://api.github.com/repos/huggingface/transformers/issues/2564/events | https://github.com/huggingface/transformers/pull/2564 | 551,554,439 | MDExOlB1bGxSZXF1ZXN0MzY0MjU1NDMw | 2,564 | Fix glue processor failing on tf datasets | {
"login": "neonbjb",
"id": 833082,
"node_id": "MDQ6VXNlcjgzMzA4Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/833082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neonbjb",
"html_url": "https://github.com/neonbjb",
"followers_url": "https://api.github.com/users/neonbjb/followers",
"following_url": "https://api.github.com/users/neonbjb/following{/other_user}",
"gists_url": "https://api.github.com/users/neonbjb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neonbjb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neonbjb/subscriptions",
"organizations_url": "https://api.github.com/users/neonbjb/orgs",
"repos_url": "https://api.github.com/users/neonbjb/repos",
"events_url": "https://api.github.com/users/neonbjb/events{/privacy}",
"received_events_url": "https://api.github.com/users/neonbjb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2564?src=pr&el=h1) Report\n> Merging [#2564](https://codecov.io/gh/huggingface/transformers/pull/2564?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6d5049a24d5906ece3fd9b68fb3abe1a0b6bb049?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2564?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2564 +/- ##\n==========================================\n- Coverage 74.6% 74.58% -0.02% \n==========================================\n Files 87 87 \n Lines 14802 14805 +3 \n==========================================\n Hits 11043 11043 \n- Misses 3759 3762 +3\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2564?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/2564/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `27.53% <0%> (-0.34%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2564?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2564?src=pr&el=footer). Last update [6d5049a...17f172e](https://codecov.io/gh/huggingface/transformers/pull/2564?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"A proper fix for this would probably be to add a unit test that sends tf datasets through GLUE. Let me know if you want me to add that in as well.",
"That's great, thanks @neonbjb "
] | 1,579 | 1,579 | 1,579 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2564/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2564/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2564",
"html_url": "https://github.com/huggingface/transformers/pull/2564",
"diff_url": "https://github.com/huggingface/transformers/pull/2564.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2564.patch",
"merged_at": 1579538804000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2563 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2563/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2563/comments | https://api.github.com/repos/huggingface/transformers/issues/2563/events | https://github.com/huggingface/transformers/pull/2563 | 551,369,367 | MDExOlB1bGxSZXF1ZXN0MzY0MTAzMTM1 | 2,563 | Fix typo in examples/run_squad.py | {
"login": "whitedelay",
"id": 38174055,
"node_id": "MDQ6VXNlcjM4MTc0MDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/38174055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/whitedelay",
"html_url": "https://github.com/whitedelay",
"followers_url": "https://api.github.com/users/whitedelay/followers",
"following_url": "https://api.github.com/users/whitedelay/following{/other_user}",
"gists_url": "https://api.github.com/users/whitedelay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/whitedelay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/whitedelay/subscriptions",
"organizations_url": "https://api.github.com/users/whitedelay/orgs",
"repos_url": "https://api.github.com/users/whitedelay/repos",
"events_url": "https://api.github.com/users/whitedelay/events{/privacy}",
"received_events_url": "https://api.github.com/users/whitedelay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks!"
] | 1,579 | 1,579 | 1,579 | CONTRIBUTOR | null | Rul -> Run | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2563/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2563",
"html_url": "https://github.com/huggingface/transformers/pull/2563",
"diff_url": "https://github.com/huggingface/transformers/pull/2563.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2563.patch",
"merged_at": 1579278172000
} |
https://api.github.com/repos/huggingface/transformers/issues/2562 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2562/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2562/comments | https://api.github.com/repos/huggingface/transformers/issues/2562/events | https://github.com/huggingface/transformers/issues/2562 | 551,323,073 | MDU6SXNzdWU1NTEzMjMwNzM= | 2,562 | Architectures for Dialogue | {
"login": "lukasfrank",
"id": 9821158,
"node_id": "MDQ6VXNlcjk4MjExNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9821158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukasfrank",
"html_url": "https://github.com/lukasfrank",
"followers_url": "https://api.github.com/users/lukasfrank/followers",
"following_url": "https://api.github.com/users/lukasfrank/following{/other_user}",
"gists_url": "https://api.github.com/users/lukasfrank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukasfrank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukasfrank/subscriptions",
"organizations_url": "https://api.github.com/users/lukasfrank/orgs",
"repos_url": "https://api.github.com/users/lukasfrank/repos",
"events_url": "https://api.github.com/users/lukasfrank/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukasfrank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Did you check out [DialoGPT](https://huggingface.co/microsoft/DialoGPT-large) by @dreasysnail?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,590 | 1,590 | NONE | null | ## β Questions & Help
Hi π
I'm trying to build a dialogue system which should reply based on a history, memory (which is represented as a string) and a confidence if the memory content is correct and should be used.
Here two examples:
- history: _Hi_
memory: _name: Max_
confidence: _0.2_
=> expected output: _Hi, what's your name?_
- history: _Hi_
memory: _name: Max_
confidence: _0.9_
=> expected output: _Hi Max_
First of all, are there some best practices how to encode non textual input or inject such information into a model?
I already trained a Bert2Bert model which is not really performing very well. The generated response seems to be not conditioned on the encoder output.
Are there any recommendations what to try next?
Many thanks in advance for your hints!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2562/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2561 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2561/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2561/comments | https://api.github.com/repos/huggingface/transformers/issues/2561/events | https://github.com/huggingface/transformers/issues/2561 | 551,305,554 | MDU6SXNzdWU1NTEzMDU1NTQ= | 2,561 | Model upload and sharing - delete, update, rename.... | {
"login": "miki537",
"id": 59996473,
"node_id": "MDQ6VXNlcjU5OTk2NDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/59996473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miki537",
"html_url": "https://github.com/miki537",
"followers_url": "https://api.github.com/users/miki537/followers",
"following_url": "https://api.github.com/users/miki537/following{/other_user}",
"gists_url": "https://api.github.com/users/miki537/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miki537/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miki537/subscriptions",
"organizations_url": "https://api.github.com/users/miki537/orgs",
"repos_url": "https://api.github.com/users/miki537/repos",
"events_url": "https://api.github.com/users/miki537/events{/privacy}",
"received_events_url": "https://api.github.com/users/miki537/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @miki537, we already have `transformers-cli s3 rm ____` but it is not super well documented.\r\n\r\nI'll improve the documentation on that point. Also `transformers-cli upload` will overwrite existing files with the same name so you can already update files.\r\n\r\nS3 doesn't not support moving/renaming files so I'm reluctant to introduce a `rename` (which would need to download the files locally then re-upload with new name β this is what the official aws-cli does)",
"I'll close this for now, feel free to reopen if it's not well documented enough (or even better, improve it and create a PR :)",
"Hi Julien, I'm also trying to delete some of my shared models, however, transformers-cli s3 rm ____ seems not working for me. All the models are still there after the command is done",
"Which model(s) do you intend to delete @Jiaxin-Pei?",
"@julien-c \r\nHere are they:\r\npedropei/question-intimacy-DEMO\r\npedropei/question-intimacy-demo\r\npedropei/random-demo\r\n\r\n",
"done"
] | 1,579 | 1,604 | 1,580 | NONE | null | ## π Feature
It would be great to have the option of deleting, renaming and adding description to the community models. I saw that there are already some errors in the model names which can probably not be fixed because of this missing functionality.
We should have something like:
transformers-cli delete
transformers-cli rename
transformers-cli update
## Motivation
I think that without this options the list will get quite messy in the future. Also as mentioned already in other issues(#2281 and #2520) we should be able to add how we trained the model, what dataset was used etc.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2561/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2561/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2560 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2560/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2560/comments | https://api.github.com/repos/huggingface/transformers/issues/2560/events | https://github.com/huggingface/transformers/issues/2560 | 551,231,487 | MDU6SXNzdWU1NTEyMzE0ODc= | 2,560 | why this implementation didn't apply residual and layer norm? | {
"login": "reniew",
"id": 32028135,
"node_id": "MDQ6VXNlcjMyMDI4MTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/32028135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reniew",
"html_url": "https://github.com/reniew",
"followers_url": "https://api.github.com/users/reniew/followers",
"following_url": "https://api.github.com/users/reniew/following{/other_user}",
"gists_url": "https://api.github.com/users/reniew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reniew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reniew/subscriptions",
"organizations_url": "https://api.github.com/users/reniew/orgs",
"repos_url": "https://api.github.com/users/reniew/repos",
"events_url": "https://api.github.com/users/reniew/events{/privacy}",
"received_events_url": "https://api.github.com/users/reniew/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The layer normalization happening after the attention is visible [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_albert.py#L255).\r\n\r\nThe `inner_group_num` is used to better understand how many layers are in a specific group. It is set to 1 in all the configurations that google-research has output as they all have a single repeating layer, but it would be necessary to increase the number of inner groups if you pre-trained an ALBERT model that used more than one repeating layer.\r\n\r\nYou can see the `inner_group_num` in the official configuration files, for example the [xxlarge-v3](https://tfhub.dev/google/albert_xxlarge/3)."
] | 1,579 | 1,582 | 1,582 | NONE | null | ## β Questions & Help
In ALBERT implementation code `modeling_albert.py`, i can't find applying skip-connection and layer normalizing after multi-head attention layer. I didn't read about this tequnique.
Is there a special reason about it?
One more, i saw the argument `inner_group_num` in `AlbertLayerGroup` class, is it introduced in original ABERT paper?
Thank you for good implementing.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2560/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2559 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2559/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2559/comments | https://api.github.com/repos/huggingface/transformers/issues/2559/events | https://github.com/huggingface/transformers/issues/2559 | 551,164,146 | MDU6SXNzdWU1NTExNjQxNDY= | 2,559 | Prediction on NER Tensorflow 2 | {
"login": "imayachita",
"id": 3615586,
"node_id": "MDQ6VXNlcjM2MTU1ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3615586?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imayachita",
"html_url": "https://github.com/imayachita",
"followers_url": "https://api.github.com/users/imayachita/followers",
"following_url": "https://api.github.com/users/imayachita/following{/other_user}",
"gists_url": "https://api.github.com/users/imayachita/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imayachita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imayachita/subscriptions",
"organizations_url": "https://api.github.com/users/imayachita/orgs",
"repos_url": "https://api.github.com/users/imayachita/repos",
"events_url": "https://api.github.com/users/imayachita/events{/privacy}",
"received_events_url": "https://api.github.com/users/imayachita/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, the output is wrong. \r\nI think the run_tf_ner.py script has a bug where the labels are off by 1. \r\nAnd the off-by-1 prediction result is sent for evaluation `metrics.classification_report(y_true, y_pred, digits=4)` therefore the evaluation result is wrong too. ",
"Thanks @HuiyingLi! The workaround works."
] | 1,579 | 1,579 | 1,579 | NONE | null | Hi,
I tried running the implementation of NER on Tensorflow 2. I have a problem doing the prediction. Seems like the label to index are off. Here is some examples:
```
SOCCER B-ORG
- B-ORG
JAPAN B-MISC
GET B-ORG
LUCKY B-MISC
WIN B-ORG
, B-ORG
CHINA B-MISC
IN B-ORG
SURPRISE O
DEFEAT B-ORG
. B-ORG
Nadim B-MISC
Ladki B-PER
AL-AIN I-PER
, B-ORG
United I-PER
Arab I-MISC
Emirates I-MISC
1996-12-06 B-ORG
Japan I-PER
began B-ORG
the B-ORG
defence B-ORG
of B-ORG
their B-ORG
Asian O
Cup I-ORG
title B-ORG
with B-ORG
a B-ORG
lucky B-ORG
2-1 B-ORG
win B-ORG
against B-ORG
Syria I-PER
in B-ORG
a B-ORG
Group B-ORG
C I-ORG
championship B-ORG
match B-ORG
on B-ORG
Friday B-ORG
. B-ORG
```
Anyone has the same problem? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2559/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2558 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2558/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2558/comments | https://api.github.com/repos/huggingface/transformers/issues/2558/events | https://github.com/huggingface/transformers/pull/2558 | 551,160,888 | MDExOlB1bGxSZXF1ZXN0MzYzOTMzMTQ0 | 2,558 | solve the exception: [AttributeError: 'bool' object has no attribute 'mean'] | {
"login": "7will10",
"id": 31179081,
"node_id": "MDQ6VXNlcjMxMTc5MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/31179081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7will10",
"html_url": "https://github.com/7will10",
"followers_url": "https://api.github.com/users/7will10/followers",
"following_url": "https://api.github.com/users/7will10/following{/other_user}",
"gists_url": "https://api.github.com/users/7will10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/7will10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/7will10/subscriptions",
"organizations_url": "https://api.github.com/users/7will10/orgs",
"repos_url": "https://api.github.com/users/7will10/repos",
"events_url": "https://api.github.com/users/7will10/events{/privacy}",
"received_events_url": "https://api.github.com/users/7will10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I'd like to replicate the error you had with the `AttributeError`. Could you let me know in which situation you faced this error?"
] | 1,579 | 1,581 | 1,581 | NONE | null | modified method simple_accuracy(),
before:
it's (preds == labels).mean()
This will cause an exception[AttributeError: 'bool' object has no attribute 'mean'],
then after update:
change to accuracy_score(labels,preds),
use this method accuracy_score() in package sklearn.metrics | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2558/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2558",
"html_url": "https://github.com/huggingface/transformers/pull/2558",
"diff_url": "https://github.com/huggingface/transformers/pull/2558.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2558.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2557 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2557/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2557/comments | https://api.github.com/repos/huggingface/transformers/issues/2557/events | https://github.com/huggingface/transformers/pull/2557 | 551,110,027 | MDExOlB1bGxSZXF1ZXN0MzYzODkyMDM3 | 2,557 | Fix BasicTokenizer to respect `never_split` parameters | {
"login": "DeNeutoy",
"id": 16001974,
"node_id": "MDQ6VXNlcjE2MDAxOTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/16001974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeNeutoy",
"html_url": "https://github.com/DeNeutoy",
"followers_url": "https://api.github.com/users/DeNeutoy/followers",
"following_url": "https://api.github.com/users/DeNeutoy/following{/other_user}",
"gists_url": "https://api.github.com/users/DeNeutoy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeNeutoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeNeutoy/subscriptions",
"organizations_url": "https://api.github.com/users/DeNeutoy/orgs",
"repos_url": "https://api.github.com/users/DeNeutoy/repos",
"events_url": "https://api.github.com/users/DeNeutoy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeNeutoy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Not sure how to fix that last CI build, seems unrelated?",
"Unrelated Heisenbug, relaunched the CI",
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2557?src=pr&el=h1) Report\n> Merging [#2557](https://codecov.io/gh/huggingface/transformers/pull/2557?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/23a2cea8cb95864ddb7e7e80e126e4f083640882?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2557?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2557 +/- ##\n==========================================\n+ Coverage 74.6% 74.61% +<.01% \n==========================================\n Files 87 87 \n Lines 14802 14802 \n==========================================\n+ Hits 11043 11044 +1 \n+ Misses 3759 3758 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2557?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2557/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.62% <100%> (+0.42%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2557?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2557?src=pr&el=footer). Last update [23a2cea...c0afe26](https://codecov.io/gh/huggingface/transformers/pull/2557?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thank you sir!"
] | 1,579 | 1,579 | 1,579 | CONTRIBUTOR | null | `never_split` was not being passed to `_split_on_punc`, causing special tokens to be split apart. Failing test (in first commit) demonstrates the problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2557",
"html_url": "https://github.com/huggingface/transformers/pull/2557",
"diff_url": "https://github.com/huggingface/transformers/pull/2557.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2557.patch",
"merged_at": 1579291077000
} |
https://api.github.com/repos/huggingface/transformers/issues/2556 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2556/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2556/comments | https://api.github.com/repos/huggingface/transformers/issues/2556/events | https://github.com/huggingface/transformers/issues/2556 | 551,020,346 | MDU6SXNzdWU1NTEwMjAzNDY= | 2,556 | Quantized model not preserved when imported using from_pretrained() | {
"login": "trevorpfiz",
"id": 24904780,
"node_id": "MDQ6VXNlcjI0OTA0Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/24904780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trevorpfiz",
"html_url": "https://github.com/trevorpfiz",
"followers_url": "https://api.github.com/users/trevorpfiz/followers",
"following_url": "https://api.github.com/users/trevorpfiz/following{/other_user}",
"gists_url": "https://api.github.com/users/trevorpfiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trevorpfiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trevorpfiz/subscriptions",
"organizations_url": "https://api.github.com/users/trevorpfiz/orgs",
"repos_url": "https://api.github.com/users/trevorpfiz/repos",
"events_url": "https://api.github.com/users/trevorpfiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/trevorpfiz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@LysandreJik any ideas on this? I am itching to use a quantized BERT model in production, but it does not work when loaded in :(",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @ElektrikSpark , I am also facing similar issue, were you able to resolve this? \r\nI am unable to get good results while loading the quantized bert model. "
] | 1,579 | 1,594 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
When I import the saved quantized model using `from_pretrained()`, the model's size is inflated to the pre-quantized version. The model also takes a significant performance hit, both accuracy and time, from the original quantized model.
The tasks I am working on is:
This is the official Pytorch notebook from: https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html
https://colab.research.google.com/github/pytorch/tutorials/blob/gh-pages/_downloads/dynamic_quantization_bert_tutorial.ipynb
## To Reproduce
Steps to reproduce the behavior:
1. Run through the notebook fully.
2. Load the quantized model with from_pretrained()
3. Run `3.1 Check the model size` and `3.2 Evaluate the inference accuracy and time`
You will see that the size is of the pre-quantized model (>400 MB), and the accuracy AND time take a huge hit from the original quantized model.
## Expected behavior
1. The quantized model can be loaded in at its original size of <200 MB
2. The quantized model preserves its accuracy when loaded in
3. The quantized model preserves its time to run when loaded in
## Environment
Colab
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2556/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2555 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2555/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2555/comments | https://api.github.com/repos/huggingface/transformers/issues/2555/events | https://github.com/huggingface/transformers/pull/2555 | 551,016,011 | MDExOlB1bGxSZXF1ZXN0MzYzODE0MDg1 | 2,555 | Fix output name | {
"login": "glicerico",
"id": 23503930,
"node_id": "MDQ6VXNlcjIzNTAzOTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/23503930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glicerico",
"html_url": "https://github.com/glicerico",
"followers_url": "https://api.github.com/users/glicerico/followers",
"following_url": "https://api.github.com/users/glicerico/following{/other_user}",
"gists_url": "https://api.github.com/users/glicerico/gists{/gist_id}",
"starred_url": "https://api.github.com/users/glicerico/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/glicerico/subscriptions",
"organizations_url": "https://api.github.com/users/glicerico/orgs",
"repos_url": "https://api.github.com/users/glicerico/repos",
"events_url": "https://api.github.com/users/glicerico/events{/privacy}",
"received_events_url": "https://api.github.com/users/glicerico/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2555?src=pr&el=h1) Report\n> Merging [#2555](https://codecov.io/gh/huggingface/transformers/pull/2555?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e2c28a14a3d171e8c4d3838429abb1d69456df5?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2555?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2555 +/- ##\n=======================================\n Coverage 74.66% 74.66% \n=======================================\n Files 87 87 \n Lines 14802 14802 \n=======================================\n Hits 11052 11052 \n Misses 3750 3750\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2555?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2555?src=pr&el=footer). Last update [6e2c28a...e268f1c](https://codecov.io/gh/huggingface/transformers/pull/2555?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | Output variable name `all_hidden_states` found in README is inconsistent with documentation's `hidden_states`: https://huggingface.co/transformers/model_doc/bert.html#bertmodel | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2555/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2555",
"html_url": "https://github.com/huggingface/transformers/pull/2555",
"diff_url": "https://github.com/huggingface/transformers/pull/2555.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2555.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2554 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2554/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2554/comments | https://api.github.com/repos/huggingface/transformers/issues/2554/events | https://github.com/huggingface/transformers/issues/2554 | 551,013,629 | MDU6SXNzdWU1NTEwMTM2Mjk= | 2,554 | CTRL tokenizer has no special tokens to indicate EOS | {
"login": "erickrf",
"id": 294483,
"node_id": "MDQ6VXNlcjI5NDQ4Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/294483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erickrf",
"html_url": "https://github.com/erickrf",
"followers_url": "https://api.github.com/users/erickrf/followers",
"following_url": "https://api.github.com/users/erickrf/following{/other_user}",
"gists_url": "https://api.github.com/users/erickrf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erickrf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erickrf/subscriptions",
"organizations_url": "https://api.github.com/users/erickrf/orgs",
"repos_url": "https://api.github.com/users/erickrf/repos",
"events_url": "https://api.github.com/users/erickrf/events{/privacy}",
"received_events_url": "https://api.github.com/users/erickrf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,582 | 1,582 | CONTRIBUTOR | null | ## π Bug
The `generate` method from `PreTrainedModel` by default uses index 0 as EOS. This is a problem with CTRL, because its tokenizer has the word `the` mapped to this id.
Actually the CTRL has no special tokens besides UNK:
```
tokenizer = CTRLTokenizer.from_pretrained('ctrl')
tokenizer.special_tokens_map
# {'unk_token': '<unk>'}
tokenizer.convert_ids_to_tokens([0])
# ['the']
```
I believe the CTRL tokenizer should have some special token to use as EOS or PAD, as the other models do.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2554/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2553 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2553/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2553/comments | https://api.github.com/repos/huggingface/transformers/issues/2553/events | https://github.com/huggingface/transformers/issues/2553 | 551,003,765 | MDU6SXNzdWU1NTEwMDM3NjU= | 2,553 | Model not learning when using albert-base-v2 -- ALBERT | {
"login": "trevorpfiz",
"id": 24904780,
"node_id": "MDQ6VXNlcjI0OTA0Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/24904780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/trevorpfiz",
"html_url": "https://github.com/trevorpfiz",
"followers_url": "https://api.github.com/users/trevorpfiz/followers",
"following_url": "https://api.github.com/users/trevorpfiz/following{/other_user}",
"gists_url": "https://api.github.com/users/trevorpfiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/trevorpfiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trevorpfiz/subscriptions",
"organizations_url": "https://api.github.com/users/trevorpfiz/orgs",
"repos_url": "https://api.github.com/users/trevorpfiz/repos",
"events_url": "https://api.github.com/users/trevorpfiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/trevorpfiz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The interesting things is that why do u use another modelβs tokenizer to\nprocess data .\n\nDo u know each model tokenizer is a map function which map ID to token ?\nSo for each model , the same word for them is mapping to different iD . So\nit will not learning . Do u read paper ?\n\nOn Fri, Jan 17, 2020 at 03:19 Trevor Pfizenmaier <[email protected]>\nwrote:\n\n> π Bug\n>\n> Model I am using (Bert, XLNet....): AlbertForSequenceClassification\n>\n> Language I am using the model on (English, Chinese....): English\n>\n> The problem arise when using:\n> When I use albert-base-v2 instead of albert-base-v1 for the model and\n> tokenizer, the model does not learn during training.\n>\n> The tasks I am working on is:\n> https://colab.research.google.com/drive/1Y4o3jh3ZH70tl6mCd76vz_IxX23biCPP\n> To Reproduce\n>\n> Steps to reproduce the behavior:\n>\n> 1. Open the colab notebook I have referenced above\n> 2. Change the word Bert to Albert in necessary places\n> 3. Run 4.3. Training Loop, you will see that the model does not learn\n>\n> Expected behavior\n>\n> I would expect the model to learn on the given task.\n> Environment\n>\n> Colab\n> Additional context\n>\n> I can not pinpoint exactly what the problem is, but it almost seems like\n> the data that is being fed to the model is not understood. If I use the\n> AlbertTokenizer for a BertForSequenceClassification model, which the BERT\n> model would obviously not understand, the same behavior is exhibited.\n>\n> β\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2553?email_source=notifications&email_token=AIEAE4DURWPRQ22RW6PWX5DQ6CXMXA5CNFSM4KHZJIJ2YY3PNVWWK3TUL52HS4DFUVEXG43VMWVGG33NNVSW45C7NFSM4IGXUZ2Q>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIEAE4DTEA5RJ36IGEJQVGTQ6CXMXANCNFSM4KHZJIJQ>\n> .\n>\n",
"I know that you can not use a different tokenizer, as I said \"which the BERT model would obviously not understand\". I did this to extend my understanding of the problem, which yields the exact same behavior (loss does not go down/accuracy does not go down/model does not learn) as using `albert-base-v2` + `AlbertTokenizer` with `albert-base-v2` + `AlbertForSequenceClassification`.",
"I'm getting similar bad results. seems like ALBERT v2 isn't converging on the hyperparameters published in the original paper.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Has anyone figured this out? I've attempted the same thing (in tensorflow) by swapping out bert model & tokenizer for Albert V1 and V2 and no learning is done. Bert works just fine, but Albert is a no-go.\r\n",
"@1337-Pete, current version of ALBERT V1 and V2 work well but are very sensitive to the training data and hyperparameters. If you use the hyperparameters from the paper you will get similar results for both models on all GLUE tasks."
] | 1,579 | 1,592 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): AlbertForSequenceClassification
Language I am using the model on (English, Chinese....): English
The problem arise when using:
When I use `albert-base-v2` instead of `albert-base-v1` for the model and tokenizer, the model does not learn during training.
The tasks I am working on is:
https://colab.research.google.com/drive/1Y4o3jh3ZH70tl6mCd76vz_IxX23biCPP
## To Reproduce
Steps to reproduce the behavior:
1. Open the colab notebook I have referenced above
2. Change the word Bert to Albert in necessary places
3. Run 4.3. Training Loop, you will see that the model does not learn
## Expected behavior
I would expect the model to learn on the given task.
## Environment
Colab
## Additional context
I can not pinpoint exactly what the problem is, but it almost seems like the data that is being fed to the model is not understood. If I use the AlbertTokenizer for a BertForSequenceClassification model, which the BERT model would obviously not understand, the same behavior is exhibited.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2553/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2553/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2552 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2552/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2552/comments | https://api.github.com/repos/huggingface/transformers/issues/2552/events | https://github.com/huggingface/transformers/pull/2552 | 550,975,266 | MDExOlB1bGxSZXF1ZXN0MzYzNzgxOTEy | 2,552 | fix #2549 | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"=(",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, this should have been fixed with the release of `v3.0.0`. Thanks a lot for your contribution!"
] | 1,579 | 1,594 | 1,594 | NONE | null | closes #2549
proposed solution for unsupported operand type error in tokenizer.batch_encode_plus | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2552/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2552",
"html_url": "https://github.com/huggingface/transformers/pull/2552",
"diff_url": "https://github.com/huggingface/transformers/pull/2552.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2552.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2551 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2551/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2551/comments | https://api.github.com/repos/huggingface/transformers/issues/2551/events | https://github.com/huggingface/transformers/issues/2551 | 550,971,592 | MDU6SXNzdWU1NTA5NzE1OTI= | 2,551 | EnvironmentError OSError: Couldn't reach server | {
"login": "baggyg",
"id": 4569081,
"node_id": "MDQ6VXNlcjQ1NjkwODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4569081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/baggyg",
"html_url": "https://github.com/baggyg",
"followers_url": "https://api.github.com/users/baggyg/followers",
"following_url": "https://api.github.com/users/baggyg/following{/other_user}",
"gists_url": "https://api.github.com/users/baggyg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/baggyg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/baggyg/subscriptions",
"organizations_url": "https://api.github.com/users/baggyg/orgs",
"repos_url": "https://api.github.com/users/baggyg/repos",
"events_url": "https://api.github.com/users/baggyg/events{/privacy}",
"received_events_url": "https://api.github.com/users/baggyg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you try again? It seems that the server is reachable now. Of course you must be connected to the Internet.",
"The server was reachable. I try the same URL in my browser at the time of doing it and it loaded fine. Its just via python / transformers that he problem occurs (I've tried everyday for 3 days now). Could this be something to do with file locks? I get a similar message above. I will try again and post the full error message",
"Full Error Message:\r\n```\r\n01/17/2020 15:52:28 - WARNING - lib.squad - Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: True\r\n01/17/2020 15:52:29 - INFO - filelock - Lock 1473387147720 acquired on TensorflowQA/cache2\\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.bf3b9ea126d8c0001ee8a1e8b92229871d06d36d8808208cc2449280da87785c.lock\r\n01/17/2020 15:52:29 - INFO - transformers.file_utils - https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json not found in cache or force_download set to True, downloading to D:\\GBKaggleChallenges\\NLP\\TensorflowQA\\cache2\\tmpjccrll9f\r\nHBox(children=(FloatProgress(value=0.0, description='Downloading', max=313.0, style=ProgressStyle(description_β¦\r\n01/17/2020 15:52:29 - INFO - transformers.file_utils - storing https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json in cache at TensorflowQA/cache2\\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.bf3b9ea126d8c0001ee8a1e8b92229871d06d36d8808208cc2449280da87785c\r\n01/17/2020 15:52:29 - INFO - filelock - Lock 1473387147720 released on TensorflowQA/cache2\\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.bf3b9ea126d8c0001ee8a1e8b92229871d06d36d8808208cc2449280da87785c.lock\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"d:\\cudaenv\\lib\\site-packages\\transformers\\configuration_utils.py\", line 179, in from_pretrained\r\n resume_download=resume_download,\r\n\r\n File \"d:\\cudaenv\\lib\\site-packages\\transformers\\file_utils.py\", line 212, in cached_path\r\n user_agent=user_agent,\r\n\r\n File \"d:\\cudaenv\\lib\\site-packages\\transformers\\file_utils.py\", line 392, in get_from_cache\r\n os.rename(temp_file.name, cache_path)\r\n\r\nPermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'D:\\\\GBKaggleChallenges\\\\NLP\\\\TensorflowQA\\\\cache2\\\\tmpjccrll9f' -> 'TensorflowQA/cache2\\\\4dad0251492946e18ac39290fcfe91b89d370fee250efe9521476438fe8ca185.bf3b9ea126d8c0001ee8a1e8b92229871d06d36d8808208cc2449280da87785c'\r\n\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-e8fb1981d786>\", line 18, in <module>\r\n squad.main(args)\r\n\r\n File \"D:\\GBKaggleChallenges\\NLP\\lib\\squad.py\", line 743, in main\r\n cache_dir=args.cache_dir if args.cache_dir else None,\r\n\r\n File \"d:\\cudaenv\\lib\\site-packages\\transformers\\configuration_utils.py\", line 200, in from_pretrained\r\n raise EnvironmentError(msg)\r\n\r\nOSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json' to download pretrained model configuration file.```",
"You're on Windows right? Might there be something wrong with the paths?",
"> You're on Windows right? Might there be something wrong with the paths?\r\n\r\nI can see that the D:\\\\GBKaggleChallenges\\\\NLP\\\\TensorflowQA\\\\cache2\\\\tmpjccrll9f\r\nis created (suggesting the paths are fine) and in fact contains the following, which makes the message even more confusing:\r\n\r\n```\r\n{\r\n \"attention_probs_dropout_prob\": 0.1,\r\n \"hidden_act\": \"gelu\",\r\n \"hidden_dropout_prob\": 0.1,\r\n \"hidden_size\": 768,\r\n \"initializer_range\": 0.02,\r\n \"intermediate_size\": 3072,\r\n \"max_position_embeddings\": 512,\r\n \"num_attention_heads\": 12,\r\n \"num_hidden_layers\": 12,\r\n \"type_vocab_size\": 2,\r\n \"vocab_size\": 30522\r\n}\r\n```",
"I just installed from source and has now gotten passed that particular error so I believe this was something that was fixed within the last week. ",
"@BramVanroy I have this issue from time to time. I think it's a network timeout issue when the connection is not stable. Unfortunately, it happens.\r\n It would be nice if this configuration can be cached too, there is no need to download the same config file each time.",
"> @BramVanroy I have this issue from time to time. I think it's a network timeout issue when the connection is not stable. Unfortunately, it happens.\r\n> It would be nice if this configuration can be cached too, there is no need to download the same config file each time.\r\n\r\nYou can restrict your script to using only local files by using e.g.\r\n\r\n```python\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased', local_files_only=True)\r\n```\r\n\r\nNote that this will only work if the required files were already downloaded once before."
] | 1,579 | 1,584 | 1,579 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): Bert
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [X] the official example scripts: (give details)
* [ ] my own modified scripts: (give details)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details)
## To Reproduce
Steps to reproduce the behavior:
1. Run the run_squad.py scrtipt with standard settings (as per the example page)
Receive the following:
```
File "d:\cudaenv\lib\site-packages\transformers\configuration_utils.py", line 200, in from_pretrained
raise EnvironmentError(msg)
OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json' to download pretrained model configuration file.```
```
## Expected behavior
Example Runs as Normal after downloading pre-trained model
## Environment
* OS: Windows 10 x64
* Python version: 3.7.6
* PyTorch version: 1.4
* PyTorch Transformers version (or branch): Latest
* Using GPU ? Yes
* Distributed or parallel setup ? No
* Any other relevant information: Completely clean install / I have internet connection
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2551/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2550 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2550/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2550/comments | https://api.github.com/repos/huggingface/transformers/issues/2550/events | https://github.com/huggingface/transformers/issues/2550 | 550,957,418 | MDU6SXNzdWU1NTA5NTc0MTg= | 2,550 | fast gpt2 inference | {
"login": "rajarsheem",
"id": 6441313,
"node_id": "MDQ6VXNlcjY0NDEzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajarsheem",
"html_url": "https://github.com/rajarsheem",
"followers_url": "https://api.github.com/users/rajarsheem/followers",
"following_url": "https://api.github.com/users/rajarsheem/following{/other_user}",
"gists_url": "https://api.github.com/users/rajarsheem/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajarsheem/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajarsheem/subscriptions",
"organizations_url": "https://api.github.com/users/rajarsheem/orgs",
"repos_url": "https://api.github.com/users/rajarsheem/repos",
"events_url": "https://api.github.com/users/rajarsheem/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajarsheem/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have given some thought to using TorchScript but since my input sequence length changes each time, the easier tracing approach won't work.\r\n\r\nONNX also faced the same problem.",
"Hi rajarsheem,\r\nCan you please share your code that doing the batch inference with variable-length sequences?\r\n\r\nThanks",
"Is gpt2 model traceable? ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,589 | 1,589 | NONE | null | I have a fine-tuned ```GPT2LMHeadModel``` (gpt2-medium) which I am using to run inference on large data (>60M sequences) offline. At each iteration, my input is a batch of 30 variable-length sequences which gets padded according to the max length of the batch. My current speed is around 8 secs/iter and input sequences have around 150 tokens.
I am using pytorch's ```BucketIterator``` to group sequences of similar length and minimize padding. I am using fp-16. These are increasing the throughput. The GPU I am using is Tesla V100.
Can you please suggest what all other optimizations that I can do to increase the speed? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2550/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2550/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2549 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2549/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2549/comments | https://api.github.com/repos/huggingface/transformers/issues/2549/events | https://github.com/huggingface/transformers/issues/2549 | 550,954,140 | MDU6SXNzdWU1NTA5NTQxNDA= | 2,549 | unsupported operand type error in tokenizer.batch_encode_plus | {
"login": "Guitaricet",
"id": 2821124,
"node_id": "MDQ6VXNlcjI4MjExMjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Guitaricet",
"html_url": "https://github.com/Guitaricet",
"followers_url": "https://api.github.com/users/Guitaricet/followers",
"following_url": "https://api.github.com/users/Guitaricet/following{/other_user}",
"gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions",
"organizations_url": "https://api.github.com/users/Guitaricet/orgs",
"repos_url": "https://api.github.com/users/Guitaricet/repos",
"events_url": "https://api.github.com/users/Guitaricet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Guitaricet/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I am fixing this issue and just understood that `return_attention_masks` is supposed to work only if `return_tensors is not None`. But it is not mentioned in the docstring nor raises an error. Also, in the case of `is_tf_available` and `return_tensors == 'pt'` current code would return tensorflow maks.\r\nI'd suggest to increase the identation of this block\r\n\r\n```python\r\n# encoder_attention_mask requires 1 for real token, 0 for padding, just invert value\r\nif return_attention_masks:\r\n if is_tf_available():\r\n batch_outputs[\"attention_mask\"] = tf.abs(batch_outputs[\"attention_mask\"] - 1)\r\n else:\r\n batch_outputs[\"attention_mask\"] = torch.abs(batch_outputs[\"attention_mask\"] - 1)\r\n```\r\n\r\nand to raise ValueError if `return_attention_masks` and `return_tensors is None`",
"This [PR](https://github.com/huggingface/transformers/pull/2552) is my proposed fix of the issue.\r\n",
"It seems that tf2.0 can trigger `is_tf_available()`. I've documented the effects of tf version on this functionality [here](https://colab.research.google.com/drive/1a4qmiiZpPXu4mhscJkN_Q0tbd_-HR4jY)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): any model
Language I am using the model on (English, Chinese....): any language
The problem arise when using: tokenizer object
The tasks I am working on is: my own tasks
## To Reproduce
Steps to reproduce the behavior:
use `tokenizer.batch_encode_plus(batch_of_strings, return_attention_masks=True)`
```python
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
tokenizer.batch_encode_plus(['this text is longer than the next', 'short text'], return_attention_masks=True)
```
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-31-0f744584ae65> in <module>
----> 1 tokenizer.batch_encode_plus(['this text is longer than the next', 'short text'], return_attention_masks=True)
/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, max_length, stride, truncation_strategy, return_tensors, return_input_lengths, return_attention_masks, **kwargs)
971 if return_attention_masks:
972 if is_tf_available():
--> 973 batch_outputs['attention_mask'] = tf.abs(batch_outputs['attention_mask'] - 1)
974 else:
975 batch_outputs['attention_mask'] = torch.abs(batch_outputs['attention_mask'] - 1)
TypeError: unsupported operand type(s) for -: 'list' and 'int'
```
## Expected behavior
tokenizer.batch_encode_plus does not crash and returns valid attention masks
## Environment
* OS: Ubuntu 18.04
* Python version: Python 3.6
* PyTorch version: 1.3.0
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU ? No
* Distributed or parallel setup ? No
* Any other relevant information:
## Additional context
The problem is in the lines
```python
batch_outputs['attention_mask'] = tf.abs(batch_outputs['attention_mask'] - 1)
```
and
```python
batch_outputs['attention_mask'] = torch.abs(batch_outputs['attention_mask'] - 1)
```
where we assume that batch_outputs['attention_mask'] is a vectorized object, however it is just a list of lists
```python
batch_outputs["attention_mask"] = [[0] * len(v) for v in batch_outputs["input_ids"]]
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2549/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2548 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2548/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2548/comments | https://api.github.com/repos/huggingface/transformers/issues/2548/events | https://github.com/huggingface/transformers/issues/2548 | 550,920,835 | MDU6SXNzdWU1NTA5MjA4MzU= | 2,548 | SQuAD convert_examples_to_features skipping doc tokens when they exceed max_seq_length | {
"login": "ofirzaf",
"id": 18296312,
"node_id": "MDQ6VXNlcjE4Mjk2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ofirzaf",
"html_url": "https://github.com/ofirzaf",
"followers_url": "https://api.github.com/users/ofirzaf/followers",
"following_url": "https://api.github.com/users/ofirzaf/following{/other_user}",
"gists_url": "https://api.github.com/users/ofirzaf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ofirzaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ofirzaf/subscriptions",
"organizations_url": "https://api.github.com/users/ofirzaf/orgs",
"repos_url": "https://api.github.com/users/ofirzaf/repos",
"events_url": "https://api.github.com/users/ofirzaf/events{/privacy}",
"received_events_url": "https://api.github.com/users/ofirzaf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Indeed, this looks like a bug, thank you for opening an issue. I'll take a look at it.",
"This issue stems from the two arguments: `max_seq_length=128` and `doc_stride=128`.\r\n\r\nWould you mind telling me the expected behavior when putting a doc stride as big as the maximum sequence length? Since the sequence length considers both the document and the question, I don't see a reason for putting such a high document stride: it is larger than the maximum document length that will be in the sequence, and therefore breaks the stride.\r\n\r\nAccording to your use-case we'll do our best to accommodate it with our script.",
"You are correct, I would expect a warning or an error in this case.\r\nAnyway, even if the `doc_stride` was less than the `max_seq_len` there will still be a possibility that doc tokens will be skipped in the current implementation so I think a warning should occur. I would expect that `doc_tokens` will never be skipped but that's me",
"Indeed, I agree with you that raising a warning, in this case, would be best. Thanks for your feedback!",
"I've added a warning in 6e2c28a",
"I think the solution is to specify the question token max length and\npadding the question part to the max question token length, then you can\njust set the max doc_stride is max seq_length - max question length . It\nwill not loss any information\n\nOn Fri, Jan 17, 2020 at 03:00 Lysandre Debut <[email protected]>\nwrote:\n\n> I've added a warning in 6e2c28a\n> <https://github.com/huggingface/transformers/commit/6e2c28a14a3d171e8c4d3838429abb1d69456df5>\n>\n> β\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/2548?email_source=notifications&email_token=AIEAE4FWVAXHHAENIDH2JETQ6CVFNA5CNFSM4KHW5362YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEJFFA4Y#issuecomment-575295603>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIEAE4HZCAYY5EHQTOLRQEDQ6CVFNANCNFSM4KHW536Q>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | CONTRIBUTOR | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): BERT
Language I am using the model on (English, Chinese....): English
Using Transformers v2.3.0 installed from pypi
The problem arise when using:
transformers/data/processors/squad.py + BertTokenizer
The tasks I am working on is:
* SQuADv1.1
## To Reproduce
Steps to reproduce the behavior:
run:
```SQUAD_DIR=$HOME/data/SQUAD
export CUDA_VISIBLE_DEVICES=0
python run_squad.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--do_eval \
--do_lower_case \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--per_gpu_eval_batch_size 1 \
--max_seq_length 128 \
--doc_stride 128 \
--output_dir $HOME/tmp/debug_squad \
--overwrite_output_dir
```
## Expected behavior
Looking at the first example:
Q:"Which NFL team represented the AFC at Super Bowl 50?"
Doc: "Super Bowl 50 was an American football...")
After converting the example to features I see that the question and doc lengths after tokenization are 11 and 157 and in addition each feature need 3 extra tokens for the [CLS], [SEP] tokens. So I would expect the first feature to be:
`[CLS] [11 Q tokens][SEP][114 Doc tokens][SEP]` = total of 128 tokens
and the second feature to be:
`[CLS][11 Q tokens][SEP][43 Doc tokens][SEP][Padding]` = total of 57 tokens without padding.
Currently the implementation of squad and the tokenizer skips doc_tokens[115:128] as if the first 128 doc tokens appeared in the first features:
the second feature is:
`[CLS][11 Q tokens][SEQ][last 29 Doc tokens][SEP][Padding]` = total of 43 tokens
This bug happens in all the examples, the stride is not done correctly and some of doc tokens are skipped.
## Environment
* OS: 16.04.6 LTS
* Python version: 3.6.8
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): 2.3.0
* Using GPU 1
* Distributed or parallel setup no
* Any other relevant information:
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2548/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2547 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2547/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2547/comments | https://api.github.com/repos/huggingface/transformers/issues/2547/events | https://github.com/huggingface/transformers/issues/2547 | 550,809,089 | MDU6SXNzdWU1NTA4MDkwODk= | 2,547 | AlbertDoublehHeadsModel | {
"login": "hasnain2808",
"id": 28212972,
"node_id": "MDQ6VXNlcjI4MjEyOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/28212972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hasnain2808",
"html_url": "https://github.com/hasnain2808",
"followers_url": "https://api.github.com/users/hasnain2808/followers",
"following_url": "https://api.github.com/users/hasnain2808/following{/other_user}",
"gists_url": "https://api.github.com/users/hasnain2808/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hasnain2808/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hasnain2808/subscriptions",
"organizations_url": "https://api.github.com/users/hasnain2808/orgs",
"repos_url": "https://api.github.com/users/hasnain2808/repos",
"events_url": "https://api.github.com/users/hasnain2808/events{/privacy}",
"received_events_url": "https://api.github.com/users/hasnain2808/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | # πNew model addition
## Model description
Like we have OpenAIGPTDoubleHeadsModel. I actually want to know if someone is already working on similar model for Albert
If not that with some help I would want to contribute towards it
<!-- Important information -->
## Open Source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them)
## Additional context
<!-- Add any other context about the problem here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2547/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2546 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2546/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2546/comments | https://api.github.com/repos/huggingface/transformers/issues/2546/events | https://github.com/huggingface/transformers/issues/2546 | 550,750,245 | MDU6SXNzdWU1NTA3NTAyNDU= | 2,546 | Unable to generate ALBERT embeddings of size 128 | {
"login": "MaheshChandrra",
"id": 13826929,
"node_id": "MDQ6VXNlcjEzODI2OTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/13826929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaheshChandrra",
"html_url": "https://github.com/MaheshChandrra",
"followers_url": "https://api.github.com/users/MaheshChandrra/followers",
"following_url": "https://api.github.com/users/MaheshChandrra/following{/other_user}",
"gists_url": "https://api.github.com/users/MaheshChandrra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaheshChandrra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaheshChandrra/subscriptions",
"organizations_url": "https://api.github.com/users/MaheshChandrra/orgs",
"repos_url": "https://api.github.com/users/MaheshChandrra/repos",
"events_url": "https://api.github.com/users/MaheshChandrra/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaheshChandrra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Please format your post with [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks).\r\n\r\nThis seems like a very general question, where you want to change the size of a dimension of an output tensor. There are different approaches to this. If you want to do this as part of a downstream task, you can simply work with a linear layer from 768 to 128. If you are just extracting features from the model and want to reduce the dimensionality, you can do pooling, typically max or mean pooling. Since this is quite a general question, I suggest that you make a question on [Stack Overflow](https://stackoverflow.com/) instead, and close this question since it's not specific to `transformers`.",
"Sure,Thanks"
] | 1,579 | 1,579 | 1,579 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
Hi Team Hugging face,
Due to memory issues I wanted to migrate from BERT to ALBERT,tried the model present in the transformers,but I'm unable to generate the embedding's of size 128,all I get in outputs is 768 dimension embedding's,can you please let me know how do I get a 128 dimension embedding for any input text passed to ALBERT model.Below is the sample code,output_ids contain the embedding's at each layer.
```
from transformers import AlbertTokenizer, AlbertModel
albert_model=AlbertModel.from_pretrained('albert-base-v2',output_hidden_states=True,output_attentions=True)
albert_tokenizer=AlbertTokenizer.from_pretrained('albert-base-v2')
input_ids = tf.constant(tokenizer.encode("Hugging face is great"))[None, :] # Batch size 1
outputs = model(input_ids)
##Displaying ALBERT config
albert_model
AlbertModel(
(embeddings): AlbertEmbeddings(
(word_embeddings): Embedding(30000, 128, padding_idx=0)
(position_embeddings): Embedding(512, 128)
(token_type_embeddings): Embedding(2, 128)
(LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): AlbertTransformer(
(embedding_hidden_mapping_in): Linear(in_features=128, out_features=768, bias=True)
(albert_layer_groups): ModuleList(
(0): AlbertLayerGroup(
(albert_layers): ModuleList(
(0): AlbertLayer(
(full_layer_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(attention): AlbertAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
(ffn): Linear(in_features=768, out_features=3072, bias=True)
(ffn_output): Linear(in_features=3072, out_features=768, bias=True)
)
)
)
)
)
(pooler): Linear(in_features=768, out_features=768, bias=True)
(pooler_activation): Tanh()
)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2546/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2545 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2545/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2545/comments | https://api.github.com/repos/huggingface/transformers/issues/2545/events | https://github.com/huggingface/transformers/pull/2545 | 550,707,167 | MDExOlB1bGxSZXF1ZXN0MzYzNTYzNDk0 | 2,545 | modified method simple_accuracy(), solve the exception | {
"login": "7will10",
"id": 31179081,
"node_id": "MDQ6VXNlcjMxMTc5MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/31179081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7will10",
"html_url": "https://github.com/7will10",
"followers_url": "https://api.github.com/users/7will10/followers",
"following_url": "https://api.github.com/users/7will10/following{/other_user}",
"gists_url": "https://api.github.com/users/7will10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/7will10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/7will10/subscriptions",
"organizations_url": "https://api.github.com/users/7will10/orgs",
"repos_url": "https://api.github.com/users/7will10/repos",
"events_url": "https://api.github.com/users/7will10/events{/privacy}",
"received_events_url": "https://api.github.com/users/7will10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2545?src=pr&el=h1) Report\n> Merging [#2545](https://codecov.io/gh/huggingface/transformers/pull/2545?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7833dfccac0d7d74e12d2b2be1f6caa6e895ca73?src=pr&el=desc) will **increase** coverage by `29.34%`.\n> The diff coverage is `89.55%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2545?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2545 +/- ##\n==========================================\n+ Coverage 45.25% 74.6% +29.34% \n==========================================\n Files 87 87 \n Lines 14800 14802 +2 \n==========================================\n+ Hits 6698 11043 +4345 \n+ Misses 8102 3759 -4343\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2545?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_bert\\_japanese.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydF9qYXBhbmVzZS5weQ==) | `67.46% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `36.76% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_mmbt.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tbWJ0LnB5) | `17.6% <0%> (+17.6%)` | :arrow_up: |\n| [src/transformers/tokenization\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `35.71% <0%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_encoder\\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `25.3% <0%> (+25.3%)` | :arrow_up: |\n| [src/transformers/data/metrics/\\_\\_init\\_\\_.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL21ldHJpY3MvX19pbml0X18ucHk=) | `26.66% <0%> (-1.25%)` | :arrow_down: |\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `93.93% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.72% <100%> (+87.72%)` | :arrow_up: |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| ... and [75 more](https://codecov.io/gh/huggingface/transformers/pull/2545/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2545?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2545?src=pr&el=footer). Last update [7833dfc...0e778f9](https://codecov.io/gh/huggingface/transformers/pull/2545?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,579 | 1,579 | 1,579 | NONE | null | modified method simple_accuracy(),
before:
it's (preds == labels).mean()
This will cause an exception[AttributeError: 'bool' object has no attribute 'mean'],
then after update:
change to accuracy_score(labels,preds),
use this method accuracy_score() in package sklearn.metrics | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2545/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2545",
"html_url": "https://github.com/huggingface/transformers/pull/2545",
"diff_url": "https://github.com/huggingface/transformers/pull/2545.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2545.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2544 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2544/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2544/comments | https://api.github.com/repos/huggingface/transformers/issues/2544/events | https://github.com/huggingface/transformers/pull/2544 | 550,699,946 | MDExOlB1bGxSZXF1ZXN0MzYzNTU3NjY2 | 2,544 | modified method simple_accuracy() | {
"login": "7will10",
"id": 31179081,
"node_id": "MDQ6VXNlcjMxMTc5MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/31179081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7will10",
"html_url": "https://github.com/7will10",
"followers_url": "https://api.github.com/users/7will10/followers",
"following_url": "https://api.github.com/users/7will10/following{/other_user}",
"gists_url": "https://api.github.com/users/7will10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/7will10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/7will10/subscriptions",
"organizations_url": "https://api.github.com/users/7will10/orgs",
"repos_url": "https://api.github.com/users/7will10/repos",
"events_url": "https://api.github.com/users/7will10/events{/privacy}",
"received_events_url": "https://api.github.com/users/7will10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,579 | 1,579 | NONE | null | modified method simple_accuracy(),
before:
it's (preds == labels).mean()
This will cause an exception[AttributeError: 'bool' object has no attribute 'mean'],
then after update:
change to accuracy_score(labels,preds),
use this method accuracy_score() in package sklearn.metrics | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2544/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2544",
"html_url": "https://github.com/huggingface/transformers/pull/2544",
"diff_url": "https://github.com/huggingface/transformers/pull/2544.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2544.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2543 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2543/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2543/comments | https://api.github.com/repos/huggingface/transformers/issues/2543/events | https://github.com/huggingface/transformers/pull/2543 | 550,694,323 | MDExOlB1bGxSZXF1ZXN0MzYzNTUzMDc0 | 2,543 | modified method simple_accuracy() | {
"login": "7will10",
"id": 31179081,
"node_id": "MDQ6VXNlcjMxMTc5MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/31179081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7will10",
"html_url": "https://github.com/7will10",
"followers_url": "https://api.github.com/users/7will10/followers",
"following_url": "https://api.github.com/users/7will10/following{/other_user}",
"gists_url": "https://api.github.com/users/7will10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/7will10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/7will10/subscriptions",
"organizations_url": "https://api.github.com/users/7will10/orgs",
"repos_url": "https://api.github.com/users/7will10/repos",
"events_url": "https://api.github.com/users/7will10/events{/privacy}",
"received_events_url": "https://api.github.com/users/7will10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,579 | 1,579 | NONE | null | modified method simple_accuracy(),
before:
it's (preds == labels).mean()
This will cause an exception[AttributeError: 'bool' object has no attribute 'mean'],
then after update:
change to accuracy_score(labels,preds),
use this method accuracy_score() in package sklearn.metrics | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2543/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2543",
"html_url": "https://github.com/huggingface/transformers/pull/2543",
"diff_url": "https://github.com/huggingface/transformers/pull/2543.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2543.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2542 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2542/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2542/comments | https://api.github.com/repos/huggingface/transformers/issues/2542/events | https://github.com/huggingface/transformers/issues/2542 | 550,579,413 | MDU6SXNzdWU1NTA1Nzk0MTM= | 2,542 | Dynamic Quantization on ALBERT (pytorch) | {
"login": "Rachnas",
"id": 11646550,
"node_id": "MDQ6VXNlcjExNjQ2NTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/11646550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rachnas",
"html_url": "https://github.com/Rachnas",
"followers_url": "https://api.github.com/users/Rachnas/followers",
"following_url": "https://api.github.com/users/Rachnas/following{/other_user}",
"gists_url": "https://api.github.com/users/Rachnas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rachnas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rachnas/subscriptions",
"organizations_url": "https://api.github.com/users/Rachnas/orgs",
"repos_url": "https://api.github.com/users/Rachnas/repos",
"events_url": "https://api.github.com/users/Rachnas/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rachnas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I know this doesn't directly answer the question, but I have been playing around with quantization of BERT and everything is good until I want to load the model into my notebook. The size of the model is inflated back to over 400 MB from under 200 MB, and the accuracy takes a huge hit. I noticed this when I tried to load the quantized model in the notebook of the pytorch tutorial as well. Have you been able to successful load in and use a quantized model in the first place?",
"I tested `albert-base-v1` as well, since I can't get `albert-base-v2` to work (created an issue), and I can confirm that I am getting the same error. When `outputs = quantized_model(input_ids, labels=labels)` is run, the error occurs.",
"@ElektrikSpark , I can evaluate using quantized bert model as shown in the documentation. Accuracy is low as compared to original Bert. After saving quantized model, I tried loading it from command line, it is not working for me. \r\nWith Albert, quantization step is not completing.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any solution to this error?",
"Same issue here.",
"I found issue while loading the quantized bert model, accuracy score decreases significantly. Does this mean, we can't use quantized BERT for production? I am not sure then why this [tutorial](https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html) was provided",
"I was able to solve this issue by using this\r\n\r\n`model = torch.quantization.quantize_dynamic(\r\n big_model, {torch.nn.Bilinear}, dtype=torch.qint8\r\n )\r\n`\r\n\r\nNotice I used Bilinear instead of Linear, now dont ask me why, I just saw someone do something similar while quantizing GPT2 model",
"for those still looking for a workaround solution to this issue: you may try following changes to AlbertAttention.forward()\r\n\r\n ...\r\n # Should find a better way to do this\r\n # w = (\r\n # self.dense.weight.t()\r\n # .view(self.num_attention_heads, self.attention_head_size, self.hidden_size)\r\n # .to(context_layer.dtype)\r\n # )\r\n # b = self.dense.bias.to(context_layer.dtype)\r\n #\r\n # note that dequantize() is required as quantized tensor with dtype.qint8 cannot be converted to\r\n # dtype.float32 by calling .to(context_layer.dtype).\r\n #\r\n # Different from self.dense.weight(), self.dense.bias() returns regular tensor not quantized tensor\r\n w = (\r\n (self.dense.weight().t().dequantize() if callable(self.dense.weight) else self.dense.weight.t())\r\n .view(self.num_attention_heads, self.attention_head_size, self.hidden_size)\r\n .to(context_layer.dtype)\r\n )\r\n b = (self.dense.bias() if callable(self.dense.bias) else self.dense.bias) \\\r\n .to(context_layer.dtype)\r\n",
"I ran into the same problemοΌlike this:\r\n model_pt_quantized(input_ids=model_inputs[\"input_ids\"], token_type_ids=model_inputs[\"token_type_ids\"], attention_mask=model_inputs[\"attention_mask\"])\r\n File \"/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py\", line 563, in forward\r\n output_hidden_states=output_hidden_states,\r\n File \"/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py\", line 346, in forward\r\n output_hidden_states,\r\n File \"/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py\", line 299, in forward\r\n layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions)\r\n File \"/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py\", line 277, in forward\r\n attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions)\r\n File \"/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py\", line 722, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py\", line 251, in forward\r\n self.dense.weight.t()\r\nAttributeError: 'function' object has no attribute 't'\r\nDoes pytorch support dynamic albert quantization nowοΌ"
] | 1,579 | 1,614 | 1,585 | NONE | null | ## β Questions & Help
Hi,
Thank you for providing great documentation on quantization:
https://pytorch.org/tutorials/intermediate/dynamic_quantization_bert_tutorial.html
I am trying similar steps on Albert Pytorch model, converted "albert-base-v1" to quantized one by applying dynamic quantization on linear layers. At inference stage (with quantized model), I get following error:
w = ( self.dense.weight.t()
.view(self.num_attention_heads, self.attention_head_size, self.hidden_size)
.to(context_layer.dtype)
AttributeError: 'function' object has no attribute 't'
Any pointers about how to solve this error ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2542/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/2542/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2541 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2541/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2541/comments | https://api.github.com/repos/huggingface/transformers/issues/2541/events | https://github.com/huggingface/transformers/issues/2541 | 550,570,009 | MDU6SXNzdWU1NTA1NzAwMDk= | 2,541 | squad convert example to features potential bug | {
"login": "phuongpm241",
"id": 29219768,
"node_id": "MDQ6VXNlcjI5MjE5NzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/29219768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phuongpm241",
"html_url": "https://github.com/phuongpm241",
"followers_url": "https://api.github.com/users/phuongpm241/followers",
"following_url": "https://api.github.com/users/phuongpm241/following{/other_user}",
"gists_url": "https://api.github.com/users/phuongpm241/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phuongpm241/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phuongpm241/subscriptions",
"organizations_url": "https://api.github.com/users/phuongpm241/orgs",
"repos_url": "https://api.github.com/users/phuongpm241/repos",
"events_url": "https://api.github.com/users/phuongpm241/events{/privacy}",
"received_events_url": "https://api.github.com/users/phuongpm241/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"The new and old versions of SQuAD should behave exactly the same when building features. Do you think you could provide an example script that replicates this issue, so that I may take a look at it?",
"Possibly related to #2548 ",
"I was trying to run `run_squad.py` using this script\r\n```\r\nCUDA_VISIBLE_DEVICES=1 python run_squad.py \\\r\n --model_type bert \\\r\n --model_name_or_path bert-base-cased \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --train_file ../clicr_train_squadstyle.1.0.json \\\r\n --predict_file ../clicr_dev_squadstyle.1.0.json \\\r\n --per_gpu_train_batch_size 12 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir ../squad_results/\r\n```\r\n\r\n`max_seq_length` was set to be larger than `doc_stride`.\r\n\r\nIt might be my misunderstanding, but the old version of `examples` was caching a lot of checkpoints and log of the number of iterations over 913 while the new version has 14 iterations per epoch.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
I used an older version of run_squad.py (and everything else in the example). My dataset contains very long documents (1000-2000 tokens). In the past, convert_example_to_features returns about 913 features per 12 examples. However, after a pull I did last night, the number of features is now 14. Both f1 and exact match drops tremendously because of that. I wonder if there were any changes in the pull I did. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2541/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2541/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2540 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2540/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2540/comments | https://api.github.com/repos/huggingface/transformers/issues/2540/events | https://github.com/huggingface/transformers/pull/2540 | 550,519,215 | MDExOlB1bGxSZXF1ZXN0MzYzNDEyMDQ4 | 2,540 | [PyTorch 1.4] Fix failing torchscript test for xlnet | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,579 | 1,579 | MEMBER | null | model.parameters() order is apparently not stable (only for xlnet, for some reason) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2540/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2540/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2540",
"html_url": "https://github.com/huggingface/transformers/pull/2540",
"diff_url": "https://github.com/huggingface/transformers/pull/2540.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2540.patch",
"merged_at": 1579177020000
} |
https://api.github.com/repos/huggingface/transformers/issues/2539 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2539/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2539/comments | https://api.github.com/repos/huggingface/transformers/issues/2539/events | https://github.com/huggingface/transformers/issues/2539 | 550,507,196 | MDU6SXNzdWU1NTA1MDcxOTY= | 2,539 | Finetuning TFDistilBertForQuestionAnswering on SQuAD | {
"login": "melwazir",
"id": 37004311,
"node_id": "MDQ6VXNlcjM3MDA0MzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/37004311?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/melwazir",
"html_url": "https://github.com/melwazir",
"followers_url": "https://api.github.com/users/melwazir/followers",
"following_url": "https://api.github.com/users/melwazir/following{/other_user}",
"gists_url": "https://api.github.com/users/melwazir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/melwazir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/melwazir/subscriptions",
"organizations_url": "https://api.github.com/users/melwazir/orgs",
"repos_url": "https://api.github.com/users/melwazir/repos",
"events_url": "https://api.github.com/users/melwazir/events{/privacy}",
"received_events_url": "https://api.github.com/users/melwazir/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, \r\n\r\n**Regarding the lack of tf examples**:\r\nI am looking for a similar example ( a squad tf one) and found this [issue](https://github.com/huggingface/transformers/issues/2387), where @LysandreJik mentioned that he is currently working on exactly that. \r\n\r\n**Regarding your specific error**: \r\nsquad_convert_examples_to_features allows you to specify if you want to receive the features as pytorch data (default, I believe) or tf dataset. Just use the argument **return_dataset=\"tf\"**\r\n\r\nSince I am looking for a similar example, I would be glad if you could share your code as soon as it works ;) ",
"Thanks for the reply @jwallat. I must have misunderstood that argument. I added it now but now I get an error. Well, two actually. I get this error on the first run:\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n<ipython-input-23-615fcff126d8> in <module>()\r\n----> 1 model.fit(training_features, validation_data = test_features ,epochs=3)\r\n\r\n8 frames\r\n\r\n/tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/training_utils.py in cast_if_floating_dtype_and_mismatch(targets, outputs)\r\n 1339 if isinstance(target, np.ndarray):\r\n 1340 target = ops.convert_to_tensor(target)\r\n-> 1341 if target.dtype != out.dtype:\r\n 1342 new_targets.append(cast_single_tensor(target, dtype=out.dtype))\r\n 1343 else:\r\n\r\nAttributeError: 'str' object has no attribute 'dtype'\r\n\r\nIf I just run the fit statement again, without any changes, I get a different error:\r\n\r\n---------------------------------------------------------------------------\r\n\r\nValueError Traceback (most recent call last)\r\n\r\n<ipython-input-4-865a9afcc901> in <module>()\r\n----> 1 model.fit(training_features, validation_data = test_features ,epochs=3)\r\n\r\n16 frames\r\n\r\n/tensorflow-2.1.0/python3.6/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)\r\n 235 except Exception as e: # pylint:disable=broad-except\r\n 236 if hasattr(e, 'ag_error_metadata'):\r\n--> 237 raise e.ag_error_metadata.to_exception(e)\r\n 238 else:\r\n 239 raise\r\n\r\nValueError: in converted code:\r\n\r\n /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/training_v2.py:677 map_fn\r\n batch_size=None)\r\n /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/training.py:2469 _standardize_tensors\r\n exception_prefix='target')\r\n /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/training_utils.py:510 standardize_input_data\r\n 'for each key in: ' + str(names))\r\n\r\n ValueError: No data provided for \"output_1\". Need data for each key in: ['output_1', 'output_2']\r\n\r\n\r\nI've been at this for a whole day now and I'm stumped. I tried changing the loss function to sparse categorical crossentropy but it doesn't make any difference. I keep getting the same two errors. What am I doing wrong?\r\n\r\nEdit: Excuse my beginner's incompetence. I've been digging through keras code for a while now and now I think the first error is the actual error because along the trace there's a `standardize` function which looks like it changes the original dataset in spite of throwing the error. So on the next run it bypasses the original error line and throws an error at a later stage.\r\n\r\nStill not sure why the first error is happening though! Would appreciate some pointers!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
Hi. I'm trying to finetune a TFDistilBertForQuestionAnswering model on the SQuAD 1.1 dataset, but I'm getting the following error at the "fit" statement:
ValueError: Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'transformers.data.processors.squad.SquadFeatures'>"}), <class 'NoneType'>
Here is my code (running on Colab):
```
!pip install transformers
try:
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from transformers import *
from transformers.data.processors.squad import SquadV1Processor
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFDistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased')
import tensorflow_datasets as tfds
dataset = tfds.load("squad")
processor = SquadV1Processor()
training_examples = processor.get_examples_from_dataset(dataset, evaluate=False)
evaluation_examples = processor.get_examples_from_dataset(dataset, evaluate=True)
training_features = squad_convert_examples_to_features(
examples=training_examples,
tokenizer=tokenizer,
max_seq_length=384,
doc_stride=128,
max_query_length=96,
is_training=True,
)
test_features = squad_convert_examples_to_features(
examples=evaluation_examples,
tokenizer=tokenizer,
max_seq_length=384,
doc_stride=128,
max_query_length=96,
is_training=False,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08, clipnorm=1.0)
model.compile(optimizer=optimizer, loss="mse", metrics=["mae"])
model.fit(training_features, validation_data = test_features ,epochs=3)
```
The documentation for `squad_convert_examples_to_features` says:
> Converts a list of examples into a list of features that can be directly given as input to a model.
It doesn't specify whether that model is a TF or PT model (honestly there's a frustrating lack of TF examples for this repo in general).
Side question: Is my choice of loss function (mse) in this case correct?
Appreciate the help. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2539/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2538 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2538/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2538/comments | https://api.github.com/repos/huggingface/transformers/issues/2538/events | https://github.com/huggingface/transformers/pull/2538 | 550,488,329 | MDExOlB1bGxSZXF1ZXN0MzYzMzg2NTg5 | 2,538 | :lipstick: super | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The CI errors are unrelated to this PR (got the same ones on a commit to master), so I'll try to fix them on another branch @thomwolf @LysandreJik ",
"CI error fixed in #2540 "
] | 1,579 | 1,579 | 1,579 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2538/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2538",
"html_url": "https://github.com/huggingface/transformers/pull/2538",
"diff_url": "https://github.com/huggingface/transformers/pull/2538.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2538.patch",
"merged_at": 1579177036000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2537 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2537/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2537/comments | https://api.github.com/repos/huggingface/transformers/issues/2537/events | https://github.com/huggingface/transformers/issues/2537 | 550,467,541 | MDU6SXNzdWU1NTA0Njc1NDE= | 2,537 | [Question] Help needed to understand how torch.distributed.barrier() works | {
"login": "hlums",
"id": 16907204,
"node_id": "MDQ6VXNlcjE2OTA3MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16907204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hlums",
"html_url": "https://github.com/hlums",
"followers_url": "https://api.github.com/users/hlums/followers",
"following_url": "https://api.github.com/users/hlums/following{/other_user}",
"gists_url": "https://api.github.com/users/hlums/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hlums/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlums/subscriptions",
"organizations_url": "https://api.github.com/users/hlums/orgs",
"repos_url": "https://api.github.com/users/hlums/repos",
"events_url": "https://api.github.com/users/hlums/events{/privacy}",
"received_events_url": "https://api.github.com/users/hlums/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I've provided an answer on Stack Overflow. Please close the issue here on Github. Thanks.",
"Thanks @BramVanroy for the detailed answer! "
] | 1,579 | 1,579 | 1,579 | CONTRIBUTOR | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
I have been trying to understand how torch.distributed.barrier() is being used in the examples in this repo. I posted [this stackoverflow question](https://stackoverflow.com/questions/59760328/how-does-torch-distributed-barrier-work). Maybe someone from the huggingface team can help answering it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2537/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2536 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2536/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2536/comments | https://api.github.com/repos/huggingface/transformers/issues/2536/events | https://github.com/huggingface/transformers/issues/2536 | 550,460,269 | MDU6SXNzdWU1NTA0NjAyNjk= | 2,536 | Universal Sentence Encoder | {
"login": "rdisipio",
"id": 7974270,
"node_id": "MDQ6VXNlcjc5NzQyNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7974270?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rdisipio",
"html_url": "https://github.com/rdisipio",
"followers_url": "https://api.github.com/users/rdisipio/followers",
"following_url": "https://api.github.com/users/rdisipio/following{/other_user}",
"gists_url": "https://api.github.com/users/rdisipio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rdisipio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rdisipio/subscriptions",
"organizations_url": "https://api.github.com/users/rdisipio/orgs",
"repos_url": "https://api.github.com/users/rdisipio/repos",
"events_url": "https://api.github.com/users/rdisipio/events{/privacy}",
"received_events_url": "https://api.github.com/users/rdisipio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"+1 !!\r\nAt reply.ai we have been using USE a lot for Semantic Retrieval. What most impressed us was the Q&A dual encoder model. Works better than anything else I know in case you need semantic similarity between a query and contexts.\r\nIt's true that Tensorflow Hub makes it super easy to work with. But we use your Transformers lib for everything else. So would be nice to have it all in one place.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Bump. I would appreciate this, as it would be handy to have a model geared toward semantic similarity rather than auto-encoding/ auto-regression, as all of the other default models are.\r\nThanks!",
"+1. I think this could be a great addition.",
"+1",
"This might be a more appropriate model to port: https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3\r\n\r\nThe model in the OP was updated with the one linked above, alongwith addition of 15 languages.",
"+1. ",
"+1 or https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html",
"I started working on this independently [here](https://github.com/setu4993/convert-use-tf-pt). Would be great to get some help from anyone interested to get it done faster.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> +1 or https://ai.googleblog.com/2020/08/language-agnostic-bert-sentence.html\r\n\r\n@MiroFurtado : I ported and published the LaBSE model to the HF model hub here: https://huggingface.co/setu4993/LaBSE",
"+1",
"Are there any news on this?",
"Did you try using @setu4993's model shared above?",
"Are we getting it on hugging face?"
] | 1,579 | 1,677 | 1,604 | NONE | null | # πNew model addition
## Model description
Encoder of greater-than-word length text trained on a variety of data.
## Open Source status
* [ ] the model implementation is available: see paper https://arxiv.org/abs/1803.11175
* [ ] the model weights are available: available from tfhub: https://tfhub.dev/google/universal-sentence-encoder/4
* [ ] who are the authors: Google
## Additional context
Standard for sentence embedding, but would like to compare to other methods without having to rely on Tensorflow hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2536/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2536/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2535 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2535/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2535/comments | https://api.github.com/repos/huggingface/transformers/issues/2535/events | https://github.com/huggingface/transformers/pull/2535 | 550,431,924 | MDExOlB1bGxSZXF1ZXN0MzYzMzM5Njcz | 2,535 | Tokenizer.from_pretrained: fetch all possible files remotely | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2535?src=pr&el=h1) Report\n> Merging [#2535](https://codecov.io/gh/huggingface/transformers/pull/2535?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eb59e9f70513b538d2174d4ea1efea7ba8554b58?src=pr&el=desc) will **decrease** coverage by `0.06%`.\n> The diff coverage is `75%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2535?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2535 +/- ##\n==========================================\n- Coverage 74.67% 74.61% -0.07% \n==========================================\n Files 87 87 \n Lines 14798 14798 \n==========================================\n- Hits 11050 11041 -9 \n- Misses 3748 3757 +9\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2535?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/configuration\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2535/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.39% <100%> (+0.03%)` | :arrow_up: |\n| [src/transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2535/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `96.2% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2535/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2535/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `85.64% <65%> (-1.92%)` | :arrow_down: |\n| [src/transformers/file\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2535/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `71.69% <80%> (+1.64%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2535?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2535?src=pr&el=footer). Last update [eb59e9f...a08b24d](https://codecov.io/gh/huggingface/transformers/pull/2535?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Theyβre not *super* slow (meaning each download is a few hundred kB at most)\n\nBut maybe we should have a slow and a super_slow decorator at some point\n\nOn Thu, Jan 16, 2020 at 3:15 AM Thomas Wolf <[email protected]>\nwrote:\n\n> *@thomwolf* commented on this pull request.\n> ------------------------------\n>\n> In tests/test_tokenization_auto.py\n> <https://github.com/huggingface/transformers/pull/2535#discussion_r367280674>\n> :\n>\n> > @@ -56,3 +56,17 @@ def test_tokenizer_from_model_type(self):\n> tokenizer = AutoTokenizer.from_pretrained(DUMMY_UNKWOWN_IDENTIFIER)\n> self.assertIsInstance(tokenizer, RobertaTokenizer)\n> self.assertEqual(len(tokenizer), 20)\n> +\n> + def test_tokenizer_identifier_with_correct_config(self):\n>\n> Should we decorate these tests (and the previous ones) with @slow?\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/2535?email_source=notifications&email_token=AACPXMJDS2SCYQ66THBT7LTQ6AJSFA5CNFSM4KHJ6ZEKYY3PNVWWK3TUL52HS4DFWFIHK3DMKJSXC5LFON2FEZLWNFSXPKTDN5WW2ZLOORPWSZGOCR6PMZY#pullrequestreview-343733863>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AACPXMIDZWRFCQ4EGPSQTNLQ6AJSFANCNFSM4KHJ6ZEA>\n> .\n>\n",
"> Theyβre not *super* slow (meaning each download is a few hundred kB at most) But maybe we should have a slow and a super_slow decorator at some point\r\n> [β¦](#)\r\n> On Thu, Jan 16, 2020 at 3:15 AM Thomas Wolf ***@***.***> wrote: ***@***.**** commented on this pull request. ------------------------------ In tests/test_tokenization_auto.py <[#2535 (comment)](https://github.com/huggingface/transformers/pull/2535#discussion_r367280674)> : > @@ -56,3 +56,17 @@ def test_tokenizer_from_model_type(self): tokenizer = AutoTokenizer.from_pretrained(DUMMY_UNKWOWN_IDENTIFIER) self.assertIsInstance(tokenizer, RobertaTokenizer) self.assertEqual(len(tokenizer), 20) + + def test_tokenizer_identifier_with_correct_config(self): Should we decorate these tests (and the previous ones) with @slow? β You are receiving this because you authored the thread. Reply to this email directly, view it on GitHub <#2535?email_source=notifications&email_token=AACPXMJDS2SCYQ66THBT7LTQ6AJSFA5CNFSM4KHJ6ZEKYY3PNVWWK3TUL52HS4DFWFIHK3DMKJSXC5LFON2FEZLWNFSXPKTDN5WW2ZLOORPWSZGOCR6PMZY#pullrequestreview-343733863>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AACPXMIDZWRFCQ4EGPSQTNLQ6AJSFANCNFSM4KHJ6ZEA> .\r\n\r\nOh yes you're right, tokenizer vocabs are pretty small indeed. Ok for no `@slow`!"
] | 1,579 | 1,579 | 1,579 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2535/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2535/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2535",
"html_url": "https://github.com/huggingface/transformers/pull/2535",
"diff_url": "https://github.com/huggingface/transformers/pull/2535.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2535.patch",
"merged_at": 1579211240000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2534 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2534/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2534/comments | https://api.github.com/repos/huggingface/transformers/issues/2534/events | https://github.com/huggingface/transformers/issues/2534 | 550,241,386 | MDU6SXNzdWU1NTAyNDEzODY= | 2,534 | DistilBERT accuracies on the glue test set. | {
"login": "smr97",
"id": 18290261,
"node_id": "MDQ6VXNlcjE4MjkwMjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/18290261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smr97",
"html_url": "https://github.com/smr97",
"followers_url": "https://api.github.com/users/smr97/followers",
"following_url": "https://api.github.com/users/smr97/following{/other_user}",
"gists_url": "https://api.github.com/users/smr97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smr97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smr97/subscriptions",
"organizations_url": "https://api.github.com/users/smr97/orgs",
"repos_url": "https://api.github.com/users/smr97/repos",
"events_url": "https://api.github.com/users/smr97/events{/privacy}",
"received_events_url": "https://api.github.com/users/smr97/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I want the numbers too.",
"You can check the model card for the evaluation results: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english"
] | 1,579 | 1,657 | 1,584 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
I need to compare my research against distilBERT as a baseline for a paper in progress. I went through your publication and found that you don't report accuracies on the glue test set and instead on the dev set. TINY BERT publication by huawei tries to reproduce your work, but the numbers are lower.
I would really appreciate some help regarding this. As far as i understand, i need to distill the student with the entire Wikipedia + bookcorpus? Is there any way to skip this step (load a model that you might have)?
Alternatively, if you have the latest submission on glue, it would really help to know the numbers.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2534/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2534/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2533 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2533/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2533/comments | https://api.github.com/repos/huggingface/transformers/issues/2533/events | https://github.com/huggingface/transformers/issues/2533 | 550,236,707 | MDU6SXNzdWU1NTAyMzY3MDc= | 2,533 | Gradient accumulation | {
"login": "okanlv",
"id": 29547397,
"node_id": "MDQ6VXNlcjI5NTQ3Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/29547397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/okanlv",
"html_url": "https://github.com/okanlv",
"followers_url": "https://api.github.com/users/okanlv/followers",
"following_url": "https://api.github.com/users/okanlv/following{/other_user}",
"gists_url": "https://api.github.com/users/okanlv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/okanlv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/okanlv/subscriptions",
"organizations_url": "https://api.github.com/users/okanlv/orgs",
"repos_url": "https://api.github.com/users/okanlv/repos",
"events_url": "https://api.github.com/users/okanlv/events{/privacy}",
"received_events_url": "https://api.github.com/users/okanlv/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | Shouldn't we include `len(train_dataloader)` along with `step` here considering `len(train_dataloader)` might be a odd number? In that case, we could accumulate the gradients more times than `gradient_accumulation_steps`.
https://github.com/huggingface/transformers/blob/0412f3d9298cdb8ba7f69570753ec6a07d240c87/examples/run_squad.py#L231 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2533/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2532 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2532/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2532/comments | https://api.github.com/repos/huggingface/transformers/issues/2532/events | https://github.com/huggingface/transformers/pull/2532 | 550,221,008 | MDExOlB1bGxSZXF1ZXN0MzYzMTY1ODEx | 2,532 | Automatic testing of examples in documentation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2532?src=pr&el=h1) Report\n> Merging [#2532](https://codecov.io/gh/huggingface/transformers/pull/2532?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cefd51c50cc08be8146c1151544495968ce8f2ad?src=pr&el=desc) will **increase** coverage by `0.09%`.\n> The diff coverage is `98.59%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2532?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2532 +/- ##\n==========================================\n+ Coverage 74.59% 74.69% +0.09% \n==========================================\n Files 87 87 \n Lines 14807 14863 +56 \n==========================================\n+ Hits 11046 11102 +56 \n Misses 3761 3761\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2532?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_tf\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.82% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `61.09% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_xlm\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbV9yb2JlcnRhLnB5) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/configuration\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2NhbWVtYmVydC5weQ==) | `100% <ΓΈ> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `94.27% <100%> (+0.05%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.9% <100%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_tf\\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `90.41% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jYW1lbWJlcnQucHk=) | `100% <100%> (ΓΈ)` | :arrow_up: |\n| [src/transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.21% <100%> (+0.3%)` | :arrow_up: |\n| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/2532/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2532?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2532?src=pr&el=footer). Last update [cefd51c...904e2b2](https://codecov.io/gh/huggingface/transformers/pull/2532?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,579 | 1,579 | 1,579 | MEMBER | null | Adds a test that tests the examples in the documentation.
Addsa "Glossary" page for recurring arguments.
Updates the documentation of pytorch & tensorflow models.
Models done:
- [x] ALBERT
- [x] BERT
- [x] GPT-2
- [x] GPT
- [x] Transformer XL
- [x] XLNet
- [x] XLM
- [x] CamemBERT
- [x] RoBERTa
- [x] DistilBERT
- [x] CTRL
To be added (not currently in the docs):
- [x] XLM-RoBERTa
This PR will be merged once these changes have been done. The remaining documentation changes are the following:
- [ ] Update tokenizer documentation
- [ ] Put meaningful examples on each model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2532/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2532/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2532",
"html_url": "https://github.com/huggingface/transformers/pull/2532",
"diff_url": "https://github.com/huggingface/transformers/pull/2532.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2532.patch",
"merged_at": 1579790326000
} |
https://api.github.com/repos/huggingface/transformers/issues/2531 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2531/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2531/comments | https://api.github.com/repos/huggingface/transformers/issues/2531/events | https://github.com/huggingface/transformers/pull/2531 | 550,120,560 | MDExOlB1bGxSZXF1ZXN0MzYzMDgyOTgx | 2,531 | Serving improvements | {
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> This PR brings some improvements over the CLI serving command.\r\n> \r\n> Changes:\r\n> \r\n> * Expose the possibility to change the number of underlying FastAPI workers.\r\n> * Make forward() async so it doesn't timeout in the middle a requests.\r\n> * Fixed USE_TF, USE_TORCH env vars fighting each other.\r\n\r\nHi @mfuntowicz \r\nI have tested multi workers on my PC with localhost, and I observe that getting more workers does not make Query request per sec any faster. \r\n\r\nI suppose the bottleneck somewhere, maybe the model file ?\r\n",
"Hi @zhoudoufu,\r\n\r\nThere is a high probability if all the workers are running on the same GPU there still no to get sequential access to the hardware.\r\n\r\nOne possible improvement would be to specify env variable / GPU ordinal for each worker instance. I may try to have a look in the near future.\r\n\r\nMorgan",
"Hi @mfuntowicz \r\nFor my test, I use only CPUs. And I do observe CPUs' usage goes up when having more workers. For CPU usage I think there might be other causes."
] | 1,579 | 1,651 | 1,579 | MEMBER | null | This PR brings some improvements over the CLI serving command.
Changes:
- Expose the possibility to change the number of underlying FastAPI workers.
- Make forward() async so it doesn't timeout in the middle a requests.
- Fixed USE_TF, USE_TORCH env vars fighting each other. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2531/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2531",
"html_url": "https://github.com/huggingface/transformers/pull/2531",
"diff_url": "https://github.com/huggingface/transformers/pull/2531.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2531.patch",
"merged_at": 1579535784000
} |
https://api.github.com/repos/huggingface/transformers/issues/2530 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2530/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2530/comments | https://api.github.com/repos/huggingface/transformers/issues/2530/events | https://github.com/huggingface/transformers/issues/2530 | 550,054,948 | MDU6SXNzdWU1NTAwNTQ5NDg= | 2,530 | SentencePiece Error with AlbertTokenizer using google pretrained chinese model | {
"login": "ubuntu733",
"id": 5701884,
"node_id": "MDQ6VXNlcjU3MDE4ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5701884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ubuntu733",
"html_url": "https://github.com/ubuntu733",
"followers_url": "https://api.github.com/users/ubuntu733/followers",
"following_url": "https://api.github.com/users/ubuntu733/following{/other_user}",
"gists_url": "https://api.github.com/users/ubuntu733/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ubuntu733/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ubuntu733/subscriptions",
"organizations_url": "https://api.github.com/users/ubuntu733/orgs",
"repos_url": "https://api.github.com/users/ubuntu733/repos",
"events_url": "https://api.github.com/users/ubuntu733/events{/privacy}",
"received_events_url": "https://api.github.com/users/ubuntu733/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Indeed, this implementation of ALBERT only supports SentencePiece as its tokenizer.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using albert:
Language I am using the model on , Chinese:
The problem arise when using:
* [ ] the official example scripts: (give details)
`AlbertTokenizer.from_pretrained(vocab)`
It shows:
> Traceback (most recent call last):
File "/home/shenchengen/venv/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 438, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/shenchengen/venv/lib/python3.6/site-packages/transformers/tokenization_albert.py", line 90, in __init__
self.sp_model.Load(vocab_file)
File "/home/shenchengen/venv/lib/python3.6/site-packages/sentencepiece.py", line 118, in Load
return _sentencepiece.SentencePieceProcessor_Load(self, filename)
RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(73) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
sentencepiece_processor.cc(558) LOG(ERROR) /sentencepiece/src/sentencepiece_processor.cc(124) [model_] Model is not initialized.
Returns default value 0
sentencepiece_processor.cc(558) LOG(ERROR) /sentencepiece/src/sentencepiece_processor.cc(124) [model_] Model is not initialized.
Returns default value 0
**It seems origin google chinese pretrained model do not use sentence piece,but word piece, so the pretrained model has no sentence piece model but has vocab_chinese.txt** [albert-issues](https://github.com/google-research/ALBERT/issues/58)
### Expected behavior:
AlbertTokenizer support word piece method
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2530/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2530/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2529 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2529/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2529/comments | https://api.github.com/repos/huggingface/transformers/issues/2529/events | https://github.com/huggingface/transformers/issues/2529 | 550,043,911 | MDU6SXNzdWU1NTAwNDM5MTE= | 2,529 | Updating the issue template, directing general question to SO | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi Bram, first of all we want to reiterate our appreciation for what you've been doing β the community is very lucky to have you. \r\n\r\nYou raise some good points. Would you like to update the issue templates, updating what needs to be updated + linking to Stack Overflow for support requests?\r\n\r\nIn the longer term, we've floated a number of different ideas: \r\n- open a [Discourse](https://www.discourse.org/) forum on discourse.huggingface.co or equivalent\r\n- open a Discord chat server (?)\r\n- open up our internal Slack :)\r\n\r\nThoughts?",
"Thanks @julien-c for the nice words. It's not much, but I help where and when I can.\r\n\r\nI think that the decision of how to support the community best depends on the answer of how much time/effort/resources you (as a company) can put into it. I don't mean the platform, but the people that dedicate time to provide support. I can imagine that this is not lucrative because you don't really get anything in return, so it is not an easy decision. It is an important one, though, because as you can see: when I posted this not even two weeks ago there were 375 open issues, now there are 404.\r\n\r\nThree examples come to mind of types of support that I came into contact with:\r\n\r\n- numpy: they [had a discussion](https://github.com/numpy/numpy.org/issues/28) about the issue of support last year and one of the maintainers [said](https://github.com/numpy/numpy.org/issues/28#issuecomment-526878616): \"I appreciate the suggestions (Reddit also), but anything but Stack Overflow seems like redirecting people to the wrong place.\"\r\n- [PyTorch Discourse forums](https://discuss.pytorch.org/). PyTorch itself is _huge_, and still many questions only have zero or one reply. Luckily, PyTorch has invested some resources in support seeing there are some developers actively contributing to the forum. _But still..._ many questions go unanswered. In reality, most questions are posted on Stack Overflow, I think.\r\n- [Gitter for spaCy](https://gitter.im/explosion/spaCy). At first I tried to help here and there, but it's just too much with almost no other support. Things pile up quickly, and even when a user wants to help, they're just overwhelmed by a stream of questions. I'm also not a big fan of this format (discord, gitter, slack) to actually help with issues because of how \"topics\" work. I know that you can reply to someone starting a \"thread\", but imo it's all a bit messy.\r\n\r\n**Summary** (but still quite long): if you plan to extend the resources that are going to issue support, I think the discourse forum is the best option. I wouldn't really bother with discord. Opening up Slack is nice, but it should be very clear what it should be used for, then. I wouldn't allow general questions to be asked there, but rather the more one-on-one questions concerning \"I have a new model and tokenizer that I wish to add to transformers\", i.e. the questions that you can discuss with words where you don't necessarily need to write whole blocks of code.\r\n\r\nIf you decide that spending more resources on support is not in your plan, then I would just move all general questions to Stack Overflow. I know it's \"the easy\" option, but I think it's the most viable one. All general questions in **one place**, tagged with the correct tag, and **a whole community that can help out** for general PyTorch/Tensorflow questions. On top of that, it's **free advertisement**, too, because your library will pop up here and there and will get noticed by others. Something you won't have on your private forum.\r\n\r\n**tl;dr** \r\nIf you **will** put resources towards more support\r\n- Discourse forum\r\n- Slack for contributors?\r\n- Github for bugs, feature requests\r\n\r\nIf you **won't** put resources towards more support\r\n- No discourse forum\r\n- Stack Overflow for all general questions\r\n- Slack for contributors?\r\n- Github for bugs, feature requests\r\n\r\nJust my two cents, of course.",
"Reopened to trigger more discussion",
"These are very good points, thanks a lot for sharing and summarizing your thoughts @BramVanroy ",
"Regarding to the Issue template. Currently the following \"categories\" can be used (opening a new issue):\r\n\r\n\r\n\r\nI think it would be a good idea to automatically add labels for these categories! At the moment I can't really filter out bugs or general questions.",
"I agree that adding automatic labels would definitely make life easier when looking for specific issues. The templates are in a good place thanks to @BramVanroy, automatic labeling should be the next step.",
"we might want to look into [code owners](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners) as a building block for this",
"> we might want to look into [code owners](https://help.github.com/en/github/creating-cloning-and-archiving-repositories/about-code-owners) as a building block for this\r\n\r\nCode owners might also seem like a good idea with respect to storing the README.md files of user models in `model_cards/`, as you [suggested yesterday](https://github.com/huggingface/transformers/issues/2520#issuecomment-579009439). So that everyone can edit their own model card when need be. That being said, that might give more overhead (in the CODEOWNERS file) with not much benefit (reviewing changes to model cards shouldn't take a long time).\r\n\r\n---\r\n\r\nI propose the following automatic labels:\r\n\r\n- New benchmark: `benchmark`\r\n- New model addition: `model-addition`\r\n- Bug report: `bug-report` (after review by a member, and verifying that it actually is a bug, the label should then be changed to `bug` or another relevant label)\r\n- Feature request: `feature`\r\n- Migration from pytorch-pretrained-bert of pytorch-transformers: `migration`\r\n- Questions & Help: `general`\r\n\r\nIf agreed, I can do a PR again. Discussion welcome, of course.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I think @BramVanroy did most of this so closing this issue. Thanks Bram! π€ ",
"i would +1 opening a discord server. Its pretty great for creating a general point to congregate and categorising multiple subject-channels. I have lots of smaller questions about this project that I don't feel are appropriate for SO or a github issue.",
"> i would +1 opening a discord server. Its pretty great for creating a general point to congregate and categorising multiple subject-channels. I have lots of smaller questions about this project that I don't feel are appropriate for SO or a github issue.\r\n\r\nThe problem is that with this kind of format there are billions of questions but barely any answers. spaCy's gitter is such an example. I guess something like that could be set up but without the guarantee of any response. ",
"It's of course anecdotal, but i'm a member of many framework-related discords, and they're the most responsive places typically, compared to IRC, gitter, reddit etc. In my again anecdotal experience, gitter and github are the most barren places for any conversation. I suggest we just do it and see how it goes, its only 1 click to make a discord ",
"@julien-c What do you think? Should we open a discord (without guarantee)?",
"Still dying for this :D ",
"Found this thread while googling to see if the HuggingFace community had a Discord. Was it ever created? I feel like it would be a really nice place for people to discuss NLP stuff more freely and share their findings :)",
"@andantillon Nope, but we do have a [forum](https://discuss.huggingface.co/)!"
] | 1,579 | 1,606 | 1,585 | COLLABORATOR | null | ## π Feature
In the last couple of months, `transformers` has seen an exponential increase in interest; you have exceeded 20k stars, congrats! @thomwolf wrote a blog post on how to open-source your code for a larger audience, but as expected, a side-effect is that you'll get more issues and more pull requests that need to be monitored. Not too long ago there were only 300 open issues, and now we're at 375. On top of that, many issues are closed by the stale bot and not even _actually_ solved, which is unfortunate.
I am no expert in the finer details of transformers and their implementation, but I often make do. When I have a free moment, I go over issues and see where I can help. Things can get frustrating, though, when general question about PyTorch or Tensorflow are asked, or when people have a question and don't fill in the template, or ask one-sentence questions. It makes me lose interest and enthusiasm to help out.
Not all of this can be solved, but perhaps it can be of use to direct a stream of questions to Stack Overflow. A few weeks ago I created the tag [`huggingface-transformers`](https://stackoverflow.com/tags/huggingface-transformers/info), intended for users who have a question about their specific use-case whilst using the transformers library. Considering that it seems hard for you as a company to keep track of all issues (which, again, is understandable), I would propose to direct the "Questions & Help" of the issue template to Stack Overflow. In other words, **keep Github for feature requests, bug reports, and benchmarks and models**, but nothing else. That way, it is easier to keep an overview of _real issues_ without them piling up and getting closed by stalebot, and on top of that you get a huge (free!) support team which is the open source community that is active on Stack Overflow.
It is just an idea, of course, but I think it could help out in the logistics of things.
PS: the issue template also still refers to 'Pytorch Transformers' instead of 'Transformers'.
PPS: I am aware that I also still ask questions and that I am no expert in transformers by far, so I really don't intend to place this issue from atop my high horse. But due to the increased interest and following increased issues and question, it seems a good idea to direct future general questions to a more open platform. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2529/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2529/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2528 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2528/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2528/comments | https://api.github.com/repos/huggingface/transformers/issues/2528/events | https://github.com/huggingface/transformers/issues/2528 | 550,040,692 | MDU6SXNzdWU1NTAwNDA2OTI= | 2,528 | [Question] Add extra sublayer for each layer of Transformer | {
"login": "robinsongh381",
"id": 42966248,
"node_id": "MDQ6VXNlcjQyOTY2MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/42966248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robinsongh381",
"html_url": "https://github.com/robinsongh381",
"followers_url": "https://api.github.com/users/robinsongh381/followers",
"following_url": "https://api.github.com/users/robinsongh381/following{/other_user}",
"gists_url": "https://api.github.com/users/robinsongh381/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robinsongh381/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robinsongh381/subscriptions",
"organizations_url": "https://api.github.com/users/robinsongh381/orgs",
"repos_url": "https://api.github.com/users/robinsongh381/repos",
"events_url": "https://api.github.com/users/robinsongh381/events{/privacy}",
"received_events_url": "https://api.github.com/users/robinsongh381/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Anything is possible, if you want to! But it's not straightforward, I think. You can have a look at `BertLayer` where I would assume that you make your changes.\r\n\r\nhttps://github.com/huggingface/transformers/blob/dfe012ad9d6b6f0c9d30bc508b9f1e4c42280c07/src/transformers/modeling_bert.py#L365-L373",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | ## β Questions & Help
Hello !
BERT base has 12 layers and each layer includes follwing sublayer ( and of course, add norm)
` {self-attention -> feed-foward} `
I was wondering if there is a way of adding extra unit to this sublayer , for example,
`{self-attention -> feed-foward -> **LSTM**}`
Thanks
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2528/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2527 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2527/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2527/comments | https://api.github.com/repos/huggingface/transformers/issues/2527/events | https://github.com/huggingface/transformers/issues/2527 | 550,020,997 | MDU6SXNzdWU1NTAwMjA5OTc= | 2,527 | How to get the output in other layers from Bert? | {
"login": "zy4bvb",
"id": 50108711,
"node_id": "MDQ6VXNlcjUwMTA4NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/50108711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zy4bvb",
"html_url": "https://github.com/zy4bvb",
"followers_url": "https://api.github.com/users/zy4bvb/followers",
"following_url": "https://api.github.com/users/zy4bvb/following{/other_user}",
"gists_url": "https://api.github.com/users/zy4bvb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zy4bvb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zy4bvb/subscriptions",
"organizations_url": "https://api.github.com/users/zy4bvb/orgs",
"repos_url": "https://api.github.com/users/zy4bvb/repos",
"events_url": "https://api.github.com/users/zy4bvb/events{/privacy}",
"received_events_url": "https://api.github.com/users/zy4bvb/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Have a look at the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), particularly at the point about 'outputs'. You'll see that when you use `output_hidden_states=True`, you'll get _all_ outputs back, like so:\r\n\r\n```python\r\nmodel = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\n```",
"> Have a look at the [documentation](https://huggingface.co/transformers/model_doc/bert.html#bertmodel), particularly at the point about 'outputs'. You'll see that when you use `output_hidden_states=True`, you'll get _all_ outputs back, like so:\r\n> \r\n> ```python\r\n> model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)\r\n> ```\r\n\r\nThank you! "
] | 1,579 | 1,579 | 1,579 | NONE | null | ## β Questions & Help
I want to analyze the information that every bert layers contains. But i found the BertModel only output the sentence embbeding and the CLS embbeding.
<!-- A clear and concise description of the question. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2527/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2526 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2526/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2526/comments | https://api.github.com/repos/huggingface/transformers/issues/2526/events | https://github.com/huggingface/transformers/pull/2526 | 549,946,278 | MDExOlB1bGxSZXF1ZXN0MzYyOTQwOTcx | 2,526 | modified method simple_accuracy(), before:(preds == labels).mean() This⦠| {
"login": "7will10",
"id": 31179081,
"node_id": "MDQ6VXNlcjMxMTc5MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/31179081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/7will10",
"html_url": "https://github.com/7will10",
"followers_url": "https://api.github.com/users/7will10/followers",
"following_url": "https://api.github.com/users/7will10/following{/other_user}",
"gists_url": "https://api.github.com/users/7will10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/7will10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/7will10/subscriptions",
"organizations_url": "https://api.github.com/users/7will10/orgs",
"repos_url": "https://api.github.com/users/7will10/repos",
"events_url": "https://api.github.com/users/7will10/events{/privacy}",
"received_events_url": "https://api.github.com/users/7will10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,579 | 1,579 | 1,579 | NONE | null | modified method simple_accuracy(),
before:
it's (preds == labels).mean()
This will cause an exception[AttributeError: 'bool' object has no attribute 'mean'],
then after update:
change to accuracy_score(labels,preds),
use this method accuracy_score() in package sklearn.metrics | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2526/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2526/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2526",
"html_url": "https://github.com/huggingface/transformers/pull/2526",
"diff_url": "https://github.com/huggingface/transformers/pull/2526.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2526.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/2525 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2525/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2525/comments | https://api.github.com/repos/huggingface/transformers/issues/2525/events | https://github.com/huggingface/transformers/issues/2525 | 549,909,207 | MDU6SXNzdWU1NDk5MDkyMDc= | 2,525 | Error when running demo script in T5Model | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I found the error is because the `lm_labels` is not poped out. One possible solution: change https://github.com/huggingface/transformers/blob/dfe012ad9d6b6f0c9d30bc508b9f1e4c42280c07/src/transformers/modeling_t5.py#L864 to \r\n```\r\n lm_labels = kwargs.pop('decoder_lm_labels', None)\r\n if not lm_labels:\r\n lm_labels = kwargs.pop('lm_labels', None)\r\n```\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | ## π Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): T5
Language I am using the model on (English, Chinese....): English
The problem arise when using:
* [ ] the official example scripts: When I use the example script in https://github.com/huggingface/transformers/blob/dfe012ad9d6b6f0c9d30bc508b9f1e4c42280c07/src/transformers/modeling_t5.py#L716-L724
It shows
```
Traceback (most recent call last):
File "hello.py", line 6, in <module>
model = T5Model.from_pretrained('t5-small')
File "/home/jinggu/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/home/jinggu/anaconda3/lib/python3.7/site-packages/transformers/modeling_t5.py", line 859, in forward
encoder_outputs = self.encoder(hidden_states, **kwargs_encoder)
File "/home/jinggu/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'lm_labels'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2525/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2524 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2524/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2524/comments | https://api.github.com/repos/huggingface/transformers/issues/2524/events | https://github.com/huggingface/transformers/issues/2524 | 549,901,641 | MDU6SXNzdWU1NDk5MDE2NDE= | 2,524 | Will you release the pre-train script for T5? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
There is a fine-tune script for BERT/GPT https://github.com/huggingface/transformers/blob/master/examples/run_lm_finetuning.py. Will you include T5 into this script? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2524/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2524/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2523 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2523/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2523/comments | https://api.github.com/repos/huggingface/transformers/issues/2523/events | https://github.com/huggingface/transformers/issues/2523 | 549,857,822 | MDU6SXNzdWU1NDk4NTc4MjI= | 2,523 | Tokenizer encoding functions don't support 'left' and 'right' values for `pad_to_max_length` | {
"login": "ranamihir",
"id": 8270471,
"node_id": "MDQ6VXNlcjgyNzA0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8270471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ranamihir",
"html_url": "https://github.com/ranamihir",
"followers_url": "https://api.github.com/users/ranamihir/followers",
"following_url": "https://api.github.com/users/ranamihir/following{/other_user}",
"gists_url": "https://api.github.com/users/ranamihir/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ranamihir/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ranamihir/subscriptions",
"organizations_url": "https://api.github.com/users/ranamihir/orgs",
"repos_url": "https://api.github.com/users/ranamihir/repos",
"events_url": "https://api.github.com/users/ranamihir/events{/privacy}",
"received_events_url": "https://api.github.com/users/ranamihir/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, the documentation could definitely be improved in that regard but what the docstring means is that it will follow the class attribute `padding_side`:\r\n\r\n```py\r\n>>> from transformers import BertTokenizer\r\n>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n>>> text = 'Eiffel Tower'\r\n>>> print(tokenizer.encode(text, max_length=20, pad_to_max_length=False))\r\n[101, 1041, 13355, 2884, 3578, 102]\r\n>>> print(tokenizer.encode(text, max_length=20, pad_to_max_length=True))\r\n[101, 1041, 13355, 2884, 3578, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\r\n>>> tokenizer.padding_side = 'left'\r\n>>> print(tokenizer.encode(text, max_length=20, pad_to_max_length=True))\r\n[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 101, 1041, 13355, 2884, 3578, 102]\r\n```\r\n",
"I slightly modified the documentation in 9aeb0b9 and c024ab9\r\n",
"Ahh, gotcha. Thanks for such a quick response! On a related note, would it not be better (to maintain consistency) to have the `padding_side` as an argument for `encode()` instead of setting it as a class attribute? We're providing the rest of them all inside the function.",
"Well padding side is more of a model attribute than an encode functionality. Some models were pre-trained with a padding side on the right (e.g. BERT, GPT-2) while others (e.g. XLNet) pad on the left, and need to be padded on the left in order to obtain coherent results.\r\n\r\nHaving it as a tokenizer attribute allows to set model-relative defaults, while allowing a change if need be!",
"That's a good point. I guess what I had in mind is to have that param in the function as well, besides it being a class attribute -- just like\r\n`max_length` -- to which a similar rationale applies (and it defaults to the model default). But it's not really that important, I guess you guys have more important things to do like bringing out those insanely fast tokenizers. :)"
] | 1,579 | 1,579 | 1,579 | NONE | null | ## π Bug
In the tokenizer encoding functions (`encode`, `encode_plus`, etc.), it seems `pad_to_max_length` only supports boolean values. In the [documentation](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L801-L805), it's mentioned it can also be set to `left` or `right`, but in the [code](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L1151-L1162) these values are never checked for -- it's assumed that it's a boolean.
A simple illustration:
```python
>>> from transformers import BertTokenizer
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> text = 'Eiffel Tower'
>>> print(tokenizer.encode(text, max_length=20, pad_to_max_length=False))
[101, 1041, 13355, 2884, 3578, 102]
>>> print(tokenizer.encode(text, max_length=20, pad_to_max_length=True))
[101, 1041, 13355, 2884, 3578, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> print(tokenizer.encode(text, max_length=20, pad_to_max_length='left'))
[101, 1041, 13355, 2884, 3578, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
>>> print(tokenizer.encode(text, max_length=20, pad_to_max_length='right'))
[101, 1041, 13355, 2884, 3578, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2523/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2522 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2522/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2522/comments | https://api.github.com/repos/huggingface/transformers/issues/2522/events | https://github.com/huggingface/transformers/issues/2522 | 549,674,540 | MDU6SXNzdWU1NDk2NzQ1NDA= | 2,522 | https://s3.amazonaws.com/models.huggingface.co/xxx/pytorch_model.bin failed or can not open at xxx/.cache/xxxxxxxxxx | {
"login": "578123043",
"id": 16147509,
"node_id": "MDQ6VXNlcjE2MTQ3NTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/16147509?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/578123043",
"html_url": "https://github.com/578123043",
"followers_url": "https://api.github.com/users/578123043/followers",
"following_url": "https://api.github.com/users/578123043/following{/other_user}",
"gists_url": "https://api.github.com/users/578123043/gists{/gist_id}",
"starred_url": "https://api.github.com/users/578123043/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/578123043/subscriptions",
"organizations_url": "https://api.github.com/users/578123043/orgs",
"repos_url": "https://api.github.com/users/578123043/repos",
"events_url": "https://api.github.com/users/578123043/events{/privacy}",
"received_events_url": "https://api.github.com/users/578123043/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The correct URL is `https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin`\r\n\r\nDid you see the url above somewhere?",
"\r\nTo resoleved this , adding `config_class pretrained_model_archive_map base_model_prefix` at Class definition (not def __init__) as the image"
] | 1,579 | 1,579 | 1,579 | NONE | null | https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin
Could you access on https://s3.amazonaws.com/models.huggingface.co/bert/roberta-base/pytorch_model.bin?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2522/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2521 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2521/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2521/comments | https://api.github.com/repos/huggingface/transformers/issues/2521/events | https://github.com/huggingface/transformers/pull/2521 | 549,632,026 | MDExOlB1bGxSZXF1ZXN0MzYyNjg3NTEw | 2,521 | Bias should be resized with the weights | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2521?src=pr&el=h1) Report\n> Merging [#2521](https://codecov.io/gh/huggingface/transformers/pull/2521?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6c32d8bb95aa81de6a047cca5ae732b93b9db020?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2521?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2521 +/- ##\n==========================================\n+ Coverage 73.24% 73.25% +<.01% \n==========================================\n Files 87 87 \n Lines 15008 15011 +3 \n==========================================\n+ Hits 10993 10996 +3 \n Misses 4015 4015\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2521?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `64.42% <100%> (+0.17%)` | :arrow_up: |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.72% <100%> (+0.02%)` | :arrow_up: |\n| [src/transformers/modeling\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2521/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `78.91% <100%> (+0.05%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2521?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2521?src=pr&el=footer). Last update [6c32d8b...b7832ab](https://codecov.io/gh/huggingface/transformers/pull/2521?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,579 | 1,579 | 1,579 | MEMBER | null | Created a link between the linear layer bias and the model attribute bias. This does not change anything for the user nor for the conversion scripts, but allows the `resize_token_embeddings` method to resize the bias as well as the weights of the decoder.
Added a test. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2521",
"html_url": "https://github.com/huggingface/transformers/pull/2521",
"diff_url": "https://github.com/huggingface/transformers/pull/2521.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2521.patch",
"merged_at": 1579027426000
} |
https://api.github.com/repos/huggingface/transformers/issues/2520 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2520/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2520/comments | https://api.github.com/repos/huggingface/transformers/issues/2520/events | https://github.com/huggingface/transformers/issues/2520 | 549,598,736 | MDU6SXNzdWU1NDk1OTg3MzY= | 2,520 | Descriptions of shared models and interaction with contributors | {
"login": "chinisan",
"id": 59874749,
"node_id": "MDQ6VXNlcjU5ODc0NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/59874749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chinisan",
"html_url": "https://github.com/chinisan",
"followers_url": "https://api.github.com/users/chinisan/followers",
"following_url": "https://api.github.com/users/chinisan/following{/other_user}",
"gists_url": "https://api.github.com/users/chinisan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chinisan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chinisan/subscriptions",
"organizations_url": "https://api.github.com/users/chinisan/orgs",
"repos_url": "https://api.github.com/users/chinisan/repos",
"events_url": "https://api.github.com/users/chinisan/events{/privacy}",
"received_events_url": "https://api.github.com/users/chinisan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"In a sense, this is related to the discussion that we had over at https://github.com/huggingface/transformers/pull/2281#issuecomment-570574944. The [answer](https://github.com/huggingface/transformers/pull/2281#issuecomment-571418343) by @julien-c was that they are aware of the difficulties and sensitivities that custom models bring, but that they are still figuring out how they want to approach this.\r\n\r\nAn up-voting system seems a good idea, and I also want to emphasise that good documentation for each model is paramount, explaining how it was trained (hyperparameters, data, task), and how it possibly differs from the \"standard\" or \"official\" implementation.",
"If I am not missing anything here, `AutoModel` will only load the language model, but not a model for question answering (incl. the prediction head).\r\n\r\nTry exchanging the line where you load the model with: \r\n```\r\nmodel = AutoModelForQuestionAnswering.from_pretrained(\"ktrapeznikov/albert-xlarge-v2-squad-v2\", cache_dir=model_directory)\r\n```\r\n\r\nSo right now, I guess we need to know what type of community model we are loading?!\r\nIt would be helpful to have some information in the stored configs to infer what task the model was trained on. I found the param `finetuning_task` there, but it's `null` for all models I have checked. \r\n\r\n\r\n\r\n ",
"@tholor Thanks that worked! ",
"Hi all, \r\n\r\n- the sample code on the model pages should indeed showcase the `AutoModelXXX` variant that uses the head(s) defined in the weights. Fix coming soon β οΈ.\r\n- For model description, you can already add a **`README.md`** file to your shared folder on S3 and it will be rendered on your model's page: see e.g. https://huggingface.co/dbmdz/bert-base-german-uncased from @stefan-it \r\n - You can use this file as a model card to describe your model,Β which datasets did you train on, eval results, etc.\r\n - We'll also add metadata such as language, downstream task, etc. which will let us filter results on the models listing page (e.g. \"find models for QA in π³π±\")\r\n\r\n- Finally, we are thinking of storing the README.md files inside a `model_cards/` folder inside the transformers repo itself. i.e. use git and GitHub to let the community collaborate on model READMEs, not just the model author.\r\n\r\nThe kind of editing rules we could put into place would be :\r\n- anyone can propose a PR anywhere.\r\n- on \"main\" canonical models we (HuggingFace and/or maintainers) validate the PRs.\r\n- on \"user\" models the model's author(s) decide. (they are pinged automatically, and they can validate/refuse)\r\n\r\nThoughts?",
"Sounds great, @julien-c !\r\n\r\nI like the idea of having a `readme.md`. Given the variety of tasks, it might be difficult to press everything into a structured config / modelcard format. Nevertheless, I would appreciate having the most important metadata (language, downstream task, training data, performance) in a config and maybe even making it a requirement for upload. Otherwise, it might become easily a big mess after a while and comparison will become more difficult (e.g. eval results).\r\n\r\nRegarding the git workflow for the readme: I like it. If I got it right, people would still be able to upload an initial readme via CLI and only subsequent changes are managed via git? Otherwise, it could slow down the upload of new models a lot.",
"Being relatively strict, as @tholor suggests, seems like a good idea to prevent a forest of random models (see what I did there?). As metadata, I think at least language should definitely be required.\r\n\r\nI very much like the idea of being able to filter by metadata. One could imagine scenarios where you want to filter by language, trained head, upload date, architecture, and so on.\r\n\r\nThe difference between canonical and user models should be made very clear, though. Take for example the models that were explicitly added to the repo (bert-base-german-dbmdz-cased for instance) but that were created by users. Are those canonical (it seems like that because it's part of `BERT_PRETRAINED_MODEL_ARCHIVE_MAP`)?\r\n\r\nLooking forward to it @julien-c!",
"Hi π€\r\n\r\nmy thoughts on this issue:\r\n\r\nI really like that an author of a model can upload a README file via `transformers-cli` interface. This really speeds up a change/additions to the README - and won't require a review process (so I'm not stealing someone's time for a simple README change). On the other side there's no detailed overview of trained models (languages, training data, results), except I visit all model pages.\r\n\r\nSo here's my suggestion (and this hopefully fixes the model name issues that were discussed by @BramVanroy ):\r\n\r\nWhenever a user model is added to the `*_PRETRAINED_MODEL_ARCHIVE_MAP` the model alias must be identical to the S3 model name. E.g. I would rename `bert-base-german-dbmdz-cased` to `dbmdz/bert-base-german-cased`. This may break backward compatibility (but I could live with that).\r\n\r\nAdding a new user model to `*_PRETRAINED_MODEL_ARCHIVE_MAP` is done in a PR and this PR requires an additional README file for a detailed model description (location: `model_cards/`). \r\n\r\nWe should really define a kind of template for that README that includes all relevant information like language, training data and results on downstream tasks.\r\n\r\nI could also image a kind of json-based model card, that will be parsed by the Hugging Face model hub page, so that we can search for models/languages.",
"@stefan-it That solves the issue I had in part, indeed. The question remains (as was discussed elsewhere), which models go into the `_PRETRAINED_MODEL_ARCHIVE_MAP` and is it even still necessary if you can then download the model via `user/weights-name`? I would then remove those user models from the archive map, and only make them available through the `user/weights-name` directive. That way it is clear that the `_PRETRAINED_MODEL_ARCHIVE_MAP` contains canonical models that HuggingFace added themselves. Any and all other models should then be loaded through the user approach. What do you think?",
"I think that's a good idea :+1: \r\n\r\nIn addition to the `model_cards` folder it would be great to have an overview page of all available user models (that were added via PR) to e.g. to find all available BERT models.\r\n\r\nI'm thinking of this kind of overview page:\r\n\r\nhttp://pfliu.com/ner/ner.html\r\n\r\nwith filters like architecture, language or amount of training data π€",
"> * the sample code on the model pages should indeed showcase the `AutoModelXXX` variant that uses the head(s) defined in the weights\r\n\r\nShould now be implemented on the website, please let us know if you see anything fishy. Thanks!\r\n\r\n([example model page](https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1))",
"It's really starting to take shape, I like it!\r\n\r\nSome overall suggestions design-wise:\r\n\r\n- there is overflow on the website's x-axis, even on my 2560x1440 monitor\r\n- the column width of the website is very small. So small even that a single `.from_pretrained(...)` command doesn't fit in one line. Perhaps make the width larger, but only for the code blocks?\r\n- in the usage section, code seems to be styled twice (once as a block, and once as marked text), same for citation section\r\n- some more vertical space between code blocks would be nice\r\n- code is overflowing their containers (should be manageable when making code blocks wider + using `pre {max-width: 100%; overflow-x: auto}`\r\n- on mobile: the tables overflow, too\r\n- on mobile: the model name/heading is too large which causes an overflow\r\n\r\nMobile testing is probably quite important. The most new models that I come into contact with are from twitter, so I open the model cards on my phone. I suppose many others, too.\r\n\r\nGreat job already, I really like it!\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I just re-read this thread and it feels like even though there's still lots of stuff todo, we're starting to get there!\r\n\r\n=> https://huggingface.co/\r\n\r\nThanks so much for your ideas in this thread, they were super helpful. \r\n\r\nI'll close this particular issue as \"completed\", but would love your feedback and ideas for the next steps\r\n\r\ncc @LysandreJik @thomwolf "
] | 1,579 | 1,585 | 1,585 | NONE | null | ## π Feature
It would be nice to have more room for descriptions from the contributors of shared models since at the moment one can only guess from the title as to what the model does and what was improved from existing models.
Additionally, ways of interactions with the contributors such as comments and upvotes would be helpful in further improving these models. This would be a good indication of what models work well, and contributors can clarify how the model was trained so the same work doesn't have to be done twice
## Motivation
While looking for an AlbertForQuestionAnwering model, I discovered that transformers doesn't provide a pretrained model at the moment (see (https://github.com/huggingface/transformers/issues/1979)) even though it is mentioned here (https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering)
Within the shared models, I tried out 2 models that mention albert and squad, but I couldn't get either to run on this simple example:
```
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2", cache_dir=model_directory)
model = AutoModel.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2", cache_dir=model_directory)`
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_ids = tokenizer.encode(question, text)
print(input_ids)
token_type_ids = [0 if i <= input_ids.index(3) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
````
The example runs but doesn't give me an answer. Same with replydotai/albert-xxlarge-v1-finetuned-squad2
Now, I am not sure if I just made a mistake or if the model does not do question answering. Could it be that these models were only finetuned on the language in the squad dataset? But that wouldn't make a lot of sense.
Having a better description of the models could have helped and in a comment section, I could have talked directly to the contributor.
I would appreciate any help in getting a Albert for question answering running.
Thanks!
## Additional context
Python 3.7.3
transformers 2.3.0
Pytorch 1.3.1
<!-- Add any other context or screenshots about the feature request here. -->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2520/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2520/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2519 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2519/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2519/comments | https://api.github.com/repos/huggingface/transformers/issues/2519/events | https://github.com/huggingface/transformers/issues/2519 | 549,585,017 | MDU6SXNzdWU1NDk1ODUwMTc= | 2,519 | Does calling fit() method on TFBertForSequenceClassification change the weights of internal pre-trained bert? | {
"login": "hamidreza-ghader",
"id": 8751979,
"node_id": "MDQ6VXNlcjg3NTE5Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8751979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hamidreza-ghader",
"html_url": "https://github.com/hamidreza-ghader",
"followers_url": "https://api.github.com/users/hamidreza-ghader/followers",
"following_url": "https://api.github.com/users/hamidreza-ghader/following{/other_user}",
"gists_url": "https://api.github.com/users/hamidreza-ghader/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hamidreza-ghader/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hamidreza-ghader/subscriptions",
"organizations_url": "https://api.github.com/users/hamidreza-ghader/orgs",
"repos_url": "https://api.github.com/users/hamidreza-ghader/repos",
"events_url": "https://api.github.com/users/hamidreza-ghader/events{/privacy}",
"received_events_url": "https://api.github.com/users/hamidreza-ghader/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I can't tell for sure about the TF version, but I would assume it's the same as the one in PyTorch, in which case yes: all weights are changed. You can freeze layers, though. Here (for PyTorch nn.Module) only freezing the embeddings:\r\n\r\n```python\r\nbert = BertModel.from_pretrained('bert-base-uncased')\r\nfor name, param in bert.named_parameters(): \r\n if name.startswith('embeddings'):\r\n param.requires_grad = False\r\n```",
"If your problem is solved, please close this.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,579 | 1,584 | 1,584 | NONE | null | Hi all,
Let's say, I have a TFBertForSequenceClassification object and I call fit method on it. Does it change the weights of the internal TFBertMainLayer too or it only trains the weights of the Dropout and the classifier Dense layers?
Best | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2519/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2518 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2518/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2518/comments | https://api.github.com/repos/huggingface/transformers/issues/2518/events | https://github.com/huggingface/transformers/issues/2518 | 549,493,220 | MDU6SXNzdWU1NDk0OTMyMjA= | 2,518 | Type of Training file needed for finetuning | {
"login": "ankush20m",
"id": 45195876,
"node_id": "MDQ6VXNlcjQ1MTk1ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/45195876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankush20m",
"html_url": "https://github.com/ankush20m",
"followers_url": "https://api.github.com/users/ankush20m/followers",
"following_url": "https://api.github.com/users/ankush20m/following{/other_user}",
"gists_url": "https://api.github.com/users/ankush20m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankush20m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankush20m/subscriptions",
"organizations_url": "https://api.github.com/users/ankush20m/orgs",
"repos_url": "https://api.github.com/users/ankush20m/repos",
"events_url": "https://api.github.com/users/ankush20m/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankush20m/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, currently the `run_lm_finetuning` script does not take into account the line returns to split the data. It splits the data according to the maximum length the model will allow (which is 512 tokens for BERT), as it is generally used to fine-tune a model on a lengthy text corpus.\r\n\r\nIf you want to do a line by line split, you could modify the `TextDataset` so that it constructs a dataset like the one you want (creating examples from line returns).",
"> Hi, currently the `run_lm_finetuning` script does not take into account the line returns to split the data. It splits the data according to the maximum length the model will allow (which is 512 tokens for BERT), as it is generally used to fine-tune a model on a lengthy text corpus.\r\n> \r\n> If you want to do a line by line split, you could modify the `TextDataset` so that it constructs a dataset like the one you want (creating examples from line returns).\r\n\r\n@LysandreJik Thanks a lot. Could you suggest any other way how can I finetune the BERT model with unlabeled data i.e. only with a text file containing sentences?",
"Re. a `LineByLineTextDataset`, you could take a look at the implementation in https://github.com/huggingface/transformers/pull/2570 (should be merged to master soon).\r\n\r\nHowever, a 32 sentences dataset is very, very small, even for finetuning.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,585 | 1,585 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
What kind of **training text** file needed for `run_lm_finetuning.py` script. I have created the **text file** in which I have put **line by line sentences**.
Is this format correct for finetuning?
Because I want to finetune the **BERT** with **unlabeled data** i.e. unsupervised training.
In may **_train.txt_** file I have **total 32 sentences**, but while running the script, it shows:
**Num examples = 5**
What's going wrong here?
Below is my command:
```
!python transformers/examples/run_lm_finetuning.py \
--output_dir=./output \
--model_type=bert \
--model_name_or_path=bert-base-uncased \
--config_name=./custom \
--do_train \
--train_data_file=./train.txt \
--do_eval \
--eval_data_file=./test.txt \
--do_lower_case \
--learning_rate=5e-5 \
--num_train_epochs=10 \
--warmup_steps=0 \
--overwrite_output_dir \
--per_gpu_train_batch_size=1 \
--per_gpu_eval_batch_size=1 \
--mlm
```
Below is the information came while running script on which I need clarification:
```
01/14/2020 10:54:53 - INFO - __main__ - Loading features from cached file ./bert-base-uncased_cached_lm_510_train.txt
01/14/2020 10:54:53 - INFO - __main__ - ***** Running training *****
01/14/2020 10:54:53 - INFO - __main__ - Num examples = 5
01/14/2020 10:54:53 - INFO - __main__ - Num Epochs = 10
01/14/2020 10:54:53 - INFO - __main__ - Instantaneous batch size per GPU = 1
01/14/2020 10:54:53 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 1
01/14/2020 10:54:53 - INFO - __main__ - Gradient Accumulation steps = 1
01/14/2020 10:54:53 - INFO - __main__ - Total optimization steps = 50
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2518/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2517 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2517/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2517/comments | https://api.github.com/repos/huggingface/transformers/issues/2517/events | https://github.com/huggingface/transformers/issues/2517 | 549,460,271 | MDU6SXNzdWU1NDk0NjAyNzE= | 2,517 | Save only Bert Model after training a Sequence Classification Task/ LM finetuning Task. | {
"login": "rahulbaburaj",
"id": 8917417,
"node_id": "MDQ6VXNlcjg5MTc0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rahulbaburaj",
"html_url": "https://github.com/rahulbaburaj",
"followers_url": "https://api.github.com/users/rahulbaburaj/followers",
"following_url": "https://api.github.com/users/rahulbaburaj/following{/other_user}",
"gists_url": "https://api.github.com/users/rahulbaburaj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rahulbaburaj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahulbaburaj/subscriptions",
"organizations_url": "https://api.github.com/users/rahulbaburaj/orgs",
"repos_url": "https://api.github.com/users/rahulbaburaj/repos",
"events_url": "https://api.github.com/users/rahulbaburaj/events{/privacy}",
"received_events_url": "https://api.github.com/users/rahulbaburaj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"By saving only BERT, do you mean saving only the transformer and not the classification layer as well?",
"@rahulbaburaj you can use the code snippet below, change the 'bert-base-uncased' to your fine-tuned model directory.\r\n\r\n```python\r\n# load config\r\nconf = BertConfig.from_pretrained('bert-base-uncased', num_labels=2)\r\n# load a sequence model\r\nbsm = BertForTokenClassification.from_pretrained('bert-base-uncased', config=conf)\r\n# get bert core model\r\nbcm = bsm.bert\r\n# save the core model\r\nbcm.save_pretrained('the output directory path')\r\n# you also need to save your tokenizer in the same directory\r\n```",
"@FacingBugs Thank you. "
] | 1,578 | 1,579 | 1,579 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
1) How do I save only the BERT Model after finetuning on a Sequence Classification Task/ LM finetuning Task
2) How to load only BERT Model from a saved model trained on Sequence Classification Task/ LM finetuning Task | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2517/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2516 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2516/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2516/comments | https://api.github.com/repos/huggingface/transformers/issues/2516/events | https://github.com/huggingface/transformers/pull/2516 | 549,304,736 | MDExOlB1bGxSZXF1ZXN0MzYyNDIxOTQ5 | 2,516 | update | {
"login": "hjc3613",
"id": 37894838,
"node_id": "MDQ6VXNlcjM3ODk0ODM4",
"avatar_url": "https://avatars.githubusercontent.com/u/37894838?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hjc3613",
"html_url": "https://github.com/hjc3613",
"followers_url": "https://api.github.com/users/hjc3613/followers",
"following_url": "https://api.github.com/users/hjc3613/following{/other_user}",
"gists_url": "https://api.github.com/users/hjc3613/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hjc3613/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hjc3613/subscriptions",
"organizations_url": "https://api.github.com/users/hjc3613/orgs",
"repos_url": "https://api.github.com/users/hjc3613/repos",
"events_url": "https://api.github.com/users/hjc3613/events{/privacy}",
"received_events_url": "https://api.github.com/users/hjc3613/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2516?src=pr&el=h1) Report\n> Merging [#2516](https://codecov.io/gh/huggingface/transformers/pull/2516?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/51d2683fdcffc03f79dcbdc373628d449d1a0385?src=pr&el=desc) will **decrease** coverage by `12.73%`.\n> The diff coverage is `24.33%`.\n\n[](https://codecov.io/gh/huggingface/transformers/pull/2516?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #2516 +/- ##\n===========================================\n- Coverage 85.98% 73.25% -12.74% \n===========================================\n Files 91 87 -4 \n Lines 13579 15010 +1431 \n===========================================\n- Hits 11676 10995 -681 \n- Misses 1903 4015 +2112\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/2516?src=pr&el=tree) | Coverage Ξ | |\n|---|---|---|\n| [src/transformers/tokenization\\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `93.93% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/tokenization\\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/configuration\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/tokenization\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZGlzdGlsYmVydC5weQ==) | `100% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/modeling\\_tf\\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `32.14% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/configuration\\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.7% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/modeling\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `64.25% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/tokenization\\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `100% <ΓΈ> (ΓΈ)` | |\n| [src/transformers/configuration\\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100% <ΓΈ> (ΓΈ)` | |\n| ... and [164 more](https://codecov.io/gh/huggingface/transformers/pull/2516/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/2516?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/2516?src=pr&el=footer). Last update [51d2683...f924594](https://codecov.io/gh/huggingface/transformers/pull/2516?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,578 | 1,579 | 1,579 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2516/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/2516",
"html_url": "https://github.com/huggingface/transformers/pull/2516",
"diff_url": "https://github.com/huggingface/transformers/pull/2516.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/2516.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/2515 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2515/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2515/comments | https://api.github.com/repos/huggingface/transformers/issues/2515/events | https://github.com/huggingface/transformers/issues/2515 | 549,300,414 | MDU6SXNzdWU1NDkzMDA0MTQ= | 2,515 | How to use transformers to convert batch sentences into word vectors??? | {
"login": "duan348733684",
"id": 26431015,
"node_id": "MDQ6VXNlcjI2NDMxMDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/26431015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/duan348733684",
"html_url": "https://github.com/duan348733684",
"followers_url": "https://api.github.com/users/duan348733684/followers",
"following_url": "https://api.github.com/users/duan348733684/following{/other_user}",
"gists_url": "https://api.github.com/users/duan348733684/gists{/gist_id}",
"starred_url": "https://api.github.com/users/duan348733684/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/duan348733684/subscriptions",
"organizations_url": "https://api.github.com/users/duan348733684/orgs",
"repos_url": "https://api.github.com/users/duan348733684/repos",
"events_url": "https://api.github.com/users/duan348733684/events{/privacy}",
"received_events_url": "https://api.github.com/users/duan348733684/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm sorry but this is not how you should ask questions to begin with, second it is very general. There are tons of tutorials about this kind of stuff. You can have a look at a notebook that I made. It shows you how to get a feature vector for your input sentence. https://github.com/BramVanroy/bert-for-inference/blob/master/introduction-to-bert.ipynb\r\n\r\nIf that doesn't help you, post a question on a website like Stack Overflow or Google.",
"> I'm sorry but this is not how you should ask questions to begin with, second it is very general. There are tons of tutorials about this kind of stuff. You can have a look at a notebook that I made. It shows you how to get a feature vector for your input sentence. https://github.com/BramVanroy/bert-for-inference/blob/master/introduction-to-bert.ipynb\r\n> \r\n> If that doesn't help you, post a question on a website like Stack Overflow or Google.\r\n\r\nthank you! I understand what you mean. I can use transformers to convert a single sentence into feature vector. Following:\r\n\r\nstring = \"I like the girl\"\r\ntokens = tokenizer.tokenize(string)\r\nids = tokenizer.convert_tokens_to_ids(tokens)\r\ntokens_tensor = torch.tensor([ids])\r\nwith torch.no_grad():\r\n outputs = model(tokens_tensor )\r\n\r\nBut if I want to get feature vector about [\"I like the girl\", \"post a quesetion on a website\", \"I often use facebook\"] at once. Here are three sentences. How to use transformer? This is my main question.\r\n",
"You are not hearing what I am saying. You should ask this type of question on Stack Overflow and tag it with huggingface-transformers because it is a _general_ question. Below you can find a basic approach. Please close this question and direct your future questions like this to SO.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import BertModel, BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\nmodel.eval()\r\n\r\ntext = ['I like cookies.', 'Do you like cookies?']\r\nencoded = tokenizer.batch_encode_plus(text, return_tensors='pt', add_special_tokens=True)\r\nprint(encoded)\r\n# {'input_ids': tensor([[ 101, 1045, 2066, 16324, 1012, 102, 0],\r\n [ 101, 2079, 2017, 2066, 16324, 1029, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 1],\r\n [0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1],\r\n [1, 1, 1, 1, 1, 1, 1]])}\r\n\r\nwith torch.no_grad():\r\n out = model(**encoded)\r\n\r\nprint(out[0].size())\r\n# torch.Size([2, 7, 768])\r\n\r\n```",
"> You are not hearing what I am saying. You should ask this type of question on Stack Overflow and tag it with huggingface-transformers because it is a _general_ question. Below you can find a basic approach. Please close this question and direct your future questions like this to SO.\r\n> \r\n> ```python\r\n> import torch\r\n> from transformers import BertModel, BertTokenizer\r\n> \r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = BertModel.from_pretrained('bert-base-uncased')\r\n> model.eval()\r\n> \r\n> text = ['I like cookies.', 'Do you like cookies?']\r\n> encoded = tokenizer.batch_encode_plus(text, return_tensors='pt', add_special_tokens=True)\r\n> print(encoded)\r\n> # {'input_ids': tensor([[ 101, 1045, 2066, 16324, 1012, 102, 0],\r\n> [ 101, 2079, 2017, 2066, 16324, 1029, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 1],\r\n> [0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1],\r\n> [1, 1, 1, 1, 1, 1, 1]])}\r\n> \r\n> with torch.no_grad():\r\n> out = model(**encoded)\r\n> \r\n> print(out[0].size())\r\n> # torch.Size([2, 7, 768])\r\n> ```\r\n\r\nthank you for your guidance!!!! sorry, I'm just a high school student from Zimbabwe.. I'll close it right now"
] | 1,578 | 1,579 | 1,579 | NONE | null | ## β Questions & Help
<!-- A clear and concise description of the question. -->
How to use transformers to convert batch sentences into word vectors???
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2515/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2514 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2514/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2514/comments | https://api.github.com/repos/huggingface/transformers/issues/2514/events | https://github.com/huggingface/transformers/issues/2514 | 549,127,868 | MDU6SXNzdWU1NDkxMjc4Njg= | 2,514 | T5 Masked LM -- pre-trained model import? | {
"login": "moscow25",
"id": 1473764,
"node_id": "MDQ6VXNlcjE0NzM3NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1473764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moscow25",
"html_url": "https://github.com/moscow25",
"followers_url": "https://api.github.com/users/moscow25/followers",
"following_url": "https://api.github.com/users/moscow25/following{/other_user}",
"gists_url": "https://api.github.com/users/moscow25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moscow25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moscow25/subscriptions",
"organizations_url": "https://api.github.com/users/moscow25/orgs",
"repos_url": "https://api.github.com/users/moscow25/repos",
"events_url": "https://api.github.com/users/moscow25/events{/privacy}",
"received_events_url": "https://api.github.com/users/moscow25/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I found a solution https://github.com/huggingface/transformers/issues/3985#issue-606998741"
] | 1,578 | 1,665 | 1,584 | CONTRIBUTOR | null | ## β Questions & Help
Hi, thanks for merging the T5 model! However it is not clear to me how to use the pretrained model for masked language modeling. It appears that the model example only returns a hidden state, or `T5WithLMHeadModel` which is not clear what this is doing -- it tends to return the same token for me in all locations.
My understanding of the T5 paper was that one could add input tags like `<extra_id_0>` and receive multi-token masked responses in the decoder. Has this functionality been replicated in the codebase? And if not, do you think it possible to add it -- or do you have pointers to the community to try to add this ourselves?
Unfortunately, the T5 documentation for this use case is also not great. Which is too bad because multi-token masked responses seems like a great feature of the T5 model. Testing and inference for masked language modeling is simple for BERT and variants, but does not support multi-token responses.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2514/reactions",
"total_count": 8,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 8
} | https://api.github.com/repos/huggingface/transformers/issues/2514/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/2513 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/2513/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/2513/comments | https://api.github.com/repos/huggingface/transformers/issues/2513/events | https://github.com/huggingface/transformers/issues/2513 | 549,061,548 | MDU6SXNzdWU1NDkwNjE1NDg= | 2,513 | Error in AlbertForMaskedLM with add_tokens and model.resize_token_embeddings | {
"login": "mckunkel",
"id": 9967035,
"node_id": "MDQ6VXNlcjk5NjcwMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9967035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mckunkel",
"html_url": "https://github.com/mckunkel",
"followers_url": "https://api.github.com/users/mckunkel/followers",
"following_url": "https://api.github.com/users/mckunkel/following{/other_user}",
"gists_url": "https://api.github.com/users/mckunkel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mckunkel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mckunkel/subscriptions",
"organizations_url": "https://api.github.com/users/mckunkel/orgs",
"repos_url": "https://api.github.com/users/mckunkel/repos",
"events_url": "https://api.github.com/users/mckunkel/events{/privacy}",
"received_events_url": "https://api.github.com/users/mckunkel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hi, I've pushed a fix that was just merged in `master`. Could you please try and install from source:\r\n```py\r\npip install git+https://github.com/huggingface/transformers\r\n```\r\nand tell me if you face the same error?",
"Greetings, \r\nThanks for the reply. \r\nI do not get the same error anymore, I get a different error.\r\n\r\n> RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/ATen/native/cuda/Normalization.cuh:581\r\n\r\nHere is a full stack trace if it helps.\r\n\r\n> Epoch: 0%| | 0/10 [00:00<?, ?it/s/\r\ncode/src/pretrain_roberta.py:93: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n return torch.tensor(self.examples[item])\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\nTHCudaCheck FAIL file=/pytorch/aten/src/ATen/native/cuda/Normalization.cuh line=581 error=710 : device-side assert triggered\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.\r\n...\r\n...\r\n...\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py\", line 152, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py\", line 162, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py\", line 85, in parallel_apply\r\n output.reraise()\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/_utils.py\", line 385, in reraise\r\n raise self.exc_type(msg)\r\nRuntimeError: Caught RuntimeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py\", line 60, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_albert.py\", line 659, in forward\r\n prediction_scores = self.predictions(sequence_outputs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/modeling_albert.py\", line 588, in forward\r\n hidden_states = self.LayerNorm(hidden_states)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 541, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/normalization.py\", line 153, in forward\r\n input, self.normalized_shape, self.weight, self.bias, self.eps)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 1696, in layer_norm\r\n torch.backends.cudnn.enabled)\r\nRuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/ATen/native/cuda/Normalization.cuh:581",
"Intuitively I would say it has to do with a Cross Entropy having it's `ignore index` set to -1. We have recently updated all our Cross Entropy methods to be set to a default of -100 like the official PyTorch default.\r\n\r\nWould you mind checking if you don't have a something similar in your code? If you're using one of our hosted scripts, you can simply take the updated version of the script which is updated as we update source code.",
"@LysandreJik Thank you ! i changed the -100 to -1 ,then the program works.",
"Hi @LysandreJik \r\n\r\nAfter fixing the modeling_bert.py file, now I can successfully add new tokens and train rm_lm_finetuning file using one gpu. However, when I try to allocate 2 gpus, an error came out below, any thoughts?\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_lm_finetuning.py\", line 723, in <module>\r\n main()\r\n File \"run_lm_finetuning.py\", line 673, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"run_lm_finetuning.py\", line 317, in train\r\n loss.backward()\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/tensor.py\", line 107, in backward\r\n torch.autograd.backward(self, gradient, retain_graph, create_graph)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/autograd/__init__.py\", line 93, in backward\r\n allow_unreachable=True) # allow_unreachable flag\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/autograd/function.py\", line 77, in apply\r\n return self._forward_cls.backward(self, *args)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/_functions.py\", line 32, in backward\r\n return (None,) + ReduceAddCoalesced.apply(ctx.input_device, ctx.num_inputs, *grad_outputs)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/_functions.py\", line 43, in forward\r\n return comm.reduce_add_coalesced(grads, destination)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/cuda/comm.py\", line 121, in reduce_add_coalesced\r\n flat_result = reduce_add(flat_tensors, destination)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/cuda/comm.py\", line 77, in reduce_add\r\n nccl.reduce(inputs, outputs, root=nccl_root)\r\n File \"/opt/conda/lib/python3.6/site-packages/torch/cuda/nccl.py\", line 51, in reduce\r\n torch._C._nccl_reduce(inputs, outputs, root, op, streams, comms)\r\nRuntimeError: NCCL Error 2: unhandled system error\r\n```\r\n\r\nThank you in advance for your help.\r\n",
"Hi @jasonwu0731, do you mind opening a new issue with your problem, detailing your environment (python, pytorch, transformers versions) following the `bug` issue template?\r\n\r\nThank you ",
"@LysandreJik is it the same issue as #2373 ? Is the issue resolved?\r\nThank you for your help. \r\n",
"Greetings, \r\nI have had issues getting the fix to work, however I think the issue is on my end and I have been slowly investigating it.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,578 | 1,585 | 1,585 | NONE | null | ## π Bug
Model I am using: Albert & Bert
Language I am using the model on English
The problem arise when using:
* [X] the official example scripts: run_lm_finetuning.py
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: mlm
## To Reproduce
Steps to reproduce the behavior:
1. add following lines after line 244 in run_lm_finetuning
`tokenizer.add_tokens(['mewhomp', 'skype', 'kiltrim'])
model.resize_token_embeddings(len(tokenizer))
`
Error
> RuntimeError: The size of tensor a (30003) must match the size of tensor b (30000) at non-singleton dimension 2
## Expected behavior
Add the 3 additional tokens and train
## Environment
* OS: Ubuntu 16.04
* Python version: 3.6.9
* PyTorch version: 1.3.1
* PyTorch Transformers version (or branch): All branches with Albert. All version of pip install that supports Albert
* Using GPU ? yes
* Distributed or parallel setup ? Yes
* Any other relevant information:
## Additional context
This is similar to issues
[2373](https://github.com/huggingface/transformers/issues/2373)
[2468](https://github.com/huggingface/transformers/issues/2468)
[2480](https://github.com/huggingface/transformers/issues/2480)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/2513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/2513/timeline | completed | null | null |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.