url
stringlengths 62
66
| repository_url
stringclasses 1
value | labels_url
stringlengths 76
80
| comments_url
stringlengths 71
75
| events_url
stringlengths 69
73
| html_url
stringlengths 50
56
| id
int64 377M
2.15B
| node_id
stringlengths 18
32
| number
int64 1
29.2k
| title
stringlengths 1
487
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 2
classes | assignee
dict | assignees
list | comments
sequence | created_at
int64 1.54k
1.71k
| updated_at
int64 1.54k
1.71k
| closed_at
int64 1.54k
1.71k
⌀ | author_association
stringclasses 4
values | active_lock_reason
stringclasses 2
values | body
stringlengths 0
234k
⌀ | reactions
dict | timeline_url
stringlengths 71
75
| state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/809 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/809/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/809/comments | https://api.github.com/repos/huggingface/transformers/issues/809/events | https://github.com/huggingface/transformers/issues/809 | 469,429,682 | MDU6SXNzdWU0Njk0Mjk2ODI= | 809 | Problem loading finetuned XLNet model | {
"login": "igormis",
"id": 6599037,
"node_id": "MDQ6VXNlcjY1OTkwMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6599037?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/igormis",
"html_url": "https://github.com/igormis",
"followers_url": "https://api.github.com/users/igormis/followers",
"following_url": "https://api.github.com/users/igormis/following{/other_user}",
"gists_url": "https://api.github.com/users/igormis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/igormis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/igormis/subscriptions",
"organizations_url": "https://api.github.com/users/igormis/orgs",
"repos_url": "https://api.github.com/users/igormis/repos",
"events_url": "https://api.github.com/users/igormis/events{/privacy}",
"received_events_url": "https://api.github.com/users/igormis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What task did you fine-tuned it on?\r\nYou can convert it by running the `convert_xlnet_checkpoint_to_pytorch.py` script with a `--finetuning_task` argument (see [here](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/convert_xlnet_checkpoint_to_pytorch.py#L94-L97))\r\n",
"binary classification (sentiment) on my dataset\r\nI tried the following\r\n`import torch\r\nfrom pytorch_transformers import XLNetConfig\r\nfrom pytorch_transformers import XLNetTokenizer\r\nfrom pytorch_transformers import XLNetModel\r\nconfig = XLNetConfig.from_pretrained('./')\r\ntokenizer = XLNetTokenizer.from_pretrained('./')\r\nmodel = XLNetModel(config)\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\"Apple stocks increase\")).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0]`\r\nhowever, the output is not binary, the output tensor is :\r\ntensor([[[-1.0645, -1.1170, -0.5549, ..., 0.7089, 0.2862, -0.2995],\r\n [ 0.0198, -1.6598, -0.1760, ..., 0.1167, -0.1545, 0.1777],\r\n [ 0.6514, -0.5614, -1.2180, ..., 1.0796, 1.1217, 1.0362]]],\r\n grad_fn=<PermuteBackward>)\r\n\r\n",
"the converting was done using:\r\n`pytorch_transformers xlnet $TRANSFO_XL_CHECKPOINT_PATH $TRANSFO_XL_CONFIG_PAH $PYTORCH_DUMP_OUTPUT . `\r\n\r\nI did not specified the task, because I though that it is not important.",
"It is important to match the last (classification) layer otherwise the conversion will fail.\r\nIf you give me more information on your TF training (like which script you usde for instance) I may be able to help.",
"I was using the:\r\n`train_command = \"python xlnet/run_classifier.py \\\r\n --do_train=True \\\r\n --do_eval=True \\\r\n --eval_all_ckpt=True \\\r\n --task_name=spam \\\r\n --data_dir=\"+DATA_DIR+\" \\\r\n --output_dir=\"+OUTPUT_DIR+\" \\\r\n --model_dir=\"+CHECKPOINT_DIR+\" \\\r\n --uncased=False \\\r\n --spiece_model_file=\"+PRETRAINED_MODEL_DIR+\"/spiece.model \\\r\n --model_config_path=\"+PRETRAINED_MODEL_DIR+\"/xlnet_config.json \\\r\n --init_checkpoint=\"+PRETRAINED_MODEL_DIR+\"/xlnet_model.ckpt \\\r\n --max_seq_length=128 \\\r\n --train_batch_size=8 \\\r\n --eval_batch_size=8 \\\r\n --num_hosts=1 \\\r\n --num_core_per_host=1 \\\r\n --learning_rate=2e-5 \\\r\n --train_steps=4000 \\\r\n --warmup_steps=500 \\\r\n --save_steps=500 \\\r\n --iterations=500\"\r\n\r\n! {train_command}`\r\nIt is similar to the imdb sentiment task\r\nThe colab is here\r\nhttps://colab.research.google.com/drive/1nfWEEDxPOE8myb-hwGdoXs9nVXM3AVcz",
"I have modified the:\r\nclass ImdbProcessor(DataProcessor):",
"Here is the model after model.eval():\r\n`XLNetModel(\r\n (word_embedding): Embedding(32000, 1024)\r\n (layer): ModuleList(\r\n (0): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (1): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (2): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (3): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (4): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (5): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (6): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (7): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (8): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (9): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (10): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (11): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (12): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (13): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (14): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (15): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (16): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (17): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (18): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (19): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (20): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (21): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (22): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (23): XLNetLayer(\r\n (rel_attn): XLNetRelativeAttention(\r\n (layer_norm): XLNetLayerNorm()\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (ff): XLNetFeedForward(\r\n (layer_norm): XLNetLayerNorm()\r\n (layer_1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (layer_2): Linear(in_features=4096, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1)\r\n )\r\n (dropout): Dropout(p=0.1)\r\n )\r\n )\r\n (dropout): Dropout(p=0.1)\r\n)\r\n`",
"You'll have to modify the conversion script because this is not a standard task and the number of labels won't be found in the list (see top of the conversion script) you should add `'spam': 2` in the list (if you have two labels indeed).\r\n\r\nThe other option is to directly load the TF model in PyTorch and save the pytorch model afterwards with something like this:\r\n```\r\nconfig = XLNetConfig.from_pretrained('xlnet-large-cased', num_labels=2, finetuning_task='spam')\r\nmodel = XLNetForSequenceClassification.from_pretrained('path/to/your/tf/model.ckpt.index', config=config, from_tf=True)\r\nmodel.save_pretrained('pytorch_model_saving_directory')\r\n```",
"Perfect, tnx.\r\nJust a minor question if I have classification model with 4 labels, I do the same changes spam:4?",
"Yes, change my `2` to `4`",
"Hi Thom,\r\nI did:\r\n`import torch\r\nfrom pytorch_transformers import XLNetConfig\r\nfrom pytorch_transformers import XLNetForSequenceClassification\r\nconfig = XLNetConfig.from_pretrained('xlnet-large-cased', num_labels=2, finetuning_task='sentiment')\r\nmodel = XLNetForSequenceClassification.from_pretrained('../model.ckpt-2500', config=config, from_tf=True)\r\nmodel.save_pretrained('./test')`\r\n\r\nAfterwards I get in the test folder the config.json and pytorch_model.bin.\r\n\r\nI tried to run:\r\n'import torch\r\nfrom pytorch_transformers import XLNetConfig\r\nfrom pytorch_transformers import XLNetTokenizer\r\nfrom pytorch_transformers import XLNetModel\r\nconfig = XLNetConfig.from_pretrained('./')\r\ntokenizer = XLNetTokenizer.from_pretrained('./')\r\nmodel = XLNetModel(config)\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\"Apple stocks increase\")).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0]\r\nprint(last_hidden_states)\r\n\r\n\r\nThere was an error:\r\nModel name './' was not found in model name list (xlnet-base-cased, xlnet-large-cased). We assumed './' was a path or url but couldn't find tokenizer filesat this path or url.\r\n\r\nSo I included the spiece.model into the same test folder, but still the results is:\r\ntensor([[[ 0.0156, -0.9046, 0.9789, ..., -0.7764, 1.1309, 0.2862],\r\n [ 0.4416, 0.0665, 2.0020, ..., 0.4117, 0.9779, -1.0588],\r\n [ 0.6908, -0.0000, 0.1479, ..., 0.0032, 0.9871, -0.9482]]],\r\n grad_fn=<PermuteBackward>)\r\n\r\nDo u know what I am doing wrong? ",
"While converting this is some of the warnings:\r\nWeights not copied to PyTorch model: beta1_power, beta2_power, global_step, model/classification_finsent/logit/bias, model/classification_finsent/logit/bias/Adam, model/classification_finsent/logit/bias/Adam_1, model/classification_finsent/logit/kernel, model/classification_finsent/logit/kernel/Adam, model/classification_finsent/logit/kernel/Adam_1",
"In addition, I have tried the other approach, to modify the script, but the results are the same. I think that the problem is that the weights from the classification_finsent are not copied to pytorch model. Any suggestions on this",
"I have tried also in this way:\r\n`import torch\r\nfrom pytorch_transformers import XLNetConfig\r\nfrom pytorch_transformers import XLNetTokenizer\r\nfrom pytorch_transformers import XLNetForSequenceClassification\r\n\r\nconfig = XLNetConfig.from_pretrained('/home/igor/Falcon/XLNet/sentiment_model/PyTorch/pytorch_transformer_script/test')\r\ntokenizer = XLNetTokenizer.from_pretrained('/home/igor/Falcon/XLNet/sentiment_model/PyTorch/pytorch_transformer_script/test')\r\nconfig.output_hidden_states=True\r\nmodel = XLNetForSequenceClassification(config)\r\n\r\ninput_ids = torch.tensor(tokenizer.encode(\"Apple stocks increase rapidly\")).unsqueeze(0) # Batch size 1\r\nlabels = torch.tensor([1]).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids, labels=labels)\r\nprint(outputs[0])`\r\n\r\nand the output now is:\r\ntensor(0.3944, grad_fn=<NllLossBackward>)\r\n",
"In the last attempt I did:\r\n```python\r\nimport torch\r\nfrom pytorch_transformers import XLNetConfig\r\nfrom pytorch_transformers import XLNetTokenizer\r\nfrom pytorch_transformers import XLNetForSequenceClassification\r\n\r\nconfig = XLNetConfig.from_pretrained('xlnet-large-cased', num_labels=2, finetuning_task='finsent')\r\ntokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased') #I am using the default tokenizer\r\n\r\nmodel = XLNetForSequenceClassification.from_pretrained('/home/igor/Falcon/XLNet/sentiment_model/model.ckpt-2500',config=config, from_tf=True) #this is my finetuned model\r\nmodel.eval() #in evaluation mode\r\ninput_ids = torch.tensor(tokenizer.encode(\"Apple stock increase and they are overwhelmed with the success\")).unsqueeze(0) # Batch size 1\r\nlabels = torch.tensor([1]).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids, labels=labels)\r\nloss, logits = outputs[:2]\r\nprint(torch.nn.functional.softmax(logits.data))\r\n```\r\n\r\nHowever I am getting different prediction for the same input data, I did model.eval() to stop the dropout, but still the inference looks random.\r\n\r\nIn addition here is my config file printed:\r\n```json\r\n{\r\n \"attn_type\": \"bi\",\r\n \"bi_data\": false,\r\n \"clamp_len\": -1,\r\n \"d_head\": 64,\r\n \"d_inner\": 4096,\r\n \"d_model\": 1024,\r\n \"dropatt\": 0.1,\r\n \"dropout\": 0.1,\r\n \"end_n_top\": 5,\r\n \"ff_activation\": \"gelu\",\r\n \"finetuning_task\": \"finsent\",\r\n \"init\": \"normal\",\r\n \"init_range\": 0.1,\r\n \"init_std\": 0.02,\r\n \"initializer_range\": 0.02,\r\n \"layer_norm_eps\": 1e-12,\r\n \"max_position_embeddings\": 512,\r\n \"mem_len\": null,\r\n \"n_head\": 16,\r\n \"n_layer\": 24,\r\n \"n_token\": 32000,\r\n \"num_labels\": 2,\r\n \"output_attentions\": false,\r\n \"output_hidden_states\": false,\r\n \"reuse_len\": null,\r\n \"same_length\": false,\r\n \"start_n_top\": 5,\r\n \"summary_activation\": \"tanh\",\r\n \"summary_last_dropout\": 0.1,\r\n \"summary_type\": \"last\",\r\n \"summary_use_proj\": true,\r\n \"torchscript\": false,\r\n \"untie_r\": true\r\n}\r\n```\r\nIt looks that I getting closer, but still the inference is strange",
"@thomwolf I finally succeed to import the checkpoint model and infer. Still I am not sure if this a valid approach:\r\n```python\r\nimport torch\r\nfrom pytorch_transformers import XLNetConfig\r\nfrom pytorch_transformers import XLNetTokenizer\r\nfrom pytorch_transformers import XLNetForSequenceClassification\r\n\r\n\r\nseed = 0\r\ntorch.manual_seed(seed)\r\nif torch.cuda.is_available():\r\n\ttorch.cuda.manual_seed_all(seed)\r\n#load the initial config file from the XLNet model\r\nconfig = XLNetConfig.from_pretrained('xlnet_config.json', num_labels=2, finetuning_task='finsent')\r\n#the tokenizer I am using is the initial one (spiece.model)\r\ntokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')\r\n\r\nmodel = XLNetForSequenceClassification.from_pretrained('/sentiment_over_imdb/model.ckpt-4000',config=config, from_tf=True)\r\nmodel.eval()\r\n\r\ndef sentiment(data):\r\n\tinput_ids = torch.tensor(tokenizer.encode(data)).unsqueeze(0) # Batch size 1\r\n\tlabels = torch.tensor([1]).unsqueeze(0) # Batch size 1\r\n\r\n\toutputs = model(input_ids, labels=labels)\r\n\tloss, logits = outputs[:2]\r\n\toutput = torch.nn.functional.softmax(logits.data, dim=1)\r\n\tsent = output.tolist()[0][1]\r\n return sent\r\n```"
] | 1,563 | 1,564 | 1,564 | NONE | null | After fine-tuning an XLNet classification model and obtaining TF checkpoints I converted the checkpoint to pytorch_model.bin and config.json. I need to make prediction on input text, but I have problems loading the models correctly. Any help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/809/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/808 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/808/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/808/comments | https://api.github.com/repos/huggingface/transformers/issues/808/events | https://github.com/huggingface/transformers/issues/808 | 469,380,474 | MDU6SXNzdWU0NjkzODA0NzQ= | 808 | GPT2 model does not have attention mask | {
"login": "Saner3",
"id": 30628796,
"node_id": "MDQ6VXNlcjMwNjI4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/30628796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saner3",
"html_url": "https://github.com/Saner3",
"followers_url": "https://api.github.com/users/Saner3/followers",
"following_url": "https://api.github.com/users/Saner3/following{/other_user}",
"gists_url": "https://api.github.com/users/Saner3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saner3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saner3/subscriptions",
"organizations_url": "https://api.github.com/users/Saner3/orgs",
"repos_url": "https://api.github.com/users/Saner3/repos",
"events_url": "https://api.github.com/users/Saner3/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saner3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, I will remove this doctring, there is no attention_mask on GPT-2.",
"> Indeed, I will remove this doctring, there is no attention_mask on GPT-2.\r\n\r\nBut what to do if I do want to avoid computing attention on the paddings in the input sequences.",
"@Saner3 @thomwolf I have same question? don't we need that for paddings?",
"GPT-2 is a model with absolute position embeddings (like Bert) so you should always pad on the right to get best performances for this model (will add this information to the doc_string).\r\n\r\nAs it's a causal model (only attend to the left context), also means that the model will not attend to the padding tokens (which are on the right) for any real token anyway.\r\n\r\nSo in conclusion, no need to take special care of avoiding attention on padding.\r\n\r\nJust don't use the output of the padded tokens for anything as they don't contain any reliable information (which is obvious I hope).",
"@thomwolf thanks much, and great job!"
] | 1,563 | 1,566 | 1,566 | NONE | null | Hello, in the doc string of GPT2 model, it says there is an optional input called [attention_mask](https://github.com/huggingface/pytorch-transformers/blob/f289e6cfe46885f260e4f2b3c8a164aa1a567e4c/pytorch_transformers/modeling_gpt2.py#L405) to avoid computing attention on paddings. But actually I cannot find the implementation and there is no such arguments either. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/808/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/807 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/807/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/807/comments | https://api.github.com/repos/huggingface/transformers/issues/807/events | https://github.com/huggingface/transformers/issues/807 | 469,360,451 | MDU6SXNzdWU0NjkzNjA0NTE= | 807 | AttributeError: 'tuple' object has no attribute 'softmax' | {
"login": "Raghavendra15",
"id": 7957331,
"node_id": "MDQ6VXNlcjc5NTczMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7957331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raghavendra15",
"html_url": "https://github.com/Raghavendra15",
"followers_url": "https://api.github.com/users/Raghavendra15/followers",
"following_url": "https://api.github.com/users/Raghavendra15/following{/other_user}",
"gists_url": "https://api.github.com/users/Raghavendra15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raghavendra15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raghavendra15/subscriptions",
"organizations_url": "https://api.github.com/users/Raghavendra15/orgs",
"repos_url": "https://api.github.com/users/Raghavendra15/repos",
"events_url": "https://api.github.com/users/Raghavendra15/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raghavendra15/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"HI,\r\nI have the same problem!\r\nWhat was the solution here?",
"Me too.",
"Need more information like version of python/pytorch/transformers (all the information requested in the issue templates actually)",
"> Need more information like version of python/pytorch/transformers (all the information requested in the issue templates actually)\r\n\r\nI am experiencing this issue as well with the BertForNextSentencePrediction model and not having much luck with a solution. I'm using macOS Mojave 10.14.6, python 3.7, pytorch 1.3.1 and transformers 2.2.1. \r\n\r\nPlease let me know if there is any more details I can provide. Thanks!",
"You should open a new issue with a clean code example we can test and the associate full error message."
] | 1,563 | 1,576 | 1,563 | NONE | null | I get the following error when I use the pytorch transformers, It used to work just fine in the previous pretrained-bert,
Original code: https://github.com/ceshine/pytorch-pretrained-BERT/blob/master/notebooks/Next%20Sentence%20Prediction.ipynb
Code which has error:
model.eval()
res = []
mb = progress_bar(eval_dataloader)
for input_ids, input_mask, segment_ids in mb:
input_ids = input_ids.to(device)
input_mask = input_mask.to(device)
segment_ids = segment_ids.to(device)
with torch.no_grad():
res.append(nn.functional.softmax(
model(input_ids, segment_ids, input_mask), dim=1
)[:, 0].detach().cpu().numpy())
res = np.concatenate(res)
Error stacktrace:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-37-a816e24060d8> in <module>
24 with torch.no_grad():
25 res.append(nn.functional.softmax(
---> 26 model(input_ids, segment_ids, input_mask), dim=1
27 )[:, 0].detach().cpu().numpy())
28
/common/users/rs1693/my_venv/venv_bert/lib64/python3.6/site-packages/torch/nn/functional.py in softmax(input, dim, _stacklevel, dtype)
1261 dim = _get_softmax_dim('softmax', input.dim(), _stacklevel)
1262 if dtype is None:
-> 1263 ret = input.softmax(dim)
1264 else:
1265 ret = input.softmax(dim, dtype=dtype)
AttributeError: 'tuple' object has no attribute 'softmax'
I read many posts where they say to do the following:(But not sure where in the code I have to make these changes)
1. disable aux_logits when the model is created here by also passing aux_logits=False to the inception_v3 function.
2. Edit your train function to accept and unpack the returned tuple here to be something like:
output, aux = model(input_var)
But where in the above function I have to do this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/807/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/806 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/806/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/806/comments | https://api.github.com/repos/huggingface/transformers/issues/806/events | https://github.com/huggingface/transformers/pull/806 | 469,311,770 | MDExOlB1bGxSZXF1ZXN0Mjk4NTY5ODA2 | 806 | Fix a path so that a test can run on Windows | {
"login": "wschin",
"id": 3524474,
"node_id": "MDQ6VXNlcjM1MjQ0NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3524474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wschin",
"html_url": "https://github.com/wschin",
"followers_url": "https://api.github.com/users/wschin/followers",
"following_url": "https://api.github.com/users/wschin/following{/other_user}",
"gists_url": "https://api.github.com/users/wschin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wschin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wschin/subscriptions",
"organizations_url": "https://api.github.com/users/wschin/orgs",
"repos_url": "https://api.github.com/users/wschin/repos",
"events_url": "https://api.github.com/users/wschin/events{/privacy}",
"received_events_url": "https://api.github.com/users/wschin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok for that, thanks @wschin"
] | 1,563 | 1,566 | 1,566 | CONTRIBUTOR | null | The path for a temporal file is hard coded, so the test fails on Windows. This PR changes that line to a more platform-natural path. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/806/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/806",
"html_url": "https://github.com/huggingface/transformers/pull/806",
"diff_url": "https://github.com/huggingface/transformers/pull/806.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/806.patch",
"merged_at": 1566342881000
} |
https://api.github.com/repos/huggingface/transformers/issues/805 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/805/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/805/comments | https://api.github.com/repos/huggingface/transformers/issues/805/events | https://github.com/huggingface/transformers/issues/805 | 469,270,852 | MDU6SXNzdWU0NjkyNzA4NTI= | 805 | Where is "run_bert_classifier.py"? | {
"login": "amirj",
"id": 1645137,
"node_id": "MDQ6VXNlcjE2NDUxMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amirj",
"html_url": "https://github.com/amirj",
"followers_url": "https://api.github.com/users/amirj/followers",
"following_url": "https://api.github.com/users/amirj/following{/other_user}",
"gists_url": "https://api.github.com/users/amirj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amirj/subscriptions",
"organizations_url": "https://api.github.com/users/amirj/orgs",
"repos_url": "https://api.github.com/users/amirj/repos",
"events_url": "https://api.github.com/users/amirj/events{/privacy}",
"received_events_url": "https://api.github.com/users/amirj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's now `run_glue.py`",
"Hi @thomwolf \r\nDoc needs changes from run_bert_classifier to run_glue\r\nhttps://huggingface.co/pytorch-transformers/examples.html",
"Hey, just another headsup @thomwolf \r\nThis Doc also needs changing for the run_bert_classifier.py:\r\nhttps://huggingface.co/transformers/v1.1.0/examples.html#introduction",
"That's an old version of the doc @mtwright (notice the `v1.1.0`), you should check out the up to date one:\r\n\r\nhttps://huggingface.co/transformers/examples.html#introduction",
"documentation is still broken, by the way",
"like, here, points to a bunch of files that do not exist. also not quite sure if these instructions work anymore:\r\n\r\nhttps://huggingface.co/transformers/converting_tensorflow_models.html"
] | 1,563 | 1,606 | 1,563 | NONE | null | Thanks for this great repo.
Is there any equivalent to [the previous run_bert_classifier.py](https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/examples/run_bert_classifier.py)?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/805/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/805/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/804 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/804/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/804/comments | https://api.github.com/repos/huggingface/transformers/issues/804/events | https://github.com/huggingface/transformers/issues/804 | 469,195,603 | MDU6SXNzdWU0NjkxOTU2MDM= | 804 | Answers to Bullet/List Items by bert | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | Hi,
There are lists in document (Bullet items), and I am running BERT (non trained as well as squad trained). But seems BERT does not understands Bullets/lines starting with number or star.
Will any text preprocessing help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/804/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/803 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/803/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/803/comments | https://api.github.com/repos/huggingface/transformers/issues/803/events | https://github.com/huggingface/transformers/issues/803 | 469,135,790 | MDU6SXNzdWU0NjkxMzU3OTA= | 803 | AssertionError in BERT-Quickstart example | {
"login": "marcalt94",
"id": 44497700,
"node_id": "MDQ6VXNlcjQ0NDk3NzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/44497700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marcalt94",
"html_url": "https://github.com/marcalt94",
"followers_url": "https://api.github.com/users/marcalt94/followers",
"following_url": "https://api.github.com/users/marcalt94/following{/other_user}",
"gists_url": "https://api.github.com/users/marcalt94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marcalt94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marcalt94/subscriptions",
"organizations_url": "https://api.github.com/users/marcalt94/orgs",
"repos_url": "https://api.github.com/users/marcalt94/repos",
"events_url": "https://api.github.com/users/marcalt94/events{/privacy}",
"received_events_url": "https://api.github.com/users/marcalt94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What is the full line of the assertion you are testing?\r\nAnd what is your input text?",
"I think the mistake was with me, sorry"
] | 1,563 | 1,563 | 1,563 | NONE | null | Hey
I tried running the Quickstart example with my own little text. Everything works fine until I get to the ```assert tokenized_text ==... ``` part. When I try to enter my text instead of the Jim Henson text, I get the following error message: ```Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError ```
I'm not sure if the error is an operation-problem or if it is an issue... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/803/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/802 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/802/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/802/comments | https://api.github.com/repos/huggingface/transformers/issues/802/events | https://github.com/huggingface/transformers/issues/802 | 469,086,281 | MDU6SXNzdWU0NjkwODYyODE= | 802 | fp16+xlnet did not gain any speed increase | {
"login": "fyubang",
"id": 25549892,
"node_id": "MDQ6VXNlcjI1NTQ5ODky",
"avatar_url": "https://avatars.githubusercontent.com/u/25549892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fyubang",
"html_url": "https://github.com/fyubang",
"followers_url": "https://api.github.com/users/fyubang/followers",
"following_url": "https://api.github.com/users/fyubang/following{/other_user}",
"gists_url": "https://api.github.com/users/fyubang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fyubang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fyubang/subscriptions",
"organizations_url": "https://api.github.com/users/fyubang/orgs",
"repos_url": "https://api.github.com/users/fyubang/repos",
"events_url": "https://api.github.com/users/fyubang/events{/privacy}",
"received_events_url": "https://api.github.com/users/fyubang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"XLNet makes heavy use of `torch.einsum()` but I'm not sure this method is fp16 compatible.\r\nIt's also quite slow currently so maybe in the mid/long-term it would be good to change these einsum to standard matmul. I won't have time to do that very soon though.",
"As as a suggestion, you can add ```apex.amp.register_half_function(torch, 'einsum')``` somewhere near the top of your driver script (examples/run_squad.py for instance).\r\n\r\nThis forces `amp` to cast the inputs to einsum to `torch.half` before executing, allowing you to get the perf benefits of fp16 + TensorCores when appropriate.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi, @slayton58, could `torch.einsum` be automatically processed by `apex.amp` now?\r\nYour response will be appreciated!"
] | 1,563 | 1,656 | 1,574 | NONE | null | Hi,
I tried fp16 + xlnet, it did not work.
when I set opt_level='O2', the memory was half, but it was much slower than fp32.
when I set opt_level='O1', the memory was original, and it has similar speed with fp32.
Environment: v100, cuda, 10.1, torch 1.1
The environment is ok, because I tried bert + fp16 and it was much faster than fp32.
I thought it is the problem of torch.einsum, but I am not that sure.
I used the code here to test: https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_squad.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/802/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/801 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/801/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/801/comments | https://api.github.com/repos/huggingface/transformers/issues/801/events | https://github.com/huggingface/transformers/pull/801 | 469,077,120 | MDExOlB1bGxSZXF1ZXN0Mjk4Mzc1OTk1 | 801 | import sys twice | {
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/801/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/801",
"html_url": "https://github.com/huggingface/transformers/pull/801",
"diff_url": "https://github.com/huggingface/transformers/pull/801.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/801.patch",
"merged_at": 1563359486000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/800 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/800/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/800/comments | https://api.github.com/repos/huggingface/transformers/issues/800/events | https://github.com/huggingface/transformers/issues/800 | 469,011,508 | MDU6SXNzdWU0NjkwMTE1MDg= | 800 | attention_mask at run_squad.py | {
"login": "seanie12",
"id": 19561061,
"node_id": "MDQ6VXNlcjE5NTYxMDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/19561061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seanie12",
"html_url": "https://github.com/seanie12",
"followers_url": "https://api.github.com/users/seanie12/followers",
"following_url": "https://api.github.com/users/seanie12/following{/other_user}",
"gists_url": "https://api.github.com/users/seanie12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seanie12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seanie12/subscriptions",
"organizations_url": "https://api.github.com/users/seanie12/orgs",
"repos_url": "https://api.github.com/users/seanie12/repos",
"events_url": "https://api.github.com/users/seanie12/events{/privacy}",
"received_events_url": "https://api.github.com/users/seanie12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have same issue. ",
"Thanks @seanie12!"
] | 1,563 | 1,563 | 1,563 | NONE | null | I think there's minor mistake in [run_squad.py](https://github.com/huggingface/pytorch-transformers/blob/5fe0b378d8/examples/run_squad.py#L298) at line 298
```
inputs = {'input_ids': batch[0],
'token_type_ids': None if args.model_type == 'xlm' else batch[1],
'attention_mask': batch[2],
'start_positions': batch[3],
'end_positions': batch[4]}
```
but i think batch[1] is attention_mask and batch[2] is segment_ids, so it should be like this
```
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'token_type_ids': None if args.model_type == 'xlm' else batch[2],
'start_positions': batch[3],
'end_positions': batch[4]}
```
because the data is
```
dataset = TensorDataset(all_input_ids, all_input_mask, all_segment_ids,
all_start_positions, all_end_positions,
all_cls_index, all_p_mask)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/800/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/800/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/799 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/799/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/799/comments | https://api.github.com/repos/huggingface/transformers/issues/799/events | https://github.com/huggingface/transformers/issues/799 | 469,008,302 | MDU6SXNzdWU0NjkwMDgzMDI= | 799 | Error while adding new tokens to GPT2 tokenizer | {
"login": "ZHAOTING",
"id": 5592709,
"node_id": "MDQ6VXNlcjU1OTI3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5592709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZHAOTING",
"html_url": "https://github.com/ZHAOTING",
"followers_url": "https://api.github.com/users/ZHAOTING/followers",
"following_url": "https://api.github.com/users/ZHAOTING/following{/other_user}",
"gists_url": "https://api.github.com/users/ZHAOTING/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZHAOTING/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZHAOTING/subscriptions",
"organizations_url": "https://api.github.com/users/ZHAOTING/orgs",
"repos_url": "https://api.github.com/users/ZHAOTING/repos",
"events_url": "https://api.github.com/users/ZHAOTING/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZHAOTING/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Added:\r\n\r\nI also found that `_convert_token_to_id()` in tokenization_gpt2.py (line 182) uses `unk_token`, which is initially `None` in GPT2 tokenizer. This line of code can also lead to bugs.\r\n~~~~\r\ndef _convert_token_to_id(self, token):\r\n \"\"\" Converts a token (str/unicode) in an id using the vocab. \"\"\"\r\n if token in self.encoder:\r\n return self.encoder.get(token)\r\n return self.encoder.get(self.unk_token)\r\n~~~~",
"Indeed, GPT-2 doesn't have a `unk_token` since it's supposed to be able to encode any string but this does have some unintended consequences since we also use the fact that a tokenizer returns the `unk_token` to check whether a token is in the current vocabulary or not.\r\n\r\nI'll see how we can update this in the most coherent way. Probably mapping the special `<|endoftext|>` token as `unk_token` and using the same logic as the other models (returning the `unk_token` when the token is not in the vocabulary) is the simplest way to fix it.",
"What will be the quick fix for this? When running `run_generation.py` in examples, I could resolve the `None' error by adding special tokens to the tokenizer like below:\r\n\r\n` special_tokens = {\"cls_token\":\"[CLS]\", \"unk_token\":\"[UNK]\"} `\r\n` tokenier = tokenizer_class.from_pretrained(\"gpt2\", cls_token=\"[CLS]\", unk_token=\"[UNK]\")`\r\n` tokenizer.add_special_tokens(special_tokens) `\r\n\r\nBut then, I got a CUDA error (probably) due to the different embedding size of the model and tokenizer. So, I resized the model's token embedding size from 50257 to 50259 like below:\r\n\r\n` model.resize_token_embeddings(len(tokenizer)) `\r\n\r\nThen, it tokenizes the tokens correctly with the additional token encoders that have `tokenizer.added_tokens_encoder.keys()` with [CLS] and [UNK]. But, regardless of input, the gpt2 output seems to be wrong: a sequence [CLS] [CLS] ...\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"@dykang \r\nA quick fix would be cleaning the generated sentence by replacing unwanted tokens as \"\".\r\nHowever, it is reasonable that the GPT2 outputs weird sentences given inputs including \"[CLS]\" as in your example because the new word embeddings are not trained. ",
"Thanks, @ZHAOTING. However, GH-910 seems to only add unk token though. @thomwolf, would the PR be generalizable to add any special tokens as my earlier comment above? When special tokens are added, how do existing pre-trained gpt2 models work properly? ",
"Adding some context. Is it possible to add [CLS] and [SEP] tokens to gpt2-medium in a non destructive way. After finetuning a bit following the structure indicated here: \r\n```\r\n # (Default, BERT/XLM pattern): [CLS] + A + [SEP] + B + [SEP]\r\n # (XLNet/GPT pattern): A + [SEP] + B + [SEP] + [CLS]\r\n```\r\nit is clear that GPT2 will attempt to predict the BPE tokens for [CLS] as ` [C LS]`, but adding the [SEP] and [CLS] tokens produce only an output of [CLS] / [SEP] tokens despite any top_k, top_p, or temperature settings.\r\n\r\n\r\n",
"I had a similar problem.\r\n\r\nI tried to finetune a gpt2 model using the simpletransformers library. \r\n\r\nThe error seems to originate from\r\n\"/usr/local/lib/python3.6/site-packages/transformers/optimization.py\"\r\nThis is the file where the optimizer AdamW is implemented.\r\n\r\nIn my case, I made a backup of the \r\n\"scheduler.pt\" and \"optimizer.pt\" \r\nfiles in my saved checkpoint.\r\n\r\nIn file \r\n\"/usr/local/lib/python3.6/site-packages/simpletransformers/language_modeling/language_modeling_model.py\"\r\nthe optimizer and scheduler were also loaded from the checkpoint but this broke the code.\r\nIf these two files cannot be found in the path then your code will proceed with the rest of it.\r\n\r\nThis however remains a bug as your new optimizer and scheduler will start from scratch ignoring any \"knowledge\" they already contained about your previous optimization.\r\n\r\n "
] | 1,563 | 1,606 | 1,565 | CONTRIBUTOR | null | A **NoneType Error** is encountered when I call `add_tokens()` to add new tokens to **GPT2 tokenizer** and the error is as following:
~~~~
File ".../pytorch_transformers/tokenization_utils.py", line 311, in add_tokens
if self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token):
File ".../pytorch_transformers/tokenization_utils.py", line 381, in convert_tokens_to_ids
for token in tokens:
TypeError: 'NoneType' object is not iterable
~~~~
The error comes from that it checks if the word id of a new `token` equals to that of the `unk_token` (as in tokenization_utils.py, line 311) but a **GPT2 tokenizer**'s `unk_token` is `None`. Therefore, the error happens when it tries iterating over `unk_token` in `convert_tokens_to_ids()` (as in tokenization_utils.py, line 381).
I think we can solve it by either
1) changing the if-condition in line 311 and not checking the new token's equality to `unk_token` (I also don't quite understand the logic behind checking the equality here)
or
2) dealing with `None` input in `convert_tokens_to_ids(self, tokens)`so it returns `[]` if `tokens is None`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/799/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/799/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/798 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/798/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/798/comments | https://api.github.com/repos/huggingface/transformers/issues/798/events | https://github.com/huggingface/transformers/issues/798 | 469,001,218 | MDU6SXNzdWU0NjkwMDEyMTg= | 798 | [bug]BertAdam change to AdamW in example | {
"login": "shibing624",
"id": 10249622,
"node_id": "MDQ6VXNlcjEwMjQ5NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/10249622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shibing624",
"html_url": "https://github.com/shibing624",
"followers_url": "https://api.github.com/users/shibing624/followers",
"following_url": "https://api.github.com/users/shibing624/following{/other_user}",
"gists_url": "https://api.github.com/users/shibing624/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shibing624/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shibing624/subscriptions",
"organizations_url": "https://api.github.com/users/shibing624/orgs",
"repos_url": "https://api.github.com/users/shibing624/repos",
"events_url": "https://api.github.com/users/shibing624/events{/privacy}",
"received_events_url": "https://api.github.com/users/shibing624/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Changing that line still causes an error in line 568, BertAdam has to be changed to AdamW as well and the warmup kwarg has to be removed.",
"#797 (specifically d6522e28732fd14a926440ef5f315e6a8e13792c) ",
"have fix the error! I tested it on toy dataset.",
"@shibing624 this bug can be closed"
] | 1,563 | 1,563 | 1,563 | NONE | null | https://github.com/huggingface/pytorch-transformers/blob/master/examples/lm_finetuning/simple_lm_finetuning.py#L35 BertAdam change to AdamW | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/798/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/798/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/797 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/797/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/797/comments | https://api.github.com/repos/huggingface/transformers/issues/797/events | https://github.com/huggingface/transformers/pull/797 | 468,939,227 | MDExOlB1bGxSZXF1ZXN0Mjk4MjcwMzY0 | 797 | fix some errors for distributed lm_finetuning | {
"login": "yzy5630",
"id": 9417680,
"node_id": "MDQ6VXNlcjk0MTc2ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9417680?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yzy5630",
"html_url": "https://github.com/yzy5630",
"followers_url": "https://api.github.com/users/yzy5630/followers",
"following_url": "https://api.github.com/users/yzy5630/following{/other_user}",
"gists_url": "https://api.github.com/users/yzy5630/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yzy5630/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzy5630/subscriptions",
"organizations_url": "https://api.github.com/users/yzy5630/orgs",
"repos_url": "https://api.github.com/users/yzy5630/repos",
"events_url": "https://api.github.com/users/yzy5630/events{/privacy}",
"received_events_url": "https://api.github.com/users/yzy5630/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/797?src=pr&el=h1) Report\n> Merging [#797](https://codecov.io/gh/huggingface/pytorch-transformers/pull/797?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/5fe0b378d899f81eb0a7f2db0c4eb0234748e915?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/797?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #797 +/- ##\n=======================================\n Coverage 78.91% 78.91% \n=======================================\n Files 34 34 \n Lines 6193 6193 \n=======================================\n Hits 4887 4887 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/797?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/797?src=pr&el=footer). Last update [5fe0b37...a7ba27b](https://codecov.io/gh/huggingface/pytorch-transformers/pull/797?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, do you want to update these examples to the new `pytorch-transformers` API at the same time?\r\nModels now return `tuple` so we should take the first element of the model output as the loss and we should also update `BertAdam` to `AdamW`.",
"ok, I have a try, I‘m stilll use the old API. ",
"Hi, I have updated the opt to the new API , please check. ",
"Thanks @yzy5630!"
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | 1. makedirs
2. save models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/797/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/797",
"html_url": "https://github.com/huggingface/transformers/pull/797",
"diff_url": "https://github.com/huggingface/transformers/pull/797.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/797.patch",
"merged_at": 1563485554000
} |
https://api.github.com/repos/huggingface/transformers/issues/796 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/796/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/796/comments | https://api.github.com/repos/huggingface/transformers/issues/796/events | https://github.com/huggingface/transformers/pull/796 | 468,879,150 | MDExOlB1bGxSZXF1ZXN0Mjk4MjIzNzQz | 796 | Minor documentation updates | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks Stefan!"
] | 1,563 | 1,563 | 1,563 | COLLABORATOR | null | Hi,
this PR just updates some urls in the documentation :)
---
Thanks for your great work on PyTorch-Transformers 🤗 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/796/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/796",
"html_url": "https://github.com/huggingface/transformers/pull/796",
"diff_url": "https://github.com/huggingface/transformers/pull/796.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/796.patch",
"merged_at": 1563359195000
} |
https://api.github.com/repos/huggingface/transformers/issues/795 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/795/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/795/comments | https://api.github.com/repos/huggingface/transformers/issues/795/events | https://github.com/huggingface/transformers/issues/795 | 468,878,037 | MDU6SXNzdWU0Njg4NzgwMzc= | 795 | XLNet-large-cased: hyper-parameters for fine-tuning on SST-2 | {
"login": "avostryakov",
"id": 174194,
"node_id": "MDQ6VXNlcjE3NDE5NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/174194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avostryakov",
"html_url": "https://github.com/avostryakov",
"followers_url": "https://api.github.com/users/avostryakov/followers",
"following_url": "https://api.github.com/users/avostryakov/following{/other_user}",
"gists_url": "https://api.github.com/users/avostryakov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avostryakov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avostryakov/subscriptions",
"organizations_url": "https://api.github.com/users/avostryakov/orgs",
"repos_url": "https://api.github.com/users/avostryakov/repos",
"events_url": "https://api.github.com/users/avostryakov/events{/privacy}",
"received_events_url": "https://api.github.com/users/avostryakov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I also tried to finetune xlnet base on squad 2.0 but the numbers on dev are pretty bad\r\n`Results: {'exact': 3.0405120862461046, 'f1': 6.947601433150003, 'total': 11873, 'HasAns_exact': 6.056005398110662, 'HasAns_f1': 13.881388632893048, 'HasAns_total': 5928, 'NoAns_exact': 0.0336417157275021, 'NoAns_f1': 0.0336417157275021, 'NoAns_total': 5945, 'best_exact': 50.07159100480081, 'best_exact_thresh': 0.0, 'best_f1': 50.07159100480081, 'best_f1_thresh': 0.0}`",
"I suspect something is wrong with the evaluation code. Looking into it now.",
"@tbright17 Nothing wrong with evaluation. Accuracy and evaluation loss aren't changed during training. I used my own evaluation script, I used old BertAdam or OpenAIAdam optimizers without success.\r\n@thomwolf Can you help?",
"I'll give a look, I've only tested XLNet on STS-B for the moment. You should check the hyper-parameters as well, they probably won't be the same as the ones of STS-B (some are mentioned in the XLNet paper).",
"First thing that comes to mind is that SST-2 is ~10 times bigger than STS-B (see the [GLUE paper](https://arxiv.org/abs/1804.07461)) so you need to increase the number of training step a lot if you want to do at least one full epoch on SST-2 training dataset (here you use the value for STS-B). And you should probably do several epochs, e.g. we do 6-7 epochs on STS-B). Check some examples of recommended hyper-parameters table 8 of the [xlnet paper](http://arxiv.org/abs/1906.08237).\r\n\r\nYou can also directly specify the number of epochs instead of the maximum number of steps in the script. You can see all the hyper-parameters of the script with `python ./run_glue.py --help`.",
"> First thing that comes to mind is that SST-2 is ~10 times bigger than STS-B (see the [GLUE paper](https://arxiv.org/abs/1804.07461)) so you need to increase the number of training step a lot if you want to do at least one full epoch on SST-2 training dataset (here you use the value for STS-B). And you should probably do several epochs, e.g. we do 6-7 epochs on STS-B). Check some examples of recommended hyper-parameters table 8 of the [xlnet paper](http://arxiv.org/abs/1906.08237).\r\n> \r\n> You can also directly specify the number of epochs instead of the maximum number of steps in the script. You can see all the hyper-parameters of the script with `python ./run_glue.py --help`.\r\n\r\nI trained STS-B task with the same problem. You can see the following output with evaluation of every 100 steps (I added train and evaluation loss in output):\r\n\r\n```\r\n07/17/2019 13:09:55 - INFO - __main__ - ***** Running evaluation *****\r\n07/17/2019 13:09:55 - INFO - __main__ - Num examples = 1500\r\n07/17/2019 13:09:55 - INFO - __main__ - Batch size = 8\r\n07/17/2019 13:10:09 - INFO - __main__ - ***** Eval results *****\r\n07/17/2019 13:10:09 - INFO - __main__ - corr = -0.05367882385720809\r\n07/17/2019 13:10:09 - INFO - __main__ - eval_loss = 2.8412214481133096##################################################################################################################| 188/188 [00:14<00:00, 13.41it/s]\r\n07/17/2019 13:10:09 - INFO - __main__ - pearson = -0.041275192\r\n07/17/2019 13:10:09 - INFO - __main__ - spearmanr = -0.06608245566229025\r\n07/17/2019 13:10:09 - INFO - __main__ - Training loss: 307.258519500494\r\n 07/17/2019 13:10:41 - INFO - __main__ - Loading features from cached file ...glue_data/STS-B/cached_dev_xlnet-large-cased_128_sts-b | 199/719 [01:18<03:25, 2.53it/s]\r\n07/17/2019 13:10:41 - INFO - __main__ - ***** Running evaluation *****\r\n07/17/2019 13:10:41 - INFO - __main__ - Num examples = 1500\r\n07/17/2019 13:10:41 - INFO - __main__ - Batch size = 8\r\n07/17/2019 13:10:56 - INFO - __main__ - ***** Eval results *****\r\n07/17/2019 13:10:56 - INFO - __main__ - corr = 0.13943037650184956\r\n07/17/2019 13:10:56 - INFO - __main__ - eval_loss = 2.3762524007482733##################################################################################################################| 188/188 [00:14<00:00, 13.29it/s]\r\n07/17/2019 13:10:56 - INFO - __main__ - pearson = 0.13502572\r\n07/17/2019 13:10:56 - INFO - __main__ - spearmanr = 0.1438350282350605\r\n07/17/2019 13:10:56 - INFO - __main__ - Training loss: 533.9101385176182\r\n 07/17/2019 13:11:28 - INFO - __main__ - Loading features from cached file .../glue_data/STS-B/cached_dev_xlnet-large-cased_128_sts-b | 299/719 [02:05<02:56, 2.39it/s]\r\n07/17/2019 13:11:28 - INFO - __main__ - ***** Running evaluation *****\r\n07/17/2019 13:11:28 - INFO - __main__ - Num examples = 1500\r\n07/17/2019 13:11:28 - INFO - __main__ - Batch size = 8\r\n07/17/2019 13:11:42 - INFO - __main__ - ***** Eval results *****\r\n07/17/2019 13:11:42 - INFO - __main__ - corr = -0.0830871973267994\r\n07/17/2019 13:11:42 - INFO - __main__ - eval_loss = 2.5565993221516305##################################################################################################################| 188/188 [00:14<00:00, 13.20it/s]\r\n07/17/2019 13:11:42 - INFO - __main__ - pearson = -0.08915693\r\n07/17/2019 13:11:42 - INFO - __main__ - spearmanr = -0.077017461524765\r\n07/17/2019 13:11:42 - INFO - __main__ - Training loss: 761.6802722513676\r\n 07/17/2019 13:12:15 - INFO - __main__ - Loading features from cached file .../glue_data/STS-B/cached_dev_xlnet-large-cased_128_sts-b | 399/719 [02:52<02:18, 2.32it/s]\r\n07/17/2019 13:12:15 - INFO - __main__ - ***** Running evaluation *****\r\n07/17/2019 13:12:15 - INFO - __main__ - Num examples = 1500\r\n07/17/2019 13:12:15 - INFO - __main__ - Batch size = 8\r\n07/17/2019 13:12:29 - INFO - __main__ - ***** Eval results *****\r\n07/17/2019 13:12:29 - INFO - __main__ - corr = -0.08715267932681456\r\n07/17/2019 13:12:29 - INFO - __main__ - eval_loss = 2.398741365113157###################################################################################################################| 188/188 [00:14<00:00, 13.12it/s]\r\n07/17/2019 13:12:29 - INFO - __main__ - pearson = -0.08428703\r\n07/17/2019 13:12:29 - INFO - __main__ - spearmanr = -0.09001832616862088\r\n07/17/2019 13:12:29 - INFO - __main__ - Training loss: 974.8287971913815\r\n```\r\n\r\nHow you can see training loss is increasing, eval loss is almost the same, other metrics fluctuate around 0.",
"@thomwolf So, it looks like training is happening but in opposite direction for some reason",
"Maybe you haven't fully read the [explanation](https://github.com/huggingface/pytorch-transformers#fine-tuning-xlnet-model-on-the-sts-b-regression-task) accompanying the STS-B example in the readme?\r\n\r\nIt says \"On this machine we thus have a batch size of 32, please increase `gradient_accumulation_steps` to reach the same batch size if you have a smaller machine.\"",
"@avostryakov Did you try to reduce the learning rate? I had a similar issue training with the TensorFlow version XLNet on only one GPU. I tried reducing the learning rate from 5e-5 to 1e-5, and it worked. Wish this can help you.",
"@thomwolf @tbright17 I got similar numbers like you Squad 2.0. Seems that the model probably isn't learning much. I'll print out the losses to explore. Also should we change the LR as well? \r\n : the best I got with fine-tuning on Squad 2.0 with a `train_batch_size=8` and `gas=1` all others are default on a single v100 gpu was:\r\n `07/16/2019 16:21:43 - INFO - __main__ - Results: {'exact': 26.438136949380947, 'f1': 28.470459931964722, 'total': 11873, 'HasAns_exact': 0.08434547908232119, 'HasAns_f1': 4.154819630940996, 'HasAns_total': 5928, 'NoAns_exact': 52.716568544995795, 'NoAns_f1': 52.716568544995795, 'NoAns_total': 5945, 'best_exact': 50.07159100480081, 'best_exact_thresh': 0.0, 'best_f1': 50.07159100480081, 'best_f1_thresh': 0.0}`",
"May also be a problem of batch size, the authors use a batch size between 32 and 128 in the paper. \r\n\r\nWhat effective batch size do you have (printed during training)?\r\n\r\nWhile we reproduce the official XLNet number on STS-B, I still have to work a bit on the SQuAD example for XLNet, the XLNet authors used a complex pre- and post-processing of the data (smarter than Bert's) that I haven't fully integrated into our `run_squad` example yet.",
"> Maybe you haven't fully read the [explanation accompanying the STS-B example in the readme](https://github.com/huggingface/pytorch-transformers#fine-tuning-xlnet-model-on-the-sts-b-regression-task)?\r\n> \r\n> It says \"On this machine we thus have a batch size of 32, please increase `gradient_accumulation_steps` to reach the same batch size if you have a smaller machine.\"\r\n\r\n@thomwolf You are right, STS-B started to train with batch size 32 and gradient_accumulation_steps = 2. Now I'm wondering why it so heavily depends on batch size. But it doesn't help for STS-2, I set max_steps=5000 (it's 5 epochs) and training and evaluation loss didn't change at all during training. I'm trying to train with learning rate 1e-5 how it was recommended by @alexpython1988 ",
"@thomwolf maybe. Also my sequence length is `384`: the authors did mention they prolly did 512. Here's my batch size related printout: I think the number of examples seem a lil low. No? I think Squad has about 150K examples (ha and na questions) and with the `doc_stride` I think it should be more than 150k examples (I think).\r\n\r\n`07/15/2019 13:23:32 - INFO - __main__ - ***** Running training *****`\r\n`07/15/2019 13:23:32 - INFO - __main__ - Num examples = 133947`\r\n`07/15/2019 13:23:32 - INFO - __main__ - Num Epochs = 3`\r\n`07/15/2019 13:23:32 - INFO - __main__ - Instantaneous batch size per GPU = 4`\r\n`07/15/2019 13:23:32 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 4`\r\n`07/15/2019 13:23:32 - INFO - __main__ - Gradient Accumulation steps = 1`\r\n`07/15/2019 13:23:32 - INFO - __main__ - Total optimization steps = 100461`\r\n\r\n\r\nI saw in the [renatoviolin's repo](https://github.com/renatoviolin/xlnet/blob/master/run_squad_GPU.py) that they have the following which gives them `86F1` on a RTX2080: \r\n`flags.DEFINE_integer(\"max_seq_length\",\r\n default=512, help=\"Max sequence length\")\r\nflags.DEFINE_integer(\"max_query_length\",\r\n default=64, help=\"Max query length\")\r\nflags.DEFINE_integer(\"doc_stride\",\r\n default=128, help=\"Doc stride\")\r\nflags.DEFINE_integer(\"max_answer_length\",\r\n default=64, help=\"Max answer length\")`\r\n\r\nAlso, lr is different than ours (`5e-5` in this repo):\r\n`flags.DEFINE_float(\"learning_rate\", default=3e-5, help=\"initial learning rate\")`\r\n",
"Learning rate = 1e-5 helps to train STS-2 together with batch size 32 and accumulation steps = 2. I need more experiments but it works. Thanks, @thomwolf, and @alexpython1988!",
"Great to hear, good job and good luck @avostryakov! Feel free to share good hyper-parameters if you find a nice set and I can add them to the documentation (with credits).",
"> May also be a problem of batch size, the authors use a batch size between 32 and 128 in the paper.\r\n> \r\n> What effective batch size do you have (printed during training)?\r\n> \r\n> While we reproduce the official XLNet number on STS-B, I still have to work a bit on the SQuAD example for XLNet, the XLNet authors used a complex pre- and post-processing of the data (smarter than Bert's) that I haven't fully integrated into our `run_squad` example yet.\r\n\r\nI was using per_gpu_train_batch 8 for squad 2.0. Powerful model is hard to tune maybe",
"> Great to hear, good job and good luck @avostryakov! Feel free to share good hyper-parameters if you find a nice set and I can add them to the documentation (with credits).\r\n\r\n@thomwolf My the best result for SST-2 so far is 94.15 of accuracy (in xlnet's article 95.6). It's better than BERT-large. I trained with the following parameters:\r\n\r\n```\r\npython ./examples/run_glue.py \\\r\n --model_type xlnet \\\r\n --model_name_or_path xlnet-large-cased \\\r\n --do_train \\\r\n --evaluate_during_training \\\r\n --do_eval \\\r\n --logging_steps 500 \\\r\n --save_steps 3000 \\\r\n --task_name=sst-2 \\\r\n --data_dir=${GLUE_DIR}/SST-2 \\\r\n --output_dir=./proc_data/sst-2 \\\r\n --max_seq_length=128 \\\r\n --learning_rate 1e-5 \\\r\n --per_gpu_eval_batch_size=8 \\\r\n --per_gpu_train_batch_size=8 \\\r\n --gradient_accumulation_steps=1 \\\r\n --max_steps=16000 \\\r\n --model_name=xlnet-large-cased \\\r\n --overwrite_output_dir \\\r\n --overwrite_cache \\\r\n --warmup_steps=120 \\\r\n --fp16\r\n```",
"@thomwolf Ok, the last result for SST-2 almost matched with XLNet article: Accuracy 95.4:\r\n\r\n```\r\npython ./examples/run_glue.py \\\r\n --model_type xlnet \\\r\n --model_name_or_path xlnet-large-cased \\\r\n --do_train \\\r\n --evaluate_during_training \\\r\n --do_eval \\\r\n --logging_steps 400 \\\r\n --save_steps 3000 \\\r\n --task_name=sst-2 \\\r\n --data_dir=${GLUE_DIR}/SST-2 \\\r\n --output_dir=./proc_data/sst-2 \\\r\n --max_seq_length=128 \\\r\n --learning_rate 1e-5 \\\r\n --per_gpu_eval_batch_size=16 \\\r\n --per_gpu_train_batch_size=16 \\\r\n --gradient_accumulation_steps=1 \\\r\n --max_steps=8000 \\\r\n --model_name=xlnet-large-cased \\\r\n --overwrite_output_dir \\\r\n --overwrite_cache \\\r\n --warmup_steps=120 \\\r\n --fp16\r\n```\r\n\r\nThank you for your work!",
"This is great @avostryakov! Thanks for sharing the results!\r\nI'm editing the issue title until I've time to add the hyperparameters to the doc.",
"Hi, how could I finetune the model for text generation? Is it possible just having raw text for the finetuning?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | I tried to finetune XLNet on one of the classification tasks from GLUE (Ubuntu, GPU Titan RTX, CUDA 10.0, pytorch 1.1):
export GLUE_DIR=/path/to/glue
python ./examples/run_glue.py \
--model_type xlnet \
--model_name_or_path xlnet-large-cased \
--do_train \
--do_eval \
--task_name=sst-2 \
--data_dir=${GLUE_DIR}/SST-2 \
--output_dir=./proc_data/sst-2 \
--max_seq_length=128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--gradient_accumulation_steps=1 \
--max_steps=1200 \
--model_name=xlnet-large-cased \
--overwrite_output_dir \
--overwrite_cache \
--warmup_steps=120
Training and evaluation work without errors but it looks like accuracy doesn't increase during training, I evaluated every 500 steps:
07/16/2019 22:29:30 - INFO - __main__ - ***** Eval results *****
07/16/2019 22:29:30 - INFO - __main__ - acc = 0.5091743119266054
07/16/2019 22:32:16 - INFO - __main__ - Loading features from cached file glue_data/SST-2/cached_dev_xlnet-large-cased_128_sst-2 | 999/8419 [05:37<41:47, 2.96it/s]
07/16/2019 22:32:17 - INFO - __main__ - ***** Running evaluation *****
07/16/2019 22:32:17 - INFO - __main__ - Num examples = 872
07/16/2019 22:32:17 - INFO - __main__ - Batch size = 8
07/16/2019 22:32:25 - INFO - __main__ - ***** Eval results *****
07/16/2019 22:32:25 - INFO - __main__ - acc = 0.5091743119266054
Finally the same acc:
07/16/2019 22:33:59 - INFO - __main__ - ***** Eval results *****
07/16/2019 22:33:59 - INFO - __main__ - acc = 0.5091743119266054
The same situation is with my own classification dataset. Accuracy wasn't changed during training. Something is wrong with finetuning of XLNet | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/795/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/794 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/794/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/794/comments | https://api.github.com/repos/huggingface/transformers/issues/794/events | https://github.com/huggingface/transformers/pull/794 | 468,805,569 | MDExOlB1bGxSZXF1ZXN0Mjk4MTYzODY4 | 794 | Adding additional model loading functionality | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Jason,\r\n\r\nCan you give me a little more information on the model-loading workflow you are using so I can understand the whys and wherefores of these proposed modifications?",
"Hey Thomas,\r\n\r\nSorry for the delay. My thinking is this: the `from_pretrained` method current does two things: resolve the path/archive for loading a pretrained model, and the specialized model loading logic (e.g. handing the fact that the current model may have different heads from those in the loaded weights). My proposed change is to separate the two and allow the user to just do the second. \r\nThis would be useful in cases where the user already has access to the `state_dict` in memory (e.g. if they have a different model saving workflow/format).",
"Hi Jason,\r\nThere is a `state_dict` option in `from_pretrained` that, I think, let you do just that!\r\nSee here for instance: https://huggingface.co/pytorch-transformers/main_classes/model.html#pytorch_transformers.PreTrainedModel.from_pretrained",
"Closing this for now. Feel free to re-open if the provided solution doesn't solve your problem, Jason."
] | 1,563 | 1,567 | 1,567 | CONTRIBUTOR | null | (Porting over some functionality from my old fork)
This PR adds additional methods to `PreTrainedModel` for loading models for `state_dict`s. Currently, `from_pretrained()` does a lot of the heavy lifting, but is primarily designed to load from file/folders. This adds additional options for users with different model-loading workflows. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/794/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/794",
"html_url": "https://github.com/huggingface/transformers/pull/794",
"diff_url": "https://github.com/huggingface/transformers/pull/794.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/794.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/793 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/793/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/793/comments | https://api.github.com/repos/huggingface/transformers/issues/793/events | https://github.com/huggingface/transformers/issues/793 | 468,792,527 | MDU6SXNzdWU0Njg3OTI1Mjc= | 793 | BertModel docstring missing pooled_output | {
"login": "sleepinyourhat",
"id": 1284441,
"node_id": "MDQ6VXNlcjEyODQ0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1284441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sleepinyourhat",
"html_url": "https://github.com/sleepinyourhat",
"followers_url": "https://api.github.com/users/sleepinyourhat/followers",
"following_url": "https://api.github.com/users/sleepinyourhat/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepinyourhat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sleepinyourhat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepinyourhat/subscriptions",
"organizations_url": "https://api.github.com/users/sleepinyourhat/orgs",
"repos_url": "https://api.github.com/users/sleepinyourhat/repos",
"events_url": "https://api.github.com/users/sleepinyourhat/events{/privacy}",
"received_events_url": "https://api.github.com/users/sleepinyourhat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Damned, missed that one, you are right.\r\nAdding the missing doc-string:\r\n```\r\n**pooler_output**: ``torch.FloatTensor`` of shape ``(batch_size, hidden_size)``\r\n Last layer hidden-state of the first token of the sequence (classification token)\r\n further processed by a Linear layer and a Tanh activation function. The Linear\r\n layer weights are trained from the next sentence prediction (classification)\r\n objective during Bert pretraining. This output is usually *not* a good summary\r\n of the semantic content of the input, you're often better with averaging or pooling\r\n the sequence of hidden-states for the whole input sequence.\r\n```\r\nWe'll probably do a small release in a few days once we have gathered all the feedbacks from the main release. In the meantime, I'll set up PyTorch-Hub so people can get the models from master.",
"A minor edit with the final two optional outputs `hidden_states` and `attentions` are tuples, not lists.",
"cc @LysandreJik :)",
"The documentation is outdated regarding that issue. It should probably be re-compiled :-)"
] | 1,563 | 1,564 | 1,563 | NONE | null | The BERT docstring describes three outputs here:
https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py#L626
But none of these correspond to the pooled_output output that's added here:
https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py#L713
I may be missing something, but this looks like a dated docstring. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/793/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/792 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/792/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/792/comments | https://api.github.com/repos/huggingface/transformers/issues/792/events | https://github.com/huggingface/transformers/issues/792 | 468,790,463 | MDU6SXNzdWU0Njg3OTA0NjM= | 792 | Issue running run_transfo_xl.py | {
"login": "korymath",
"id": 178099,
"node_id": "MDQ6VXNlcjE3ODA5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/178099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/korymath",
"html_url": "https://github.com/korymath",
"followers_url": "https://api.github.com/users/korymath/followers",
"following_url": "https://api.github.com/users/korymath/following{/other_user}",
"gists_url": "https://api.github.com/users/korymath/gists{/gist_id}",
"starred_url": "https://api.github.com/users/korymath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/korymath/subscriptions",
"organizations_url": "https://api.github.com/users/korymath/orgs",
"repos_url": "https://api.github.com/users/korymath/repos",
"events_url": "https://api.github.com/users/korymath/events{/privacy}",
"received_events_url": "https://api.github.com/users/korymath/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,563 | 1,563 | 1,563 | NONE | null | Run code:
```
python run_transfo_xl.py --work_dir ../log
```
Output
```
07/16/2019 18:01:46 - INFO - __main__ - device: cuda
07/16/2019 18:01:46 - INFO - pytorch_transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-vocab.bin from cache at /home/korymathewson/.cache/torch/pytorch_transformers/b24cb708726fd43cbf1a382da9ed3908263e4fb8a156f9e0a4f45b7540c69caa.a6a9c41b856e5c31c9f125dd6a7ed4b833fbcefda148b627871d4171b25cffd1
07/16/2019 18:01:46 - INFO - pytorch_transformers.tokenization_utils - loading file https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-vocab.bin from cache at /home/korymathewson/.cache/torch/pytorch_transformers/b24cb708726fd43cbf1a382da9ed3908263e4fb8a156f9e0a4f45b7540c69caa.a6a9c41b856e5c31c9f125dd6a7ed4b833fbcefda148b627871d4171b25cffd1
07/16/2019 18:01:47 - INFO - pytorch_transformers.tokenization_transfo_xl - loading corpus file https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-corpus.bin from cache at /home/korymathewson/.cache/torch/pytorch_transformers/b927918d674805742f3febcd807b375d5819f40410b83d09e3c0fb8344394216.a7d11b2fa856afe836727fbd95638053f056c4a3ac571d7800faed25ce81a4e1
07/16/2019 18:01:53 - INFO - pytorch_transformers.modeling_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-config.json from cache at /home/korymathewson/.cache/torch/pytorch_transformers/a6dfd6a3896b3ae4c1a3c5f26ff1f1827c26c15b679de9212a04060eaf1237df.aef76fb1064c932cd6a2a2be3f23ebbfa5f9b6e29e8e87b571c45b4a5d5d1b90
07/16/2019 18:01:53 - INFO - pytorch_transformers.modeling_utils - Model config {
"adaptive": true,
"attn_type": 0,
"clamp_len": 1000,
"cutoffs": [
20000,
40000,
200000
],
"d_embed": 1024,
"d_head": 64,
"d_inner": 4096,
"d_model": 1024,
"div_val": 4,
"dropatt": 0.0,
"dropout": 0.1,
"ext_len": 0,
"finetuning_task": null,
"init": "normal",
"init_range": 0.01,
"init_std": 0.02,
"mem_len": 1600,
"n_head": 16,
"n_layer": 18,
"n_token": 267735,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"pre_lnorm": false,
"proj_init_std": 0.01,
"same_length": true,
"sample_softmax": -1,
"tgt_len": 128,
"tie_projs": [
false,
true,
true,
true
],
"tie_weight": true,
"torchscript": false,
"untie_r": true
}
07/16/2019 18:01:53 - INFO - pytorch_transformers.modeling_utils - loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/transfo-xl-wt103-pytorch_model.bin from cache at /home/korymathewson/.cache/torch/pytorch_transformers/12642ff7d0279757d8356bfd86a729d9697018a0c93ad042de1d0d2cc17fd57b.e9704971f27275ec067a00a67e6a5f0b05b4306b3f714a96e9f763d8fb612671
07/16/2019 18:02:06 - INFO - __main__ - Evaluating with bsz 10 tgt_len 128 ext_len 0 mem_len 1600 clamp_len 1000
Traceback (most recent call last):
File "run_transfo_xl.py", line 153, in <module>
main()
File "run_transfo_xl.py", line 134, in main
test_loss = evaluate(te_iter)
File "run_transfo_xl.py", line 117, in evaluate
loss, mems = ret
ValueError: too many values to unpack (expected 2)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/792/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/792/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/791 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/791/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/791/comments | https://api.github.com/repos/huggingface/transformers/issues/791/events | https://github.com/huggingface/transformers/pull/791 | 468,738,423 | MDExOlB1bGxSZXF1ZXN0Mjk4MTA5NTg2 | 791 | RestructuredText table for pretrained models. | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/791?src=pr&el=h1) Report\n> Merging [#791](https://codecov.io/gh/huggingface/pytorch-transformers/pull/791?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/b33a385091de604afb566155ec03329b84c96926?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/791?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #791 +/- ##\n=======================================\n Coverage 78.91% 78.91% \n=======================================\n Files 34 34 \n Lines 6193 6193 \n=======================================\n Hits 4887 4887 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/791?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/791?src=pr&el=footer). Last update [b33a385...9d381e7](https://codecov.io/gh/huggingface/pytorch-transformers/pull/791?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,563 | 1,565 | 1,565 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/791/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/791",
"html_url": "https://github.com/huggingface/transformers/pull/791",
"diff_url": "https://github.com/huggingface/transformers/pull/791.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/791.patch",
"merged_at": 1565014681000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/790 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/790/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/790/comments | https://api.github.com/repos/huggingface/transformers/issues/790/events | https://github.com/huggingface/transformers/issues/790 | 468,574,882 | MDU6SXNzdWU0Njg1NzQ4ODI= | 790 | XLNet Embeddings | {
"login": "kushalj001",
"id": 32245327,
"node_id": "MDQ6VXNlcjMyMjQ1MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/32245327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kushalj001",
"html_url": "https://github.com/kushalj001",
"followers_url": "https://api.github.com/users/kushalj001/followers",
"following_url": "https://api.github.com/users/kushalj001/following{/other_user}",
"gists_url": "https://api.github.com/users/kushalj001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kushalj001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kushalj001/subscriptions",
"organizations_url": "https://api.github.com/users/kushalj001/orgs",
"repos_url": "https://api.github.com/users/kushalj001/repos",
"events_url": "https://api.github.com/users/kushalj001/events{/privacy}",
"received_events_url": "https://api.github.com/users/kushalj001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm currently finishing to add the documentation but just use `XLNetModel` instead of `BertModel` in the usage example with `BertModel`",
"Thanks a lot, @thomwolf for the quick reply. I'll try it out.",
"Here is an example now: https://huggingface.co/pytorch-transformers/model_doc/xlnet.html#pytorch_transformers.XLNetModel",
"@thomwolf, I tried the following snippet. The similarity score changes every time I run the cell. That is, the embeddings or the weights are changing every time. Is this related to dropout?\r\n\r\n\r\n```\r\nconfig = XLNetConfig.from_pretrained('xlnet-large-cased')\r\n\r\ntokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')\r\nmodel = XLNetModel(config)\r\ninput_ids = torch.tensor(tokenizer.encode(\"The apple juice is sour.\")).unsqueeze(0) \r\ninput_ids_2 = torch.tensor(tokenizer.encode(\"The orange juice is sweet.\")).unsqueeze(0) \r\n\r\noutputs = model(input_ids)\r\noutputs_2 = model(input_ids_2)\r\nlast_hidden_states = outputs[0] \r\nlast_hidden_states_2 = outputs_2[0]\r\n\r\napple = last_hidden_states[0][1]\r\norange = last_hidden_states_2[0][1]\r\n\r\nx = apple\r\ny = orange\r\ncos_sim = dot(x.detach().numpy(),y.detach().numpy())/(norm(x.detach().numpy())*norm(y.detach().numpy()))\r\nprint(cos_sim)\r\n\r\n```\r\n\r\n",
"For me logits values changes as well ... using exactly the same settings as mentioned in the example.\r\n\r\nHave you found a way to fix that?",
"@Oxi84 put `model.eval()` before you make the predictions. This fixed the problem of changing weights for me.",
"Thanks. For me it works when call like that:\r\n\r\n tokenizer = XLNetTokenizer.from_pretrained(\"xlnet-large-cased\")\r\n model = XLNetLMHeadModel.from_pretrained(\"xlnet-large-cased\")\r\n model.eval()\r\n\r\nHowever accuracy seems to be much lower that for Bert - with the code i wrote here: https://github.com/huggingface/pytorch-transformers/issues/846\r\n\r\nDid you find that the accuracy is good or bad? I compared with Bert on few examples for masked word prediction and most of XLNet predicted word with the highest probability do not fit at all. \r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"@kushalj001 hi, how can I get the sentence vector",
"Hi, so it seems that creating a model with a configuration is primarily the problem here:\r\n`model = XLNetLMHeadModel.from_pretrained(\"xlnet-large-cased\")`\r\nyields consistent outputs, but \r\n`config = XLNetConfig.from_pretrained(\"xlnet-large-uncased\")`\r\n`model = XLNetModel(config)` \r\ndoes not at all. \r\nMy question is, how is it possible to set configuration states (like getting hidden states of the model). I have run the glue STS-B fine tuning code to customize the model which is stored at `./proc_data/sts-b-100`, but when I load the model using code like this to get hidden states:\r\n\r\n`config = XLNetConfig.from_pretrained('./proc_data/sts-b-110/')`\r\n`config.output_hidden_states=True`\r\n`tokenizer = XLNetTokenizer.from_pretrained('././proc_data/sts-b-110/')`\r\n`model = XLNetForSequenceClassification(config)`\r\n\r\nI get results that vary wildly across runs. \r\n\r\nSpecifically, I would like to get the hidden states of each layer from the fine tuned model and correlate it to the actual text similarity. I was thinking I'd load the model with XLNetForSequenceClassification, get all the hidden states setting the configuration to output hidden states and do such a correlation. Is my approach incorrect?",
"Looking at run_glue, it seems that actually outputs[1] is used for prediction? This is confusing because all the examples use [0] and the documentation is not very clear.\r\n `outputs = model(**inputs)`\r\n `tmp_eval_loss, logits = outputs[:2]`\r\nFrom run_glue.py\r\n",
"Ok, I figured the logits and loss issue out - the issue is that for XLNetForSequenceClassification, the second index does in fact have logits while the first has loss.",
"@thomwolf @Oxi84 while calculating word-embeddings of a document, i.e multiple sentences, is it necessary to pass the document sentence-wise? For my dataset, I removed punctuation as a part of the pre-processing step. So now, my whole document goes into the model. Does this hurt the model's performance? Does it make a lot of difference in terms of capturing the context of words?\r\nThanks",
"It should improve acuracy if the text is longer, but still for me Bert is way better ... on 20-40 words long text.",
"> It should improve acuracy if the text is longer, but still for me Bert is way better ... on 20-40 words long text.\r\n\r\nYeah, even for my experiments, BERT simply outperfoms XLNet. Still don't know why though.\r\nWhen you say \"it should improve accuracy\", you mean that feeding sentences to calculate word-vec would be better, right?",
"Did you managed to try tensorflow version of XLNet, there is a chance it might be different from the pytorch version?",
"Maybe there is some bug, but its unlikely since the bechmark results with the XLnet pytorch are the same. But I gues this would the first thing to try to recheck.",
"> Did you managed to try tensorflow version of XLNet, there is a chance it might be different from the pytorch version?\r\n\r\nAny simple way of doing this?",
"any updates regarding this issue? ",
"@kushalj001 why remove the punctuation ? Is it domain specific or to improve accuracy?",
"> @kushalj001 why remove the punctuation ? Is it domain specific or to improve accuracy?\r\n\r\nMy dataset had a lot of random punctuation, ie misplaced single and double-quotes.\r\nBut also, do punctuations add any valuable information to the text? Apart from the period (which can be used to break a large para into sentences), does keeping other punctuation symbols make sense? ",
"I will close this issue which dates back before we had the clean documentation up here: https://huggingface.co/pytorch-transformers/\r\n\r\nPlease open a new issue with a clear explanation of your specific problem if you have related issues."
] | 1,563 | 1,566 | 1,566 | CONTRIBUTOR | null | How can I retrieve contextual word vectors for my dataset using XLNet ?
The usage and examples in the documentation do not include any guide to use XLNet.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/790/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/789 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/789/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/789/comments | https://api.github.com/repos/huggingface/transformers/issues/789/events | https://github.com/huggingface/transformers/issues/789 | 468,390,083 | MDU6SXNzdWU0NjgzOTAwODM= | 789 | XLNet text generation ability : inference is slow | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1834059054,
"node_id": "MDU6TGFiZWwxODM0MDU5MDU0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Ex:%20Generation",
"name": "Ex: Generation",
"color": "06EFF8",
"default": false,
"description": "Natural Language Generation"
}
] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | [
"I tried it, but text quality is lowered a lot and inference time does not change at all.\r\n\r\nI simply changed `perm_mask` to be 0 over initial context and 1 over generated tokens.\r\n\r\n---\r\n\r\nInput :\r\n\r\n> In Seoul, you can do a lot of things ! For example you can\r\n\r\nGenerated text with full bidirectionality :\r\n\r\n> buy grocery stores and restaurants, or even buy liquor, tobacco, etc. Then you can go to the mall. Then you can visit shopping mall. Then you can go to the university, then you can visit an outdoor pool. You can visit the cinema. You can visit art galleries. Then you can visit a garden.<eop> Etc. etc. etc. After all, if you can buy items and enjoy them, then yes, you can enjoy them in Seoul. It is that simple.\r\n\r\nGenerated text with bidirectionality over context tokens, and unidirectionality over generated tokens :\r\n\r\n> buy tons Free do hotel on you whichT Seoul, list and do you coffee non can many of you sit- shopping People you river boatou. and Koreans in long you into graduate train/ by teacher college c people there ho sister formst to in city plain daughtera kayak cat.: years World home. still home later N will plan yearses street his looks a marriage different by tell it too stunning out to what ice by person a, people a bag.\r\n\r\n**Why is it that bad ?**",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @Colanim,\r\n\r\nThanks for the issue - sorry that we overlooked it! \r\n\r\nI will take a closer look into this. GPT2 uses key value state caching whet doing generation. Not sure whether XLNet does something similar. Will see if it'd be easy to add or not!",
"Sorry to answer that late. `XLNet` is known to be rather slow for text generation due to the needed padding to get it started. \r\n\r\n`XLNet` uses `mems` which is similar to `past` to have a longer memory span.\r\nSince the quality seems to degrade much when applying your suggestion, I don't think trying to add a `XLNet` enhancement for generation is of high priority at the moment...Sorry! But feel free to open a PR if you have a good solution :-) "
] | 1,563 | 1,591 | 1,591 | CONTRIBUTOR | null | I compared the inference time for generating text with the given [example script](https://github.com/huggingface/pytorch-pretrained-BERT/blob/xlnet/examples/run_generation.py) between XLNet & GPT-2, on CPU.
To generate 100 tokens, XLNet takes **3m22s** while GPT-2 takes **14s**. And it grows exponentially : for 500 tokens, XLNet takes **51m46s** while GPT-2 takes **2m52s**.
Due to bidirectionality of the model, each tokens' attention should be computed again to relate to the newly generated token.
To reduce the time needed, we should allow the model to use unidirectional attention over generated tokens (even if it means that some older tokens will not see some newly generated tokens, i.e. reducing bidirectionality).
---
According to the [original post of Aman Rusia](https://medium.com/@amanrusia/xlnet-speaks-comparison-to-gpt-2-ea1a4e9ba39e), doing so greatly decrease the quality of text.
However the post was updated as it was a mistake in the code. It seems fine to generate tokens with unidirectional attention. Please refer to [this issue](https://github.com/rusiaaman/XLnet-gen/issues/1#issuecomment-511508957) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/789/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/788 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/788/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/788/comments | https://api.github.com/repos/huggingface/transformers/issues/788/events | https://github.com/huggingface/transformers/issues/788 | 468,140,681 | MDU6SXNzdWU0NjgxNDA2ODE= | 788 | bert-large config file | {
"login": "desperadoola",
"id": 30496727,
"node_id": "MDQ6VXNlcjMwNDk2NzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/30496727?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/desperadoola",
"html_url": "https://github.com/desperadoola",
"followers_url": "https://api.github.com/users/desperadoola/followers",
"following_url": "https://api.github.com/users/desperadoola/following{/other_user}",
"gists_url": "https://api.github.com/users/desperadoola/gists{/gist_id}",
"starred_url": "https://api.github.com/users/desperadoola/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/desperadoola/subscriptions",
"organizations_url": "https://api.github.com/users/desperadoola/orgs",
"repos_url": "https://api.github.com/users/desperadoola/repos",
"events_url": "https://api.github.com/users/desperadoola/events{/privacy}",
"received_events_url": "https://api.github.com/users/desperadoola/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,568 | 1,568 | NONE | null | Here is the config file I download from path in modelling for bert large,
{
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"max_position_embeddings": 512,
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"type_vocab_size": 2,
"vocab_size": 28996
}
I am wondering what are the following params for? I can't find them in the modelling file and the checkpoint I download.
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform", | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/788/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/788/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/787 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/787/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/787/comments | https://api.github.com/repos/huggingface/transformers/issues/787/events | https://github.com/huggingface/transformers/issues/787 | 467,904,319 | MDU6SXNzdWU0Njc5MDQzMTk= | 787 | How to use Bert QA model for predictions? | {
"login": "Swathygsb",
"id": 23665054,
"node_id": "MDQ6VXNlcjIzNjY1MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/23665054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Swathygsb",
"html_url": "https://github.com/Swathygsb",
"followers_url": "https://api.github.com/users/Swathygsb/followers",
"following_url": "https://api.github.com/users/Swathygsb/following{/other_user}",
"gists_url": "https://api.github.com/users/Swathygsb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Swathygsb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Swathygsb/subscriptions",
"organizations_url": "https://api.github.com/users/Swathygsb/orgs",
"repos_url": "https://api.github.com/users/Swathygsb/repos",
"events_url": "https://api.github.com/users/Swathygsb/events{/privacy}",
"received_events_url": "https://api.github.com/users/Swathygsb/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can write your own code like the prediction phase [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/78462aad6113d50063d8251e27dbaadb7f44fbf0/examples/run_squad.py#L345) ",
"@Swathygsb have you figured it out? I have the same use case as you and I'm struggling to understand the source code. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | Hi,
Can you give sample codes for how to use Bert QA model for predicting an answer given a text corpus and a question? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/787/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/787/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/786 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/786/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/786/comments | https://api.github.com/repos/huggingface/transformers/issues/786/events | https://github.com/huggingface/transformers/pull/786 | 467,324,508 | MDExOlB1bGxSZXF1ZXN0Mjk2OTk4NDUy | 786 | New documentation for pytorch-transformers | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,566 | 1,563 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/786/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/786",
"html_url": "https://github.com/huggingface/transformers/pull/786",
"diff_url": "https://github.com/huggingface/transformers/pull/786.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/786.patch",
"merged_at": 1563012537000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/785 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/785/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/785/comments | https://api.github.com/repos/huggingface/transformers/issues/785/events | https://github.com/huggingface/transformers/issues/785 | 467,272,230 | MDU6SXNzdWU0NjcyNzIyMzA= | 785 | Implementation of 15% words masking would cause the drop of performance in short text | {
"login": "zhangsh950618",
"id": 39693134,
"node_id": "MDQ6VXNlcjM5NjkzMTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39693134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangsh950618",
"html_url": "https://github.com/zhangsh950618",
"followers_url": "https://api.github.com/users/zhangsh950618/followers",
"following_url": "https://api.github.com/users/zhangsh950618/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangsh950618/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangsh950618/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangsh950618/subscriptions",
"organizations_url": "https://api.github.com/users/zhangsh950618/orgs",
"repos_url": "https://api.github.com/users/zhangsh950618/repos",
"events_url": "https://api.github.com/users/zhangsh950618/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangsh950618/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | NONE | null | I found the same problem that the implementation is different from tensorflow. If we use the implementation of pytorch will produce two extreme case especially for short sentences like article title,usually 10-20 characters.
case 1. sentence with too much '[MASK]'
case 2. sentence with none '[MASK]'
both case1 and case2 would cause the drop of performance. case 1 make the model difficult to predict and case2 would not produce the loss.
Given a corpus with an average sentence length of 10. The implementation of tensorflow would generate 1 '[MASK]' for the sentences, but the implementation of pytorch would have :
0.85^10 = 0.19 to generate 0 '[MASK]'
0.15 * 0.85^9 * 10 =0.34 to generate 1 '[MASK]'
0.15^2 * 0.85^8 * 45 =0.27 to generate 2 '[MASK]'
0.15^2 * 0.85^7 * 120 =0.12 to generate 3 '[MAKS]'
...
If we roughly consider the sentence with 15% '[MASK]' is appropriate, only 1/2 '[MASK]' is useful for training models. So only 0.34 + 0.27 = 0.61 training case is useful.
And we found it is this is a very serious problem for short text. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/785/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/784 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/784/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/784/comments | https://api.github.com/repos/huggingface/transformers/issues/784/events | https://github.com/huggingface/transformers/issues/784 | 467,226,420 | MDU6SXNzdWU0NjcyMjY0MjA= | 784 | [bug] from_pretrained error with from_tf | {
"login": "shibing624",
"id": 10249622,
"node_id": "MDQ6VXNlcjEwMjQ5NjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/10249622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shibing624",
"html_url": "https://github.com/shibing624",
"followers_url": "https://api.github.com/users/shibing624/followers",
"following_url": "https://api.github.com/users/shibing624/following{/other_user}",
"gists_url": "https://api.github.com/users/shibing624/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shibing624/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shibing624/subscriptions",
"organizations_url": "https://api.github.com/users/shibing624/orgs",
"repos_url": "https://api.github.com/users/shibing624/repos",
"events_url": "https://api.github.com/users/shibing624/events{/privacy}",
"received_events_url": "https://api.github.com/users/shibing624/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yeah this is solved in the coming release"
] | 1,562 | 1,563 | 1,563 | NONE | null | https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L721 , the weights_path should be archive_file, and set from_tf to str is better to load finetuned model, like model name is model.ckpt-25000.meta. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/784/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/783 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/783/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/783/comments | https://api.github.com/repos/huggingface/transformers/issues/783/events | https://github.com/huggingface/transformers/issues/783 | 467,181,929 | MDU6SXNzdWU0NjcxODE5Mjk= | 783 | how to get the word vector from bert pretrain model ? | {
"login": "zhangyu68",
"id": 36838019,
"node_id": "MDQ6VXNlcjM2ODM4MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/36838019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangyu68",
"html_url": "https://github.com/zhangyu68",
"followers_url": "https://api.github.com/users/zhangyu68/followers",
"following_url": "https://api.github.com/users/zhangyu68/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangyu68/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangyu68/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangyu68/subscriptions",
"organizations_url": "https://api.github.com/users/zhangyu68/orgs",
"repos_url": "https://api.github.com/users/zhangyu68/repos",
"events_url": "https://api.github.com/users/zhangyu68/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangyu68/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This will be possible in the new release out soon.",
"I find a method that can get the words embeddings.Thank you all the same!\r\nself.model = BertModel.from_pretrained(config.bert_path)\r\nself.word_emb = self.model.embeddings"
] | 1,562 | 1,562 | 1,562 | NONE | null | Could you please help me?
I just want to get bert's word vector,but I only can get the encoder's result. How can I get the word vector before data inputing the encoder model ?
Thank you ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/783/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/782 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/782/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/782/comments | https://api.github.com/repos/huggingface/transformers/issues/782/events | https://github.com/huggingface/transformers/issues/782 | 467,175,431 | MDU6SXNzdWU0NjcxNzU0MzE= | 782 | Why the activation function is tanh in BertPooler | {
"login": "xinliweiyuan",
"id": 5919883,
"node_id": "MDQ6VXNlcjU5MTk4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5919883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinliweiyuan",
"html_url": "https://github.com/xinliweiyuan",
"followers_url": "https://api.github.com/users/xinliweiyuan/followers",
"following_url": "https://api.github.com/users/xinliweiyuan/following{/other_user}",
"gists_url": "https://api.github.com/users/xinliweiyuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinliweiyuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinliweiyuan/subscriptions",
"organizations_url": "https://api.github.com/users/xinliweiyuan/orgs",
"repos_url": "https://api.github.com/users/xinliweiyuan/repos",
"events_url": "https://api.github.com/users/xinliweiyuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinliweiyuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Because that's what Bert's authors do in the official TF code:\r\nhttps://github.com/google-research/bert/blob/bee6030e31e42a9394ac567da170a89a98d2062f/modeling.py#L231",
"Just wanted to point out for future reference the motivation has been answered by the original BERT authors in [[this GitHub issue]](https://github.com/google-research/bert/issues/43)."
] | 1,562 | 1,591 | 1,563 | NONE | null | I found the activation function in the BertPooler layer is tanh, but Bert never mentions that it uses the tanh. It says gelu activation function is applied in the paper.
So why there is a tanh here ? Waiting for some explanation. Thanks.
```
class BertPooler(nn.Module):
def __init__(self, config):
super(BertPooler, self).__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.activation = nn.Tanh()
def forward(self, hidden_states):
# We "pool" the model by simply taking the hidden state corresponding
# to the first token.
first_token_tensor = hidden_states[:, 0]
pooled_output = self.dense(first_token_tensor)
pooled_output = self.activation(pooled_output)
return pooled_output
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/782/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/782/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/781 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/781/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/781/comments | https://api.github.com/repos/huggingface/transformers/issues/781/events | https://github.com/huggingface/transformers/pull/781 | 467,128,858 | MDExOlB1bGxSZXF1ZXN0Mjk2ODQzOTgy | 781 | Clean up input embeddings resizing and weights tying | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have added a test suite that tests both the `tie_weights` function as well as the `resize_token_embeddings`",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=h1) Report\n> Merging [#781](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/50e62a4cb4d503e3559b88838b8cf9f745fef516?src=pr&el=desc) will **decrease** coverage by `0.23%`.\n> The diff coverage is `93.05%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## xlnet #781 +/- ##\n=========================================\n- Coverage 78.84% 78.6% -0.24% \n=========================================\n Files 35 34 -1 \n Lines 6092 6122 +30 \n=========================================\n+ Hits 4803 4812 +9 \n- Misses 1289 1310 +21\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxuZXQucHk=) | `78.79% <100%> (+0.44%)` | :arrow_up: |\n| [pytorch\\_transformers/tests/modeling\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxuZXRfdGVzdC5weQ==) | `95.86% <100%> (+0.08%)` | :arrow_up: |\n| [...rch\\_transformers/tests/modeling\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfdHJhbnNmb194bF90ZXN0LnB5) | `94.33% <100%> (+0.1%)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfeGxtLnB5) | `86.48% <100%> (+0.3%)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_bert.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfYmVydC5weQ==) | `87.69% <100%> (+0.17%)` | :arrow_up: |\n| [pytorch\\_transformers/tests/modeling\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZ3B0Ml90ZXN0LnB5) | `84.21% <66.66%> (-3.29%)` | :arrow_down: |\n| [pytorch\\_transformers/tests/modeling\\_openai\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfb3BlbmFpX3Rlc3QucHk=) | `84.21% <66.66%> (-0.79%)` | :arrow_down: |\n| [pytorch\\_transformers/tests/modeling\\_xlm\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxtX3Rlc3QucHk=) | `72.13% <75%> (ø)` | :arrow_up: |\n| [pytorch\\_transformers/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfZ3B0Mi5weQ==) | `75.62% <75%> (-5.39%)` | :arrow_down: |\n| [pytorch\\_transformers/modeling\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfdHJhbnNmb194bC5weQ==) | `57.32% <76.92%> (+0.32%)` | :arrow_up: |\n| ... and [14 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=footer). Last update [50e62a4...2918b7d](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/781?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,562 | 1,566 | 1,562 | MEMBER | null | Still need to add tests on these features | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/781/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/781",
"html_url": "https://github.com/huggingface/transformers/pull/781",
"diff_url": "https://github.com/huggingface/transformers/pull/781.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/781.patch",
"merged_at": 1562922626000
} |
https://api.github.com/repos/huggingface/transformers/issues/780 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/780/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/780/comments | https://api.github.com/repos/huggingface/transformers/issues/780/events | https://github.com/huggingface/transformers/issues/780 | 467,127,635 | MDU6SXNzdWU0NjcxMjc2MzU= | 780 | Fail to run finetune_on_pregenerated.py | {
"login": "allisonyw",
"id": 38665667,
"node_id": "MDQ6VXNlcjM4NjY1NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/38665667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/allisonyw",
"html_url": "https://github.com/allisonyw",
"followers_url": "https://api.github.com/users/allisonyw/followers",
"following_url": "https://api.github.com/users/allisonyw/following{/other_user}",
"gists_url": "https://api.github.com/users/allisonyw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/allisonyw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/allisonyw/subscriptions",
"organizations_url": "https://api.github.com/users/allisonyw/orgs",
"repos_url": "https://api.github.com/users/allisonyw/repos",
"events_url": "https://api.github.com/users/allisonyw/events{/privacy}",
"received_events_url": "https://api.github.com/users/allisonyw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | NONE | null | Hi,
I am fine tuning BERT for my own data set. Pregenerate training data was smooth but when I run finetune_on_pregenerated.py I got the following KeyError:
2019-07-11 22:53:04,151: ***** Running training *****
2019-07-11 22:53:04,151: Num examples = 35832
2019-07-11 22:53:04,151: Batch size = 32
2019-07-11 22:53:04,152: Num steps = 1119
2019-07-11 22:53:04,156: Loading training examples for epoch 0
Training examples: 0%| | 0/12078 [00:00<?, ?it/s]
Traceback (most recent call last):
File "finetune-hugging.py", line 348, in <module>
main()
File "finetune-hugging.py", line 297, in main
num_data_epochs=num_data_epochs, reduce_memory=args.reduce_memory)
File "finetune-hugging.py", line 105, in __init__
features = convert_example_to_features(example, tokenizer, seq_len)
File "finetune-hugging.py", line 43, in convert_example_to_features
input_ids = tokenizer.convert_tokens_to_ids(tokens)
File "/anaconda3/lib/python3.7/site-packages/pytorch_pretrained_bert/tokenization.py", line 121, in convert_tokens_to_ids
ids.append(self.vocab[token])
KeyError: 'Ad'
Out[21]: 256
I could really use some help from you guys. Many Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/780/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/779 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/779/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/779/comments | https://api.github.com/repos/huggingface/transformers/issues/779/events | https://github.com/huggingface/transformers/issues/779 | 467,084,782 | MDU6SXNzdWU0NjcwODQ3ODI= | 779 | Should close the SummaryWriter after using it | {
"login": "t-yaxli",
"id": 51250153,
"node_id": "MDQ6VXNlcjUxMjUwMTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/51250153?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/t-yaxli",
"html_url": "https://github.com/t-yaxli",
"followers_url": "https://api.github.com/users/t-yaxli/followers",
"following_url": "https://api.github.com/users/t-yaxli/following{/other_user}",
"gists_url": "https://api.github.com/users/t-yaxli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/t-yaxli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/t-yaxli/subscriptions",
"organizations_url": "https://api.github.com/users/t-yaxli/orgs",
"repos_url": "https://api.github.com/users/t-yaxli/repos",
"events_url": "https://api.github.com/users/t-yaxli/events{/privacy}",
"received_events_url": "https://api.github.com/users/t-yaxli/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh yes you are right, thanks it's fixed in the coming release."
] | 1,562 | 1,563 | 1,563 | NONE | null | Really appreciate the good work to implement this package!
I have tried to run the script: [run_glue.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/xlnet/examples/run_glue.py). When I test with this script, I found some of the scalars add into the SummaryWriter did not appears in TensorBoard. I think the cause of it is that the code leaves the SummaryWriter unclosed.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/779/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/778 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/778/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/778/comments | https://api.github.com/repos/huggingface/transformers/issues/778/events | https://github.com/huggingface/transformers/issues/778 | 466,912,862 | MDU6SXNzdWU0NjY5MTI4NjI= | 778 | Order of tokens in vocabulary of German model | {
"login": "schoennenbeck",
"id": 22288048,
"node_id": "MDQ6VXNlcjIyMjg4MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/22288048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schoennenbeck",
"html_url": "https://github.com/schoennenbeck",
"followers_url": "https://api.github.com/users/schoennenbeck/followers",
"following_url": "https://api.github.com/users/schoennenbeck/following{/other_user}",
"gists_url": "https://api.github.com/users/schoennenbeck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schoennenbeck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schoennenbeck/subscriptions",
"organizations_url": "https://api.github.com/users/schoennenbeck/orgs",
"repos_url": "https://api.github.com/users/schoennenbeck/repos",
"events_url": "https://api.github.com/users/schoennenbeck/events{/privacy}",
"received_events_url": "https://api.github.com/users/schoennenbeck/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@tholor and @timoeller may have some insights on these",
"Hey Sebastian, thanks for using the German Bert and digging into its details. The mysterious [unused3001] token was actually a special comma symbol to get rid of [UNK] tokens in some of our training texts. But we covered it up later on in the process + didn't anticipate it would be coming back to us : ) \r\n\r\nSo agreed, it is unwanted behaviour. \r\n\r\nThough TL;DR, we don't believe it is impacting either pretraining or downstream task training. \r\n\r\n\r\n\r\nApparently the token at index 0 (= [unused3001]) is used as padding token in TF Bert and pytorch Bert and the implementations do not really care if it is called [unused3001] [PAD] or [something].\r\n\r\nTo be a bit more intuitive we now swapped [unused3001] and [PAD] in the vocab files (pytorch and TF) only. \r\nMight be that future code somehow substitutes \"[PAD]\" input strings, which could cause problems.\r\n\r\nThe only thing that seems worrisome to us is that the embedding values for this padding token are non-zero (and change over the course of training) for our German Bert but also for Googles open sourced models. \r\n\r\nI tried to check how the padding embedding is handled in TF but am not familiar with debugging there...\r\n\r\nMaybe you want to dig more into it and raise an issue in the original TF Bert repro? Maybe this closed issue could be related to a rather unwanted padding embedding handling: https://github.com/google-research/bert/issues/113\r\n\r\nHope that helps, good luck!",
"Thanks a lot for the input @Timoeller (not quite sure who Christian is, though ;) ). I also got the feeling that it doesn't really impact downstream applications (NER in this case). At least not heavily.\r\n\r\nI'll do some more experiments and raise an issue with the original repo if it feel it is warranted. \r\n\r\nThanks again and all the best,\r\nSebastian\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,569 | 1,569 | NONE | null | The vocabulary for the German model ('bert-base-german-cased') has the token '[unused3001]' at position 0 (and the '[PAD]' token at position 1). However, the BertEmbedding has padding_idx=0 as usual.
Is this behaviour intended and if so would it be possible to get some insight into the rationale behind it?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/778/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/777 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/777/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/777/comments | https://api.github.com/repos/huggingface/transformers/issues/777/events | https://github.com/huggingface/transformers/pull/777 | 466,901,688 | MDExOlB1bGxSZXF1ZXN0Mjk2NjU3NzA1 | 777 | Working GLUE Example for XLNet (STS-B) | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,566 | 1,562 | MEMBER | null | Same as #776 but let's merge it on XLNet for the moment.
`run_glue.py` is now a single script able to train BERT, XLNet and XLM on all GLUE tasks.
Example for XLNet:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python ./examples/run_glue.py --do_train --task_name=sts-b --data_dir=${GLUE_DIR}/STS-B --output_dir=./proc_data/sts-b-110 --max_seq_length=128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --max_steps=1200 --model_name=xlnet-large-cased --overwrite_output_dir --overwrite_cache --warmup_steps=120
```
These hyper-parameters (same as the original one) give a pearsonr > 0.918.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/777/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/777",
"html_url": "https://github.com/huggingface/transformers/pull/777",
"diff_url": "https://github.com/huggingface/transformers/pull/777.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/777.patch",
"merged_at": 1562852627000
} |
https://api.github.com/repos/huggingface/transformers/issues/776 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/776/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/776/comments | https://api.github.com/repos/huggingface/transformers/issues/776/events | https://github.com/huggingface/transformers/pull/776 | 466,900,653 | MDExOlB1bGxSZXF1ZXN0Mjk2NjU2ODQ5 | 776 | Working GLUE Example for XLNet (STS-B) | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,566 | 1,562 | MEMBER | null | `run_glue.py` is now a single script able to train BERT, XLNet and XLM on all GLUE tasks.
Example for XLNet:
```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 python ./examples/run_glue.py --do_train --task_name=sts-b --data_dir=${GLUE_DIR}/STS-B --output_dir=./proc_data/sts-b-110 --max_seq_length=128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --max_steps=1200 --model_name=xlnet-large-cased --overwrite_output_dir --overwrite_cache --warmup_steps=120
```
These hyper-parameters (same as the original one) give a pearsonr > 0.918.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/776/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/776",
"html_url": "https://github.com/huggingface/transformers/pull/776",
"diff_url": "https://github.com/huggingface/transformers/pull/776.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/776.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/775 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/775/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/775/comments | https://api.github.com/repos/huggingface/transformers/issues/775/events | https://github.com/huggingface/transformers/pull/775 | 466,838,926 | MDExOlB1bGxSZXF1ZXN0Mjk2NjA2MDQx | 775 | fix typo in readme: extract_classif.py ==> extract_features.py | {
"login": "xinfeng1i",
"id": 2620608,
"node_id": "MDQ6VXNlcjI2MjA2MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2620608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinfeng1i",
"html_url": "https://github.com/xinfeng1i",
"followers_url": "https://api.github.com/users/xinfeng1i/followers",
"following_url": "https://api.github.com/users/xinfeng1i/following{/other_user}",
"gists_url": "https://api.github.com/users/xinfeng1i/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinfeng1i/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinfeng1i/subscriptions",
"organizations_url": "https://api.github.com/users/xinfeng1i/orgs",
"repos_url": "https://api.github.com/users/xinfeng1i/repos",
"events_url": "https://api.github.com/users/xinfeng1i/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinfeng1i/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=h1) Report\n> Merging [#775](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/78462aad6113d50063d8251e27dbaadb7f44fbf0?src=pr&el=desc) will **decrease** coverage by `0.1%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #775 +/- ##\n==========================================\n- Coverage 61.5% 61.39% -0.11% \n==========================================\n Files 19 19 \n Lines 4026 4025 -1 \n==========================================\n- Hits 2476 2471 -5 \n- Misses 1550 1554 +4\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/file\\_utils.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvZmlsZV91dGlscy5weQ==) | `66.44% <0%> (-1.35%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/optimization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uLnB5) | `73.52% <0%> (-0.74%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `81.91% <0%> (-0.54%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_transfo\\_xl.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX3RyYW5zZm9feGwucHk=) | `31.85% <0%> (-0.19%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=footer). Last update [78462aa...b72f755](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/775?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,562 | 1,563 | 1,563 | NONE | null | There seems to be a typo in the `README.md` file in Section `Example` (as shown in the following figure), I guess the script name should be `extract_features.py`.

| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/775/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/775",
"html_url": "https://github.com/huggingface/transformers/pull/775",
"diff_url": "https://github.com/huggingface/transformers/pull/775.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/775.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/774 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/774/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/774/comments | https://api.github.com/repos/huggingface/transformers/issues/774/events | https://github.com/huggingface/transformers/issues/774 | 466,632,277 | MDU6SXNzdWU0NjY2MzIyNzc= | 774 | XLNet text generation ability | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, I've now added the text padding trick of Aman (add some padding text to have longer inputs) and the quality is really a lot higher.\r\n\r\nWill merge the xlnet branch in master and release on Monday."
] | 1,562 | 1,563 | 1,563 | CONTRIBUTOR | null | Really appreciate the good work to implement XLNet !
I tried running the [XLNet text generation example](https://github.com/huggingface/pytorch-pretrained-BERT/blob/xlnet/examples/generation_xlnet.py)
But the generated text quality is really low.
Tricks used by https://github.com/rusiaaman/XLnet-gen needs to be added to the example to generate a good example.
---
... Or is it because the pytorch version of XLNet is not fully working yet ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/774/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/773 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/773/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/773/comments | https://api.github.com/repos/huggingface/transformers/issues/773/events | https://github.com/huggingface/transformers/pull/773 | 466,566,367 | MDExOlB1bGxSZXF1ZXN0Mjk2Mzg0NTM3 | 773 | Sphinx doc, XLM Checkpoints | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,562 | 1,562 | MEMBER | null | The updated sphinx documentation with additional pages, fixed links, an added a whole new HuggingFace-based theme.
Additionally, patched the XLM weights conversion script and added 5 new checkpoints for XLM. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/773/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/773",
"html_url": "https://github.com/huggingface/transformers/pull/773",
"diff_url": "https://github.com/huggingface/transformers/pull/773.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/773.patch",
"merged_at": 1562852800000
} |
https://api.github.com/repos/huggingface/transformers/issues/772 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/772/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/772/comments | https://api.github.com/repos/huggingface/transformers/issues/772/events | https://github.com/huggingface/transformers/issues/772 | 466,560,805 | MDU6SXNzdWU0NjY1NjA4MDU= | 772 | Cannot load 'bert-base-german-cased' | {
"login": "laifi",
"id": 34584914,
"node_id": "MDQ6VXNlcjM0NTg0OTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/34584914?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laifi",
"html_url": "https://github.com/laifi",
"followers_url": "https://api.github.com/users/laifi/followers",
"following_url": "https://api.github.com/users/laifi/following{/other_user}",
"gists_url": "https://api.github.com/users/laifi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laifi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laifi/subscriptions",
"organizations_url": "https://api.github.com/users/laifi/orgs",
"repos_url": "https://api.github.com/users/laifi/repos",
"events_url": "https://api.github.com/users/laifi/events{/privacy}",
"received_events_url": "https://api.github.com/users/laifi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @laifi, \r\n\r\nI cannot reproduce this issue. Are you sure that you run with the latest code from master branch? It looks suspicious to me that `tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')` doesn't find the model. \r\nCan you please check if you have [the according line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/78462aad6113d50063d8251e27dbaadb7f44fbf0/pytorch_pretrained_bert/tokenization.py#L37) in your PRETRAINED_VOCAB_ARCHIVE_MAP? \r\n\r\nFor your second approach with downloaded files: \r\n- be aware that model packaging changed lately from archives to individual files for vocab, model and config (see [here](https://github.com/huggingface/pytorch-pretrained-BERT/pull/688#issuecomment-502991015)). If you really want to download manually you should download the [.bin](https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-pytorch_model.bin), [bert_config.json](https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-config.json) and the [vocab file](https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt) to a folder called \"bert-base-german-cased\"\r\n- `from_pretrained` expects a model name or path not a .bin . You should try: BertTokenizer.from_pretrained('YOUR_PATH_TO/bert-base-german-cased') \r\n\r\nHope that helps!",
"Thank you @tholor , i installed the package with pip and i cannot find 'bert-german-cased' in PRETRAINED_VOCAB_ARCHIVE_MAP\r\nNow , i tried to reinstall the package from source and it's working .\r\n",
"@laifi I am keep getting the same error as the one that you got:\r\n\r\n> UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n\r\nI also tried to reinstall it, how did you fix it?\r\n",
"> @laifi I am keep getting the same error as the one that you got:\r\n> \r\n> > UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte\r\n> \r\n> I also tried to reinstall it, how did you fix it?\r\n\r\n@shaked571 , i have just uninstalled the pip package and installed it again from source (try to not keep any cache for the package).\r\n**PS: the issue is fixed in the last migration from pytorch-pretrained-bert to pytorch-transformers .**",
"Hi,\r\nI also run into the same issue when I try this piece of code in google colab. \r\ntokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')",
"Hi,\r\nI also have the same issue. Using \r\n```\r\nfrom transformers import AutoTokenizer\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-german-cased\")\r\n```\r\nsolves the problem for me"
] | 1,562 | 1,605 | 1,562 | NONE | null | `tokenizer = BertTokenizer.from_pretrained('bert-base-german-cased')`
**Output:**
> Model name 'bert-base-german-cased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'bert-base-german-cased' was a path or url but couldn't find any file associated to this path or url. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/772/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/771 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/771/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/771/comments | https://api.github.com/repos/huggingface/transformers/issues/771/events | https://github.com/huggingface/transformers/issues/771 | 466,475,280 | MDU6SXNzdWU0NjY0NzUyODA= | 771 | Performance dramatically drops down without training. | {
"login": "DariaD",
"id": 10706920,
"node_id": "MDQ6VXNlcjEwNzA2OTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/10706920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DariaD",
"html_url": "https://github.com/DariaD",
"followers_url": "https://api.github.com/users/DariaD/followers",
"following_url": "https://api.github.com/users/DariaD/following{/other_user}",
"gists_url": "https://api.github.com/users/DariaD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DariaD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DariaD/subscriptions",
"organizations_url": "https://api.github.com/users/DariaD/orgs",
"repos_url": "https://api.github.com/users/DariaD/repos",
"events_url": "https://api.github.com/users/DariaD/events{/privacy}",
"received_events_url": "https://api.github.com/users/DariaD/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you want to evaluate only, you have to set `--output_dir` to the path of your previously trained model. Otherwise, the script will use the original model."
] | 1,562 | 1,563 | 1,563 | NONE | null | I use run_classivier and run_squad as it is shown in README.
If I remove `--do_train` (I already tuned the model and just want to evaluate one more time or with different development set) I expect that result would be the same but performance drops down.
For example, SQuAD:
with training: `{"exact_match": 81.35288552507096, "f1": 88.49520505241821}`
without training: `{"exact_match": 0.21759697256385999, "f1": 7.391520686954715}`
I tried my own processor with binary classification (not up to date code though) and without training the value` 1 `only was predicted.
Thank you in advance for any comments. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/771/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/770 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/770/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/770/comments | https://api.github.com/repos/huggingface/transformers/issues/770/events | https://github.com/huggingface/transformers/issues/770 | 466,394,359 | MDU6SXNzdWU0NjYzOTQzNTk= | 770 | How can I load a fine-tuned model? | {
"login": "daz261",
"id": 35951484,
"node_id": "MDQ6VXNlcjM1OTUxNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/35951484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daz261",
"html_url": "https://github.com/daz261",
"followers_url": "https://api.github.com/users/daz261/followers",
"following_url": "https://api.github.com/users/daz261/following{/other_user}",
"gists_url": "https://api.github.com/users/daz261/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daz261/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daz261/subscriptions",
"organizations_url": "https://api.github.com/users/daz261/orgs",
"repos_url": "https://api.github.com/users/daz261/repos",
"events_url": "https://api.github.com/users/daz261/events{/privacy}",
"received_events_url": "https://api.github.com/users/daz261/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"you can use the path to the folder containing your fine-tuned model as `--bert_model`.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | NONE | null | I finetuned a new model by running pregenerate_training_data.py and finetune_on_pregenerated.py and the output is saved as pytorch_model.bin.
How do I load the model to run the regular run_classfier,py predictions? To which files do I have to add code? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/770/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/769 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/769/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/769/comments | https://api.github.com/repos/huggingface/transformers/issues/769/events | https://github.com/huggingface/transformers/issues/769 | 466,099,337 | MDU6SXNzdWU0NjYwOTkzMzc= | 769 | XLNet tensor at wrong device issuse | {
"login": "boy2000-007man",
"id": 4197489,
"node_id": "MDQ6VXNlcjQxOTc0ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4197489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/boy2000-007man",
"html_url": "https://github.com/boy2000-007man",
"followers_url": "https://api.github.com/users/boy2000-007man/followers",
"following_url": "https://api.github.com/users/boy2000-007man/following{/other_user}",
"gists_url": "https://api.github.com/users/boy2000-007man/gists{/gist_id}",
"starred_url": "https://api.github.com/users/boy2000-007man/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/boy2000-007man/subscriptions",
"organizations_url": "https://api.github.com/users/boy2000-007man/orgs",
"repos_url": "https://api.github.com/users/boy2000-007man/repos",
"events_url": "https://api.github.com/users/boy2000-007man/events{/privacy}",
"received_events_url": "https://api.github.com/users/boy2000-007man/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This model was WIP. Fixed now."
] | 1,562 | 1,563 | 1,563 | CONTRIBUTOR | null | ```bash
File "env.xlnet/lib/python3.6/site-packages/pytorch_transformers/modeling_xlnet.py", line 397, in rel_shift
x = torch.index_select(x, 1, torch.arange(klen))
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
```
I meet this issue when using `pytorch-transformers==0.7.0` with multi-gpus, is quickfixed by `x = torch.index_select(x, 1, torch.arange(klen).to(x.device))` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/769/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/768 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/768/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/768/comments | https://api.github.com/repos/huggingface/transformers/issues/768/events | https://github.com/huggingface/transformers/issues/768 | 465,861,420 | MDU6SXNzdWU0NjU4NjE0MjA= | 768 | GPT-2 language model decoding method | {
"login": "haozheji",
"id": 25786613,
"node_id": "MDQ6VXNlcjI1Nzg2NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/25786613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haozheji",
"html_url": "https://github.com/haozheji",
"followers_url": "https://api.github.com/users/haozheji/followers",
"following_url": "https://api.github.com/users/haozheji/following{/other_user}",
"gists_url": "https://api.github.com/users/haozheji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haozheji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haozheji/subscriptions",
"organizations_url": "https://api.github.com/users/haozheji/orgs",
"repos_url": "https://api.github.com/users/haozheji/repos",
"events_url": "https://api.github.com/users/haozheji/events{/privacy}",
"received_events_url": "https://api.github.com/users/haozheji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`run_gpt2` has top-K which is better than beam-search for high-entropy tasks like open-domain generation. The coming release example (currently on the xlnet branch to be merged with master on Monday) will have top-K and Nucleus sampling (see Holtzman et al. http://arxiv.org/abs/1904.09751)",
"Hi,\r\nIs it possible to include beam search decoding in ```run_generation.py``` ?",
"hope that beam search appears in run_generation.py",
"We'll add it.\r\ncc @rlouf ",
"@thomwolf I see that run_generation.py has disappeared and beam_search does not exist anymore, nor in transformers/generate. Where could we find the implementation of batch beam_search in this repo ?",
"You can’t... so far. We are reworking the API for greedy decoding and sampling, and will work on beam search afterwards."
] | 1,562 | 1,576 | 1,563 | CONTRIBUTOR | null | I am wondering what is the official decoding method when evaluating the language model? The doc says `run_gpt2.py` implement the beam-search. While to me, it seems it's still greedy search with sampling. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/768/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/767 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/767/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/767/comments | https://api.github.com/repos/huggingface/transformers/issues/767/events | https://github.com/huggingface/transformers/pull/767 | 465,828,120 | MDExOlB1bGxSZXF1ZXN0Mjk1NzkyNzM2 | 767 | Documentation | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,562 | 1,562 | MEMBER | null | Sphinx based documentation with Google style comments. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/767/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/767",
"html_url": "https://github.com/huggingface/transformers/pull/767",
"diff_url": "https://github.com/huggingface/transformers/pull/767.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/767.patch",
"merged_at": 1562684195000
} |
https://api.github.com/repos/huggingface/transformers/issues/766 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/766/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/766/comments | https://api.github.com/repos/huggingface/transformers/issues/766/events | https://github.com/huggingface/transformers/issues/766 | 465,778,432 | MDU6SXNzdWU0NjU3Nzg0MzI= | 766 | Fine tune Xlnet | {
"login": "AhmedBahaaElDinMohammed",
"id": 51789113,
"node_id": "MDQ6VXNlcjUxNzg5MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/51789113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AhmedBahaaElDinMohammed",
"html_url": "https://github.com/AhmedBahaaElDinMohammed",
"followers_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/followers",
"following_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/following{/other_user}",
"gists_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/subscriptions",
"organizations_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/orgs",
"repos_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/repos",
"events_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/events{/privacy}",
"received_events_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"As far as I know, the pytorch code of XLNet is not completely ready now. But you could find it in the branch `xlnet` and the classifier code is nearly ready in the file `example/run_xlnet_classifier.py`. I have successfully fine-tuned it on the SST-2 task (which belongs to GLUE) with following args:\r\n\r\n```shell\r\npython run_xlnet_classifier.py \\\r\n--data_dir ..\\glue_data\\SST-2 \\\r\n--task_name sst-2 \\\r\n--output_dir sst_model \\\r\n--do_train \\\r\n--do_eval \\\r\n--max_seq_length 128 \\\r\n--train_batch_size 64 \\\r\n--learning_rate 5e-6\r\n```",
"@SivilTaram Cant be fine tuned on external data ? is the tensorflow version ready ?",
"@AhmedBahaaElDinMohammed Sure you could fine-tune it on external data, which means you should process your data and construct train/validate `examples` as SST-2 does. You could see `example/utlis_glue.py` for more details to handle your external data :)\r\n\r\nThe tensorflow is ready, you could refer to the original repo for help. This repo is only for pytorch version, thanks.",
"> I have successfully fine-tuned it on the SST-2 task (which belongs to GLUE) \r\n\r\n@SivilTaram Would it be possible to fine-tune it on SQuAD 2.0? or alternatively, convert a fine-tuned model from the original repo/tensorflow?",
"@edanweis Not ready now. Please wait the author to complete the awesome work :) Or you could watch the updates of PR [here](https://github.com/huggingface/pytorch-pretrained-BERT/pull/711).",
"Has anyone tried fp16 for xlnet?\r\nI tried it and found that the memory was half, but it was slower than fp32(even when I used the same GPU memory). \r\nEnvironment: v100, cuda 10.0, torch 1.1\r\nThe environment is ok, because I tried bert + fp16 and it was much faster than fp32.\r\nI thought it is the problem of torch.einsum, but I am not that sure. \r\nGuys, do you have the same problem ?",
"@SivilTaram Following the latest release 0.6.2, I am trying to convert my tf checkpoints:\r\n\r\n```\r\nexport TRANSFO_XL_CHECKPOINT_PATH=home/edanweis/xlnet/model/squad\r\nexport TRANSFO_XL_CONFIG_PATH=home/edanweis/xlnet/model/squad\r\nexport FINETUNING_TASK=squad\r\n\r\n\r\npytorch_transformers xlnet \\\r\n $TRANSFO_XL_CHECKPOINT_PATH \\\r\n $TRANSFO_XL_CONFIG_PATH \\\r\n $PYTORCH_DUMP_OUTPUT \\\r\n $FINETUNING_TASK \\\r\n```\r\nBut getting `pytorch_transformers/__main__.py\", line 111, in main FINETUNING_TASK) UnboundLocalError: local variable 'FINETUNING_TASK' referenced before assignment`",
"@SivilTaram Did you try to finetune XLNet with the last code (release 1.0) using examples/run_glue.py? Everything works but accuracy didn't change and every time is around 0.50? It looks like it didn't train at all. I used the following script:\r\n\r\n```\r\nexport GLUE_DIR=/path/to/glue\r\n\r\npython ./examples/run_glue.py \\\r\n --model_type xlnet \\\r\n --model_name_or_path xlnet-large-cased \\\r\n --do_train \\\r\n --do_eval \\\r\n --evaluate_during_training \\\r\n --logging_steps 500 \\\r\n --save_steps 1000 \\\r\n --task_name=sst-2 \\\r\n --data_dir=${GLUE_DIR}/SST-2 \\\r\n --output_dir=./proc_data/sst-2 \\\r\n --max_seq_length=128 \\\r\n --per_gpu_eval_batch_size=8 \\\r\n --per_gpu_train_batch_size=8 \\\r\n --gradient_accumulation_steps=1 \\\r\n --max_steps=8000 \\\r\n --model_name=xlnet-large-cased \\\r\n --overwrite_output_dir \\\r\n --overwrite_cache \\\r\n --warmup_steps=120\r\n```",
"@avostryakov I do not yet. I guess you could explore if the loss decrease as expected? There should be loss logs, along with tensorboard logs.",
"@SivilTaram Evaluation loss isn't changed, training loss is increased. It looks like something wrong with the optimization process during training.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,569 | 1,569 | NONE | null | Can anybody guide me on how to fine tune xlnet for simple text classification task or any reference code because i am lost. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/766/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/765 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/765/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/765/comments | https://api.github.com/repos/huggingface/transformers/issues/765/events | https://github.com/huggingface/transformers/issues/765 | 465,511,861 | MDU6SXNzdWU0NjU1MTE4NjE= | 765 | Is is possible to fine-tune GPT2 on downstream tasks currently? | {
"login": "shizhediao",
"id": 18120087,
"node_id": "MDQ6VXNlcjE4MTIwMDg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18120087?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shizhediao",
"html_url": "https://github.com/shizhediao",
"followers_url": "https://api.github.com/users/shizhediao/followers",
"following_url": "https://api.github.com/users/shizhediao/following{/other_user}",
"gists_url": "https://api.github.com/users/shizhediao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shizhediao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shizhediao/subscriptions",
"organizations_url": "https://api.github.com/users/shizhediao/orgs",
"repos_url": "https://api.github.com/users/shizhediao/repos",
"events_url": "https://api.github.com/users/shizhediao/events{/privacy}",
"received_events_url": "https://api.github.com/users/shizhediao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes we could add this. You mean tasks like GLUE or SQuAD?",
"> Yes we could add this. You mean tasks like GLUE or SQuAD?\r\n\r\nYes! exactly!\r\nPlease add this, thanks!",
"@thomwolf Are you still working on the code to finetune the GPT2 language model (not classification task)? Thanks.",
"@experiencor @thomwolf also curious about GPT2 LM finetuning issue, thanks!",
"We'll add an example for fine-tuning the models (probably refactor the Bert's one at the same time) this month.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> We'll add an example for fine-tuning the models (probably refactor the Bert's one at the same time) this month.\r\n\r\nDid you add the example ? looking for an example of fine tuning gpt-2 for downstream tasks.",
"+1. Also interested."
] | 1,562 | 1,625 | 1,570 | NONE | null | Is is possible to fine-tune GPT2 on downstream tasks currently? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/765/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/764 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/764/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/764/comments | https://api.github.com/repos/huggingface/transformers/issues/764/events | https://github.com/huggingface/transformers/issues/764 | 465,337,246 | MDU6SXNzdWU0NjUzMzcyNDY= | 764 | Adding extra inputs when fine-tuning BERT | {
"login": "nadavborenstein",
"id": 15877500,
"node_id": "MDQ6VXNlcjE1ODc3NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/15877500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nadavborenstein",
"html_url": "https://github.com/nadavborenstein",
"followers_url": "https://api.github.com/users/nadavborenstein/followers",
"following_url": "https://api.github.com/users/nadavborenstein/following{/other_user}",
"gists_url": "https://api.github.com/users/nadavborenstein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nadavborenstein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nadavborenstein/subscriptions",
"organizations_url": "https://api.github.com/users/nadavborenstein/orgs",
"repos_url": "https://api.github.com/users/nadavborenstein/repos",
"events_url": "https://api.github.com/users/nadavborenstein/events{/privacy}",
"received_events_url": "https://api.github.com/users/nadavborenstein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You could try stacking a linear layer over-top of BERT that takes as input the BERT sequence representation + your features. You would have to fine-tune through all of BERT.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> I am trying to fine-tune BERT for a sequence classification task where in addition to the sequences, I have extra features such as the writer age, tags, etc. I want to use those extra features, and I was thinking about concatenating them to the input of the final linear layer.\r\n> Is there a way of doing such a thing? If not, what is the best way for integrating extra features in the fine-tuning process?\r\n\r\nDid you find a way to add extra features then fine-tuning BERT?"
] | 1,562 | 1,601 | 1,568 | NONE | null | I am trying to fine-tune BERT for a sequence classification task where in addition to the sequences, I have extra features such as the writer age, tags, etc. I want to use those extra features, and I was thinking about concatenating them to the input of the final linear layer.
Is there a way of doing such a thing? If not, what is the best way for integrating extra features in the fine-tuning process?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/764/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/763 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/763/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/763/comments | https://api.github.com/repos/huggingface/transformers/issues/763/events | https://github.com/huggingface/transformers/issues/763 | 465,149,081 | MDU6SXNzdWU0NjUxNDkwODE= | 763 | ''bert-large-uncased-whole-word-masking-finetuned-squad' CAN'T be reached. | {
"login": "sinboyxx",
"id": 32317326,
"node_id": "MDQ6VXNlcjMyMzE3MzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32317326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinboyxx",
"html_url": "https://github.com/sinboyxx",
"followers_url": "https://api.github.com/users/sinboyxx/followers",
"following_url": "https://api.github.com/users/sinboyxx/following{/other_user}",
"gists_url": "https://api.github.com/users/sinboyxx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinboyxx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinboyxx/subscriptions",
"organizations_url": "https://api.github.com/users/sinboyxx/orgs",
"repos_url": "https://api.github.com/users/sinboyxx/repos",
"events_url": "https://api.github.com/users/sinboyxx/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinboyxx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"also ran into this. I think they forgot to upload the file/make it public. You can find the vocab file on the original google repo\r\n\r\nhttps://github.com/google-research/bert",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | NONE | null | 'bert-large-uncased-whole-word-masking-finetuned-squad' can't be reached from the addr in tokenization.py:
https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/763/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/762 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/762/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/762/comments | https://api.github.com/repos/huggingface/transformers/issues/762/events | https://github.com/huggingface/transformers/issues/762 | 465,051,890 | MDU6SXNzdWU0NjUwNTE4OTA= | 762 | randrange() error when running pregenerate_training_data.py code in lm_finetuning | {
"login": "KavyaGujjala",
"id": 28920687,
"node_id": "MDQ6VXNlcjI4OTIwNjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/28920687?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KavyaGujjala",
"html_url": "https://github.com/KavyaGujjala",
"followers_url": "https://api.github.com/users/KavyaGujjala/followers",
"following_url": "https://api.github.com/users/KavyaGujjala/following{/other_user}",
"gists_url": "https://api.github.com/users/KavyaGujjala/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KavyaGujjala/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KavyaGujjala/subscriptions",
"organizations_url": "https://api.github.com/users/KavyaGujjala/orgs",
"repos_url": "https://api.github.com/users/KavyaGujjala/repos",
"events_url": "https://api.github.com/users/KavyaGujjala/events{/privacy}",
"received_events_url": "https://api.github.com/users/KavyaGujjala/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | NONE | null | Hi,
I am trying to run pregenerate_training_data.py code in lm_finetuning using a text file which has two documents ( each document has around 200 sentences )
I ran into this error:
```
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
Loading Dataset: 399 lines [00:00, 2214.07 lines/s]
Epoch: 0%| | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last): | 0/1 [00:00<?, ?it/s]
File "pregenerate_training_data.py", line 292, in <module>
main()
File "pregenerate_training_data.py", line 277, in main
vocab_list=vocab_list)
File "pregenerate_training_data.py", line 187, in create_instances_from_document
random_document = doc_database.sample_doc(current_idx=doc_idx, sentence_weighted=True)
File "pregenerate_training_data.py", line 52, in sample_doc
sentence_index = randint(rand_start, rand_end-1) % self.cumsum_max
File "/home/cloud/anaconda3/lib/python3.6/random.py", line 221, in randint
return self.randrange(a, b+1)
File "/home/cloud/anaconda3/lib/python3.6/random.py", line 199, in randrange
raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (198,198, 0)
```
command line prompt looks like
`python pregenerate_training_data.py --train_corpus=./ack_belief_training_testing/ack_belief_all_categories_data2.txt --bert_model=bert-base-uncased --do_lower_case --output_dir=./ack_belief_training_testing/pytorch_gen_data/ack_belief_all_categories_data2_train_data_3epochs/ --epochs_to_generate=3`
What can be the issue?
Can somebody help me through this?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/762/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/761 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/761/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/761/comments | https://api.github.com/repos/huggingface/transformers/issues/761/events | https://github.com/huggingface/transformers/issues/761 | 465,004,095 | MDU6SXNzdWU0NjUwMDQwOTU= | 761 | Help loading BioBERT weights | {
"login": "happypanda5",
"id": 48689790,
"node_id": "MDQ6VXNlcjQ4Njg5Nzkw",
"avatar_url": "https://avatars.githubusercontent.com/u/48689790?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/happypanda5",
"html_url": "https://github.com/happypanda5",
"followers_url": "https://api.github.com/users/happypanda5/followers",
"following_url": "https://api.github.com/users/happypanda5/following{/other_user}",
"gists_url": "https://api.github.com/users/happypanda5/gists{/gist_id}",
"starred_url": "https://api.github.com/users/happypanda5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/happypanda5/subscriptions",
"organizations_url": "https://api.github.com/users/happypanda5/orgs",
"repos_url": "https://api.github.com/users/happypanda5/repos",
"events_url": "https://api.github.com/users/happypanda5/events{/privacy}",
"received_events_url": "https://api.github.com/users/happypanda5/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Not really providing a solution here, but have you considered https://github.com/allenai/scibert instead?\r\nAllenAI provides PyTorch weights, and through tests they claim their model is superior https://arxiv.org/pdf/1903.10676.pdf on their suite of tasks. For that and for ease of use, it may be a valid alternative.",
"Hello, I am trying to figure out how to load the SciBert weights. I see that you can use\r\n\r\n```\r\n# Simple serialization for models and tokenizers\r\nmodel.save_pretrained('./directory/to/save/') # save\r\nmodel = model_class.from_pretrained('./directory/to/save/') # re-load\r\ntokenizer.save_pretrained('./directory/to/save/') # save\r\n```\r\n\r\nSo my guess is to download them from here\r\nhttps://github.com/allenai/scibert#pytorch-models\r\n\r\nUntar them, then point to that directory\r\n\r\n```\r\nmodel = model_class.from_pretrained('DIRECTORY/TO/DOWNLOADED/UNZIPPED/SCIBERT/Pytorch.bin') \r\n```\r\n\r\nUnless there is more to that, the part I am confused about is that SciBert also has it's own vocab and 'vocab.txt' file. I am wondering how to point to that file, and not the default one. \r\n\r\nEdit\r\n\r\nFound the answer\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/issues/69#issuecomment-443215315\r\n\r\nyou can just do a direct path to it",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,570 | 1,570 | NONE | null | I have completed the following:
**1. Downloaded pretrained BioBERT weights from their current release**
**2. Convert TensorFlow checkpoints into Pytorch weights bin file using the following code**
import os
os.system( ' pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \
"/content/biobert_v1.1_pubmed/model.ckpt.index" \
"/content/biobert_v1.1_pubmed/bert_config.json" \
"/content/biobert_pytorch.bin" ' )
`
**3. I then tried to test whether I can load these weights. In order to do so, I tried the following code**
state_dict = torch.load( "/content/biobert_pytorch.bin" )
model.load_state_dict(state_dict)
**but I get the error**
> IncompatibleKeys(missing_keys=[], unexpected_keys=[])
***Please guide me*** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/761/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/760 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/760/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/760/comments | https://api.github.com/repos/huggingface/transformers/issues/760/events | https://github.com/huggingface/transformers/issues/760 | 464,768,544 | MDU6SXNzdWU0NjQ3Njg1NDQ= | 760 | Simple LM finetuning falls with RunTime Error: CUDA out of memory | {
"login": "MNCTTY",
"id": 37251686,
"node_id": "MDQ6VXNlcjM3MjUxNjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/37251686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MNCTTY",
"html_url": "https://github.com/MNCTTY",
"followers_url": "https://api.github.com/users/MNCTTY/followers",
"following_url": "https://api.github.com/users/MNCTTY/following{/other_user}",
"gists_url": "https://api.github.com/users/MNCTTY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MNCTTY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MNCTTY/subscriptions",
"organizations_url": "https://api.github.com/users/MNCTTY/orgs",
"repos_url": "https://api.github.com/users/MNCTTY/repos",
"events_url": "https://api.github.com/users/MNCTTY/events{/privacy}",
"received_events_url": "https://api.github.com/users/MNCTTY/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"How much memory does your GPU have. You can check this by running `nvidia-smi`.",
"I also has same phenomena. Also, the learning time become slower and much GPU consumption occur, both of which I think is natural, regarding parameters BERT has.\r\n\r\nThe substitutional way is that, no fine-tuning and dump. \r\nI mean, feed your sequence to Bert and dump your layers. In the training process of your task, simply load sequence with your dumped result of Bert.",
"Try `--reduce_memory `, it worked on mine with base multilingual uncased BERT with batch size= 4 on my single 2080Ti.\r\n\r\n\r\n```\r\npython3 finetune_on_pregenerated.py \r\n--pregenerated_data training/ \r\n--bert_model bert-base-multilingual-uncased \r\n--do_lower_case \r\n--output_dir finetuned_lm/ \r\n--epochs 3 \r\n--reduce_memory \r\n--train_batch_size 4\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,569 | 1,569 | NONE | null | I tried to run simple_lm_finetuning.py on my own data with multi lingual uncased model, and the script breaks down with error 'CUDA out of memory'. Can anyone say what should I do in this situation?
I've already decreased batch size from 32 to 2, but even then I get this error.

Offtop: does anybody know, if I can use my own pretrained model in this script instead of one of listed ones? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/760/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/759 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/759/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/759/comments | https://api.github.com/repos/huggingface/transformers/issues/759/events | https://github.com/huggingface/transformers/pull/759 | 464,576,856 | MDExOlB1bGxSZXF1ZXN0Mjk0ODI0NzIw | 759 | Release 0.7: pytorch-pretrained-bert => pytorch-transformers | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,566 | 1,562 | MEMBER | null | Name change: `pytorch-pretrained-bert` => `pytorch-transformers`
Standardize tokenization + tests
Refactor examples and add tests for examples as well | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/759/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/759",
"html_url": "https://github.com/huggingface/transformers/pull/759",
"diff_url": "https://github.com/huggingface/transformers/pull/759.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/759.patch",
"merged_at": 1562684764000
} |
https://api.github.com/repos/huggingface/transformers/issues/758 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/758/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/758/comments | https://api.github.com/repos/huggingface/transformers/issues/758/events | https://github.com/huggingface/transformers/pull/758 | 464,313,727 | MDExOlB1bGxSZXF1ZXN0Mjk0NjIwMTcx | 758 | Release 0.7 - Add doc | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,562 | 1,562 | MEMBER | null | Like #757 but let's point on the `xlnet` branch for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/758/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/758",
"html_url": "https://github.com/huggingface/transformers/pull/758",
"diff_url": "https://github.com/huggingface/transformers/pull/758.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/758.patch",
"merged_at": 1562318524000
} |
https://api.github.com/repos/huggingface/transformers/issues/757 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/757/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/757/comments | https://api.github.com/repos/huggingface/transformers/issues/757/events | https://github.com/huggingface/transformers/pull/757 | 464,312,700 | MDExOlB1bGxSZXF1ZXN0Mjk0NjE5MzM1 | 757 | Release 0.7 - Add a real doc | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,562 | 1,562 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/757/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/757",
"html_url": "https://github.com/huggingface/transformers/pull/757",
"diff_url": "https://github.com/huggingface/transformers/pull/757.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/757.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/756 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/756/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/756/comments | https://api.github.com/repos/huggingface/transformers/issues/756/events | https://github.com/huggingface/transformers/issues/756 | 464,282,735 | MDU6SXNzdWU0NjQyODI3MzU= | 756 | Invalid Syntax Error trying to run pregenerate_training_data.py | {
"login": "MNCTTY",
"id": 37251686,
"node_id": "MDQ6VXNlcjM3MjUxNjg2",
"avatar_url": "https://avatars.githubusercontent.com/u/37251686?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MNCTTY",
"html_url": "https://github.com/MNCTTY",
"followers_url": "https://api.github.com/users/MNCTTY/followers",
"following_url": "https://api.github.com/users/MNCTTY/following{/other_user}",
"gists_url": "https://api.github.com/users/MNCTTY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MNCTTY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MNCTTY/subscriptions",
"organizations_url": "https://api.github.com/users/MNCTTY/orgs",
"repos_url": "https://api.github.com/users/MNCTTY/repos",
"events_url": "https://api.github.com/users/MNCTTY/events{/privacy}",
"received_events_url": "https://api.github.com/users/MNCTTY/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Your snippet is too short to see what type of error there is, can you extract a larger one?",
"Hi \r\nyes\r\n\r\n",
"Pull the latest changes from master and report if that helps.",
"it has been disappeared in pregenerate script, and appeared in finetuning on pregenerated data script\r\n\r\n",
"You are getting these errors because you are using Python 3.5 and the code is making use of f-strings which are introduced in Python 3.6. You could try using Python 3.6 or change the source code replacing f-string with str.format syntax.\r\n\r\n@thomwolf The readme says repository supports Python 3.5+. Does that mean Python 3.5 is supported as well? If yes, I think we should change f-strings for the format syntax. \r\n",
"Well, only the library code supports Python 3.5+, I don't check the examples which are mostly contributed by the community.\r\nIf you want to fix Python 3.5 support for the examples I'm happy to welcome a PR.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | NONE | null | I'm getting such error, can't understand what's wrong

Plus, is it possible to further fine tune once fine tuned model, that appears after simple finetuning .py in corresponding folder (pytorch_model.bin) ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/756/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/755 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/755/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/755/comments | https://api.github.com/repos/huggingface/transformers/issues/755/events | https://github.com/huggingface/transformers/pull/755 | 463,973,000 | MDExOlB1bGxSZXF1ZXN0Mjk0MzQ2NzA5 | 755 | TorchScript trace comparison with different sizes | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,562 | 1,566 | 1,565 | MEMBER | null | Adds a test to compare TorchScript traces with different batch sizes and sequences lengths. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/755/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/755",
"html_url": "https://github.com/huggingface/transformers/pull/755",
"diff_url": "https://github.com/huggingface/transformers/pull/755.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/755.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/754 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/754/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/754/comments | https://api.github.com/repos/huggingface/transformers/issues/754/events | https://github.com/huggingface/transformers/issues/754 | 463,967,141 | MDU6SXNzdWU0NjM5NjcxNDE= | 754 | Get Attention Values for Pretrained Model | {
"login": "Sparkier",
"id": 5690524,
"node_id": "MDQ6VXNlcjU2OTA1MjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5690524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparkier",
"html_url": "https://github.com/Sparkier",
"followers_url": "https://api.github.com/users/Sparkier/followers",
"following_url": "https://api.github.com/users/Sparkier/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparkier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparkier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparkier/subscriptions",
"organizations_url": "https://api.github.com/users/Sparkier/orgs",
"repos_url": "https://api.github.com/users/Sparkier/repos",
"events_url": "https://api.github.com/users/Sparkier/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparkier/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You need to install the master version (not with pip or conda) : \r\n```\r\ngit clone https://github.com/huggingface/pytorch-pretrained-BERT.git\r\ncd pytorch-pretrained-BERT\r\npython setup.py install\r\n```\r\n\r\nThen you can use it like this : \r\n```\r\nmodel = BertModel.from_pretrained('bert-base-uncased',\r\n output_attentions=True,\r\n keep_multihead_output=True)\r\nmodel.eval() # turn off dropout layers\r\nattn = model(tokens)[0]\r\n```\r\n\r\nTell me if I'm misinterpreting your problem",
"Thank you a lot for the help, I didn't expect this to only work on the current release. \r\n\r\nHowever, I think with this I found a problem in the BERT encoder module:\r\n\r\n```\r\ndef forward(self, hidden_states, attention_mask, output_all_encoded_layers=True, head_mask=None):\r\n all_encoder_layers = []\r\n all_attentions = []\r\n for i, layer_module in enumerate(self.layer):\r\n hidden_states = layer_module(hidden_states, attention_mask, head_mask[i])\r\n```\r\nThe forward function by default gets `None` for the `head_mask` parameter. Then, however, it indexes it, which causes an error. I think it would be nice to handle this case.",
"Hi. I want to do something similar but with the **BertForQuestionAnswering** model.\r\n\r\nThe BertModel is the general BERT model that is used to classify whether a sentence is the next sentence or not. I want to get the attention values for QuestionAnswering while I pass a new paragraph and a question as inputs. I want to use the **BertForQuestionAnswering** model (which is pretrained on SQuAD if I am not wrong) and get the self-attention values on the question words. Is it possible to achieve this in a similar way as mentioned above?\r\n\r\n**NOTE:** I know the above method gives attention values of the pre-trained model. I want to get attention values of the model when I feed a new input question to the model. Something similar to what can be done using [BertViz](https://github.com/jessevig/bertviz) (although I do not want to visualize attention, just want to get the values).\r\n\r\nThanks.",
"Hi, this will be in the next release (release date sometime next week).\r\nThere will be attention/hidden-state output options for all the models.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | NONE | null | When using BertModel.from_pretrained, I am not able to have it also return the attention layers. Why does that not word? Am I doing something wrong? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/754/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/753 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/753/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/753/comments | https://api.github.com/repos/huggingface/transformers/issues/753/events | https://github.com/huggingface/transformers/issues/753 | 463,959,301 | MDU6SXNzdWU0NjM5NTkzMDE= | 753 | `bert-base-uncased` works for CoLA, `bert-large-uncased` always predicts one class | {
"login": "rococode",
"id": 32279130,
"node_id": "MDQ6VXNlcjMyMjc5MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32279130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rococode",
"html_url": "https://github.com/rococode",
"followers_url": "https://api.github.com/users/rococode/followers",
"following_url": "https://api.github.com/users/rococode/following{/other_user}",
"gists_url": "https://api.github.com/users/rococode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rococode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rococode/subscriptions",
"organizations_url": "https://api.github.com/users/rococode/orgs",
"repos_url": "https://api.github.com/users/rococode/repos",
"events_url": "https://api.github.com/users/rococode/events{/privacy}",
"received_events_url": "https://api.github.com/users/rococode/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,567 | 1,567 | CONTRIBUTOR | null | I'm having an issue with CoLA where finetuning off of bert-large results in a model that only predicts one class.
I make one change in configs to train the large model - I set `train_batch_size` to `16` for `bert-large-uncased`.
These are the two training commands I use (missing do_lowercase, I know, but it's forced anyways):
```
python ./run_classifier.py \
--task_name CoLA \
--do_train \
--data_dir ./data/cola/ \
--bert_model bert-large-uncased \
--output_dir ./out/cola-finetune-large-uncased/
```
```
python ./run_classifier.py \
--task_name CoLA \
--do_train \
--data_dir ./data/cola/ \
--bert_model bert-base-uncased \
--output_dir ./out/cola-finetune-base-uncased/
```
---
Now here are the results I get with base vs large models:
Here are my results evaluating with plain bert-large-uncased
```
eval_loss = 0.6977422047745098
mcc = -0.05490997894843018
```
Here are the results with bert-base-uncased
```
eval_loss = 1.013142795273752
mcc = 0.02904813156816523
```
Now here are the results with fine-tuning on bert-base-uncased:
```
eval_loss = 0.590644522203189
mcc = 0.5313406823271718
```
Pretty much reflects what the paper says, nice. But when I do the exact same process, training on bert-large-uncased and with a slightly smaller batch size (16 instead of 32) b/c of GPU memory limitations, I get these results:
```
eval_loss = 0.61876411058686
mcc = 0.0
```
Just to be clear, in these examples I only change the `bert_model` flag from `bert-large-uncased` to `bert-base-uncased` and change the batch size when training, no other changes at all.
I feel I must be doing something wrong. I'm using `run_classifier.py` from this repo. Any ideas what could be the problem?
I've read there's some instability with BERT-Large for small datasets such as CoLA, but surely it doesn't degenerate this much? If I understand correctly mcc = 0 indicates random guessing... Or is that actually the case, and I just need to run it more times and cross my fingers for a non-degenerate run? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/753/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/753/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/752 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/752/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/752/comments | https://api.github.com/repos/huggingface/transformers/issues/752/events | https://github.com/huggingface/transformers/issues/752 | 463,578,039 | MDU6SXNzdWU0NjM1NzgwMzk= | 752 | how to set the init learning rate when use bertAdam? | {
"login": "mmxuan18",
"id": 6283983,
"node_id": "MDQ6VXNlcjYyODM5ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6283983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmxuan18",
"html_url": "https://github.com/mmxuan18",
"followers_url": "https://api.github.com/users/mmxuan18/followers",
"following_url": "https://api.github.com/users/mmxuan18/following{/other_user}",
"gists_url": "https://api.github.com/users/mmxuan18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmxuan18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmxuan18/subscriptions",
"organizations_url": "https://api.github.com/users/mmxuan18/orgs",
"repos_url": "https://api.github.com/users/mmxuan18/repos",
"events_url": "https://api.github.com/users/mmxuan18/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmxuan18/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,567 | 1,567 | NONE | null | i set the BertAdam learning rate as the default value of args (3e-5), and i step in to the BertAdam step by step , and print lr_scheduled see that the acturly lr is very small over all the training process (between <0 ~ 1> * 3e-5), this cause the loss decrease very slow, when i set the init learning rate as 0.1, the loss decrease much more fast, so what's the right way to set the learning rate for BertAdam param? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/752/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/751 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/751/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/751/comments | https://api.github.com/repos/huggingface/transformers/issues/751/events | https://github.com/huggingface/transformers/issues/751 | 463,490,748 | MDU6SXNzdWU0NjM0OTA3NDg= | 751 | Slower and more memory hungry than the TensorFlow BERT? | {
"login": "nuwapi",
"id": 26151903,
"node_id": "MDQ6VXNlcjI2MTUxOTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/26151903?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nuwapi",
"html_url": "https://github.com/nuwapi",
"followers_url": "https://api.github.com/users/nuwapi/followers",
"following_url": "https://api.github.com/users/nuwapi/following{/other_user}",
"gists_url": "https://api.github.com/users/nuwapi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nuwapi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nuwapi/subscriptions",
"organizations_url": "https://api.github.com/users/nuwapi/orgs",
"repos_url": "https://api.github.com/users/nuwapi/repos",
"events_url": "https://api.github.com/users/nuwapi/events{/privacy}",
"received_events_url": "https://api.github.com/users/nuwapi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, this library is not made for training a model from scratch.\r\n\r\nYou should use one of the libraries I referred to here: https://github.com/huggingface/pytorch-pretrained-BERT/issues/543#issuecomment-491207121\r\n\r\nI might give it a look one day but not in the short-term.",
"@thomwolf Thank you so much for the info! :)\r\n\r\nJust to share, I quickly did a benchmark of XLM (this one fits my needs the most out of your three recommendations). \r\n\r\n**Sentences/s (for the specs I mentioned above):**\r\n\r\nBatch size | Official TF BERT | HuggingFace PyTorch BERT | XLM PyTorch BERT\r\n-- | -- | -- | --\r\n128 over 1 GPU | 610 | 288 | 575\r\n250 over 1 GPU | 647 | OOM | 625\r\n500 over 1 GPU | 665 | OOM | 650\r\n700 over 1 GPU | N/A | OOM | OOM\r\n900 over 1 GPU | 667 | OOM | OOM\r\n1000 over 1 GPU | OOM | OOM | OOM\r\n128 over 4 GPUs | 889 (1.5x) | 779 (2.7x) | N/A\r\n512 over 4 GPUs | 1522 (2.3x) | 1018 (3.?x) | N/A\r\n1000 over 4 GPUs | 1798 (2.?x) | OOM | N/A\r\n2000 over 4 GPUs | 1946 (2.?x) | OOM | N/A\r\n3600 over 4 GPUs | 1991 (3.0x) | OOM | N/A\r\n4000 over 4 GPUs | OOM | OOM | N/A\r\n\r\nNote: Only spent 2 hours on XLM, not sure if I set the vocab to be exactly the same size as the others, but they should be in the same ballpark.\r\n\r\nI haven't got a chance to benchmark the multi-GPU XLM. But in general, it looks like:\r\n1. The TensorFlow implementation uses memory more efficiently.\r\n2. PyTorch's multi-GPU scaling seems better.\r\n3. PyTorch itself is not slower than TF.\r\n\r\nn",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @thomwolf , \r\nI was trying to fine-tune pytorch-transformers's gpt2 (124M) on a V100 16GB GPU. But I am not able to accommodate more than the batch_size of 2. I am using seq-length of 1024 tokens. \r\n\r\nThis might be evident from above comments but I am new to training NNs so wanted to confirm if fine tuning would also cause OOM as in training from scratch? If so, then is only option available to finetune gpt2 is to use original tensorfolow implementation?\r\n\r\nThanks",
"Hi @SKRohit, with the GPT-2 model you can either fine-tune it with a batch size of 4 and a sequence of 512 tokens, or a batch size of 2 and a sequence of 1024 tokens, like what you've tried. We have had good results with a batch size of 4 and a sequence of 512 in our experiments.\r\n\r\nIf you want a bigger batch size, you can set up gradient accumulation, which would allow you to put larger to much larger batch sizes. You can find an example of gradient accumulation applied to fine-tuning in our [language model fine-tuning example](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_lm_finetuning.py).",
"Yes, @LysandreJik I am using gradient accumulation. I found max possible batch_size = 2 to be too small given this [comment](https://github.com/openai/gpt-2/issues/150#issuecomment-529153176) so asked to make sure there is no error in my code or any issue with my gcloud gpu. \r\nAlso, have you finetuned gpt2 architectures using mixed_precision (mp) training? Did you find any difference in performance of mp trained gpt2 in comparison to without mp?\r\nAnd I am referring to fine-tuning script provided in `pytorch_transformers` repo 👍 .\r\n\r\nThanks. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"mark",
"@thomwolf What is the bottleneck in HuggingFace transformers pretraining comparing to Tensorflow and other PyTorch implementations?",
"I also find the Transformers library to be more memory hungry. It seems to be even slower with Pytorch than TF, too.\r\n\r\nOn the flip side, it is really easy to use. I guess if you have big datasets and 2x slower is critically insufficient, it's not a good option. But if the difference is just half a day or less, it may not be that bad."
] | 1,562 | 1,615 | 1,573 | NONE | null | Hi pytorch-pretrained-BERT developers,
I have been using TensorFlow BERT since it came out, recently I wanted to switch to PyTorch because it is a great library. For this, I did a bunch of tests to compare training specs between Google's TF BERT and your implementation. To my surprise, this is a lot slower and can only afford small batch size before OOM error. I really want to know if this is a correct observation because I was really hoping to transition to PyTorch.
Here is my setup:
1. Custom size of 3 layer by 320 hidden dimension.
2. English uncased vocab.
3. Sequence length is set to be constant 125.
4. Running on Tesla P40.
5. Running finetune_on_pregenerated.py
6. I changed finetune_on_pregenerated.py a little to just initialize a blank model of my size.
Speed difference:
* TensorFlow: 809 sentences/s on 1 GPU.
* TensorFlow: 2350 sentences/s on 4 GPUs.
* PyTorch: 275 sentences/s on 1 GPU.
* PyTorch: 991 sentences/s on 4 GPUs.
Memory:
* My P40 has 22GB memory.
* TensorFlow can run batch size of 1000 or more (didn't probe upper limit).
* PyTorch is OOM for batch size 250 or above. OK with 125.
* I ran 30 epochs on a test data set of only 17MB. It shouldn't be a data loading problem.
I want to know if there is anything that I could have done wrong?
Thank you very much!
n | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/751/reactions",
"total_count": 13,
"+1": 13,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/751/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/750 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/750/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/750/comments | https://api.github.com/repos/huggingface/transformers/issues/750/events | https://github.com/huggingface/transformers/issues/750 | 463,425,410 | MDU6SXNzdWU0NjM0MjU0MTA= | 750 | Incorrect training loss scaling factor in examples/run_classifier.py? | {
"login": "ethanjperez",
"id": 6402205,
"node_id": "MDQ6VXNlcjY0MDIyMDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ethanjperez",
"html_url": "https://github.com/ethanjperez",
"followers_url": "https://api.github.com/users/ethanjperez/followers",
"following_url": "https://api.github.com/users/ethanjperez/following{/other_user}",
"gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions",
"organizations_url": "https://api.github.com/users/ethanjperez/orgs",
"repos_url": "https://api.github.com/users/ethanjperez/repos",
"events_url": "https://api.github.com/users/ethanjperez/events{/privacy}",
"received_events_url": "https://api.github.com/users/ethanjperez/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You are right Ethan.\r\nI'm refactoring the examples which were a bit rotten, let's include this fix as well.",
"Great! Either way the examples are a great starting point :)\r\n\r\nI'm also wondering if tensorboard is [only logging](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L345) the training loss for the last forward pass for a batch (if several are required / when using gradient accumulation)? A fix would be to maintain a variable ```tr_batch_loss``` (similar to ```tr_loss```) for each full training batch (reset after each parameter update) and log that instead.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,568 | 1,568 | CONTRIBUTOR | null | In [examples/run_classifier.py](https://github.com/huggingface/pytorch-pretrained-BERT/commit/87b9ec3843f7f9a81253075f92c9e6537ecefe1c), the overall 'loss' is produce as 'tr_loss/global_step' (instead of 'tr_loss/nb_tr_steps'). Is this behavior correct? @mprouveur made the change in this [commit](https://github.com/huggingface/pytorch-pretrained-BERT/commit/87b9ec3843f7f9a81253075f92c9e6537ecefe1c).
I'm wondering as 'global_step' is never reset after a training epoch, while tr_loss is reset every training epoch. So even if 'tr_loss' remains constant, 'loss' will decrease over more training iterations, given the increasingly large denominator ('global_step').
If this is correct, maybe the 'nb_tr_steps' variable should be removed? It looks unused throughout the code currently.
At any rate, it's only a 2-line fix, and it only affects the logging behavior I believe. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/750/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/749 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/749/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/749/comments | https://api.github.com/repos/huggingface/transformers/issues/749/events | https://github.com/huggingface/transformers/issues/749 | 463,356,518 | MDU6SXNzdWU0NjMzNTY1MTg= | 749 | Attribute Error : 'BertModel' object has no attribute 'bert' | {
"login": "PradyumnaGupta",
"id": 39255758,
"node_id": "MDQ6VXNlcjM5MjU1NzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/39255758?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PradyumnaGupta",
"html_url": "https://github.com/PradyumnaGupta",
"followers_url": "https://api.github.com/users/PradyumnaGupta/followers",
"following_url": "https://api.github.com/users/PradyumnaGupta/following{/other_user}",
"gists_url": "https://api.github.com/users/PradyumnaGupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PradyumnaGupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PradyumnaGupta/subscriptions",
"organizations_url": "https://api.github.com/users/PradyumnaGupta/orgs",
"repos_url": "https://api.github.com/users/PradyumnaGupta/repos",
"events_url": "https://api.github.com/users/PradyumnaGupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/PradyumnaGupta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can only load from a tensorflow checkpoint in a `BertForPretraing` model.\r\nI will add a check.\r\nAlternatively, you should use the conversion script to make a pytorch model and then you can import the resulting pytorch model in any type of Bert model.",
"I use the BertForPretraining.from_pretrained().bert to get the BertModel from the tensorflow checkpoint, I think it is useful",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"You are probably using the \"wrong bert\" . Install bert-tensorflow. There are two packages with the same name\r\n ",
"same for distilbert. Using base_model.distilbert will solve the problem"
] | 1,562 | 1,666 | 1,568 | NONE | null | I am using Google's Bert tensorflow checkpoints to create a model from .from_pretrained as shown below-
`
model = BertModel.from_pretrained('/content/uncased_L-12_H-768_A-12',from_tf=True)
`
But I am getting the following error-
`
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-9-63e759d1aab4> in <module>()
1 bert_version = 'bert-base-uncased'
----> 2 model = BertModel.from_pretrained('/content/uncased_L-12_H-768_A-12',from_tf=True)
3 tokenizer = BertTokenizer.from_pretrained(bert_version)
4 sentence_a = "I went to the store."
5 sentence_b = "At the store, I bought fresh strawberries."
2 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
537 return modules[name]
538 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 539 type(self).__name__, name))
540
541 def __setattr__(self, name, value):
AttributeError: 'BertModel' object has no attribute 'bert'
`
The upper Attribute error code is follows after loading the bert layers like below -
`
Converting TensorFlow checkpoint from /content/uncased_L-12_H-768_A-12/model.ckpt
Loading TF weight bert/embeddings/LayerNorm/beta with shape [768]
Loading TF weight bert/embeddings/LayerNorm/gamma with shape [768]
Loading TF weight bert/embeddings/position_embeddings with shape [512, 768]
Loading TF weight bert/embeddings/token_type_embeddings with shape [2, 768]
Loading TF weight bert/embeddings/word_embeddings with shape [30522, 768]
Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_0/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_0/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_0/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_0/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_0/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_0/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_1/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_1/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_1/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_1/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_1/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_1/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_10/attention/self/value/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_10/intermediate/dense/bias with shape [3072]
Loading TF weight bert/encoder/layer_10/intermediate/dense/kernel with shape [768, 3072]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_10/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_10/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_10/output/dense/kernel with shape [3072, 768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/beta with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/LayerNorm/gamma with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/output/dense/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/key/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/key/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/query/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/query/kernel with shape [768, 768]
Loading TF weight bert/encoder/layer_11/attention/self/value/bias with shape [768]
Loading TF weight bert/encoder/layer_11/attention/self/value/kernel with shape [768, 768]
`
Can somebody help find the problem ?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/749/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/748 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/748/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/748/comments | https://api.github.com/repos/huggingface/transformers/issues/748/events | https://github.com/huggingface/transformers/pull/748 | 463,270,020 | MDExOlB1bGxSZXF1ZXN0MjkzNzgyNjkz | 748 | Release 0.7 - Add Torchscript capabilities | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=h1) Report\n> Merging [#748](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/708877958a308a0f0e8fd199f8f327e4797f1583?src=pr&el=desc) will **increase** coverage by `0.21%`.\n> The diff coverage is `96.06%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## xlnet #748 +/- ##\n=========================================\n+ Coverage 71.5% 71.72% +0.21% \n=========================================\n Files 35 35 \n Lines 5587 5633 +46 \n=========================================\n+ Hits 3995 4040 +45 \n- Misses 1592 1593 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [...\\_pretrained\\_bert/tests/modeling\\_transfo\\_xl\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdGVzdHMvbW9kZWxpbmdfdHJhbnNmb194bF90ZXN0LnB5) | `94.23% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/model\\_utils.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxfdXRpbHMucHk=) | `92.61% <100%> (+0.04%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `87.55% <100%> (+0.47%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfb3BlbmFpLnB5) | `79.5% <100%> (+0.11%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfZ3B0Mi5weQ==) | `81.57% <100%> (+0.15%)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfeGxuZXQucHk=) | `74.17% <83.33%> (+0.14%)` | :arrow_up: |\n| [...torch\\_pretrained\\_bert/tests/model\\_tests\\_commons.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdGVzdHMvbW9kZWxfdGVzdHNfY29tbW9ucy5weQ==) | `97.08% <97.22%> (-0.01%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=footer). Last update [7088779...b43b130](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/748?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,562 | 1,562 | 1,562 | MEMBER | null | Add Torchscript capabilities to all models. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/748/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/748",
"html_url": "https://github.com/huggingface/transformers/pull/748",
"diff_url": "https://github.com/huggingface/transformers/pull/748.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/748.patch",
"merged_at": 1562187124000
} |
https://api.github.com/repos/huggingface/transformers/issues/747 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/747/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/747/comments | https://api.github.com/repos/huggingface/transformers/issues/747/events | https://github.com/huggingface/transformers/issues/747 | 463,143,072 | MDU6SXNzdWU0NjMxNDMwNzI= | 747 | BERT pretraining routine | {
"login": "yhalk",
"id": 13349982,
"node_id": "MDQ6VXNlcjEzMzQ5OTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/13349982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yhalk",
"html_url": "https://github.com/yhalk",
"followers_url": "https://api.github.com/users/yhalk/followers",
"following_url": "https://api.github.com/users/yhalk/following{/other_user}",
"gists_url": "https://api.github.com/users/yhalk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yhalk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yhalk/subscriptions",
"organizations_url": "https://api.github.com/users/yhalk/orgs",
"repos_url": "https://api.github.com/users/yhalk/repos",
"events_url": "https://api.github.com/users/yhalk/events{/privacy}",
"received_events_url": "https://api.github.com/users/yhalk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would not advise to use them for training from scratch. See #751 for discussion and links."
] | 1,562 | 1,563 | 1,563 | NONE | null | Hi,
I was wondering whether the scripts for finetuning can be used to pretrain BERT from scratch on a small dataset that does not require TPUs - is there any difference with the TF pretrain code (different batch sampling or train loss evaluation) other than the TPU support?
Thank you very much in advance!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/747/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/746 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/746/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/746/comments | https://api.github.com/repos/huggingface/transformers/issues/746/events | https://github.com/huggingface/transformers/issues/746 | 463,118,282 | MDU6SXNzdWU0NjMxMTgyODI= | 746 | GPT2Tokenizer for Hindi Data | {
"login": "DEBADRIBASAK",
"id": 32904247,
"node_id": "MDQ6VXNlcjMyOTA0MjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/32904247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DEBADRIBASAK",
"html_url": "https://github.com/DEBADRIBASAK",
"followers_url": "https://api.github.com/users/DEBADRIBASAK/followers",
"following_url": "https://api.github.com/users/DEBADRIBASAK/following{/other_user}",
"gists_url": "https://api.github.com/users/DEBADRIBASAK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DEBADRIBASAK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DEBADRIBASAK/subscriptions",
"organizations_url": "https://api.github.com/users/DEBADRIBASAK/orgs",
"repos_url": "https://api.github.com/users/DEBADRIBASAK/repos",
"events_url": "https://api.github.com/users/DEBADRIBASAK/events{/privacy}",
"received_events_url": "https://api.github.com/users/DEBADRIBASAK/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I might be wrong, but I think GPT2Tokenizer uses byte pair encoding, a form of subword-level encoding. On an intuitive level, this is a between character-level and word level, and akin to breaking the word apart by syllable (in reality it's breaking the word apart by the highest frequency patterns). I know some people use Sentence-Piece tokenization for working with Chinese in BERT, so it might be worthwhile to see if there's a similar effort for a Sentence-Piece GPT-2",
"@DEBADRIBASAK \r\nCan you share the steps for fine-tuning on hindi dataset? Thanks.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,562 | 1,572 | 1,572 | NONE | null | I was trying to fine-tune GPT2LMHeadModel with Hindi data corpus. It is performing well. But when I looked at the tokens that are generated from the GPT2Tokenizer, I saw that they are containing tokens of almost character level. I am not understanding how is this kind of encoding handling Hindi data, or any form of non Roman script correctly. Can anyone explain the working of GPT2Tokenizer from this aspect? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/746/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/745 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/745/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/745/comments | https://api.github.com/repos/huggingface/transformers/issues/745/events | https://github.com/huggingface/transformers/pull/745 | 462,929,669 | MDExOlB1bGxSZXF1ZXN0MjkzNTEyODA5 | 745 | fix evaluation bug | {
"login": "leimao",
"id": 17606112,
"node_id": "MDQ6VXNlcjE3NjA2MTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/17606112?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leimao",
"html_url": "https://github.com/leimao",
"followers_url": "https://api.github.com/users/leimao/followers",
"following_url": "https://api.github.com/users/leimao/following{/other_user}",
"gists_url": "https://api.github.com/users/leimao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leimao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leimao/subscriptions",
"organizations_url": "https://api.github.com/users/leimao/orgs",
"repos_url": "https://api.github.com/users/leimao/repos",
"events_url": "https://api.github.com/users/leimao/events{/privacy}",
"received_events_url": "https://api.github.com/users/leimao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=h1) Report\n> Merging [#745](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/dad3c7a485b7ffc6fd2766f349e6ee845ecc2eee?src=pr&el=desc) will **decrease** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #745 +/- ##\n==========================================\n- Coverage 62.27% 62.22% -0.06% \n==========================================\n Files 18 18 \n Lines 3979 3979 \n==========================================\n- Hits 2478 2476 -2 \n- Misses 1501 1503 +2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `82.44% <0%> (-1.07%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=footer). Last update [dad3c7a...64b2a82](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/745?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,562 | 1,562 | 1,562 | CONTRIBUTOR | null | The original `run_squad.py` has a potential bug. If we only want to run the script to do evaluation, the model will not be properly loaded. The simple fix is provided. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/745/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/745",
"html_url": "https://github.com/huggingface/transformers/pull/745",
"diff_url": "https://github.com/huggingface/transformers/pull/745.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/745.patch",
"merged_at": 1562320805000
} |
https://api.github.com/repos/huggingface/transformers/issues/744 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/744/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/744/comments | https://api.github.com/repos/huggingface/transformers/issues/744/events | https://github.com/huggingface/transformers/issues/744 | 462,712,749 | MDU6SXNzdWU0NjI3MTI3NDk= | 744 | Recommended multilingual bert cased model returns similar embeddings | {
"login": "oleg-yaroshevskiy",
"id": 5859692,
"node_id": "MDQ6VXNlcjU4NTk2OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5859692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oleg-yaroshevskiy",
"html_url": "https://github.com/oleg-yaroshevskiy",
"followers_url": "https://api.github.com/users/oleg-yaroshevskiy/followers",
"following_url": "https://api.github.com/users/oleg-yaroshevskiy/following{/other_user}",
"gists_url": "https://api.github.com/users/oleg-yaroshevskiy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oleg-yaroshevskiy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oleg-yaroshevskiy/subscriptions",
"organizations_url": "https://api.github.com/users/oleg-yaroshevskiy/orgs",
"repos_url": "https://api.github.com/users/oleg-yaroshevskiy/repos",
"events_url": "https://api.github.com/users/oleg-yaroshevskiy/events{/privacy}",
"received_events_url": "https://api.github.com/users/oleg-yaroshevskiy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I second this issue #735 ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | I'm trying to get embeddings for multilingual input:
```
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased", do_lower_case=False)
class NeuralNet(BertPreTrainedModel):
def __init__(self, config):
super(NeuralNet, self).__init__(config)
self.bert = BertModel(config)
self.apply(self.init_bert_weights)
def forward(self, input_ids, token_type_ids=None, attention_mask=None):
_, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
return pooled_output
model = NeuralNet.from_pretrained("bert-base-multilingual-cased")
```
and for some reason all `pooled_output` vectors are very similar with 1e-3 cosine distance for semantically different inputs. Changing model to `bert-base-multilingual-UNcased` works just okay. Any ideas? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/744/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/743 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/743/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/743/comments | https://api.github.com/repos/huggingface/transformers/issues/743/events | https://github.com/huggingface/transformers/issues/743 | 462,410,393 | MDU6SXNzdWU0NjI0MTAzOTM= | 743 | Cannot reproduce results from version 0.4.0 | {
"login": "hguan6",
"id": 19914123,
"node_id": "MDQ6VXNlcjE5OTE0MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19914123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hguan6",
"html_url": "https://github.com/hguan6",
"followers_url": "https://api.github.com/users/hguan6/followers",
"following_url": "https://api.github.com/users/hguan6/following{/other_user}",
"gists_url": "https://api.github.com/users/hguan6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hguan6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hguan6/subscriptions",
"organizations_url": "https://api.github.com/users/hguan6/orgs",
"repos_url": "https://api.github.com/users/hguan6/repos",
"events_url": "https://api.github.com/users/hguan6/events{/privacy}",
"received_events_url": "https://api.github.com/users/hguan6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"`pip install pytorch-pretrained-bert==0.4.0` should work normally",
"Though if you did it with the latest release in March 2019 it was probably more 0.6.1 (see the list and dates here: https://github.com/huggingface/pytorch-pretrained-BERT/releases) so `pip install pytorch-pretrained-bert==0.6.1`",
"Thank you!\n\nOn Sun, Jun 30, 2019, 08:40 Thomas Wolf <[email protected]> wrote:\n\n> Though if you did it with the latest release in March 2019 it was probably\n> more 0.6.1 (see the list and dates here:\n> https://github.com/huggingface/pytorch-pretrained-BERT/releases) so pip\n> install pytorch-pretrained-bert==0.6.1\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/743?email_source=notifications&email_token=AEX53CZ4RLMCBSQFO2445QDP5DAWZA5CNFSM4H4MVPT2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGODY4NP4Y#issuecomment-507041779>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AEX53C7IV2A7DRL7723V573P5DAWZANCNFSM4H4MVPTQ>\n> .\n>\n"
] | 1,561 | 1,561 | 1,561 | NONE | null | Hi, I have a research project that I did a few months ago. Now I have problem reproducing results of 0.4.0, and unfortunately, I lost version 0.4.0. Can you please send me the code of this version to [email protected]? In fact, I am not quite sure it's 0.4.0, but I remember I did it in March 2019. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/743/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/742 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/742/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/742/comments | https://api.github.com/repos/huggingface/transformers/issues/742/events | https://github.com/huggingface/transformers/pull/742 | 462,306,120 | MDExOlB1bGxSZXF1ZXN0MjkzMDQwNzA1 | 742 | When not loading a pretrained model, all layers are initialized with copies of the same weights | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=h1) Report\n> Merging [#742](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/dad3c7a485b7ffc6fd2766f349e6ee845ecc2eee?src=pr&el=desc) will **decrease** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #742 +/- ##\n==========================================\n- Coverage 62.27% 62.26% -0.01% \n==========================================\n Files 18 18 \n Lines 3979 3978 -1 \n==========================================\n- Hits 2478 2477 -1 \n Misses 1501 1501\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `79.46% <100%> (-0.04%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=footer). Last update [dad3c7a...2c03c10](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/742?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Oops, my bad - I just realized you initialize weights in `BertModel` after creating them. Never mind!"
] | 1,561 | 1,561 | 1,561 | MEMBER | null | Although this repo is mostly used for loading and training pre-trained BERT models, the code does support model initialization too! However, I found an issue with the initialization code - because it just makes one layer and copies it, the weights will be identical across all layers at initialization. This probably isn't fatal, since they'll hopefully diverge over time, but it seems a bit odd and it isn't how the [Google BERT repo does it](https://github.com/google-research/bert/blob/master/modeling.py#L827-L882).
I replaced the copies with separate layer initializations instead, which should fix this problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/742/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/742",
"html_url": "https://github.com/huggingface/transformers/pull/742",
"diff_url": "https://github.com/huggingface/transformers/pull/742.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/742.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/741 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/741/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/741/comments | https://api.github.com/repos/huggingface/transformers/issues/741/events | https://github.com/huggingface/transformers/issues/741 | 462,304,734 | MDU6SXNzdWU0NjIzMDQ3MzQ= | 741 | Using BertForNextSentencePrediction and GPT2LMHeadModel in a GAN setup. | {
"login": "jroakes",
"id": 10191545,
"node_id": "MDQ6VXNlcjEwMTkxNTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/10191545?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jroakes",
"html_url": "https://github.com/jroakes",
"followers_url": "https://api.github.com/users/jroakes/followers",
"following_url": "https://api.github.com/users/jroakes/following{/other_user}",
"gists_url": "https://api.github.com/users/jroakes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jroakes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jroakes/subscriptions",
"organizations_url": "https://api.github.com/users/jroakes/orgs",
"repos_url": "https://api.github.com/users/jroakes/repos",
"events_url": "https://api.github.com/users/jroakes/events{/privacy}",
"received_events_url": "https://api.github.com/users/jroakes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,561 | 1,562 | 1,562 | NONE | null | I am using the following code (**Training Loop**) as the meat of the training loop whereby the discriminator is BertForNextSentencePrediction and the generator is GPT2LMHeadModel. I have also included the structure of the training data (**Input data:**). The loss in the generator and discriminator appear to be falling correctly, but I have been unable to test successfully whether the model weights are being updated each epoch.
This is the section that I am concerned about correctly updating the weights of the generator:
```
#g_loss is discriminator loss of real_sentence and generated next sentence
# Set generator to train mode
self.generator.train()
# Backward propagation
g_loss.backward()
if (step + 1) % self.accumulation_steps == 0:
self.gpt2_optimizer.step()
```
Would also love to know the most accurate way to test that the model weights (specifically, GPT2LMHeadModel) are being updated with each epoch.
**Input data:**
```
# Discriminator Train
#(pri_sent + nxt_sent), label=[0]
#(pri_sent + rdm_sent), label=[1]
# Generator input
#pri_sent --> gen_sent
# Generator Train (via Discriminator Loss)
#(pri_sent + gen_sent), label = [0]
#(pri_sent + rdm_sent), label = [1]
```
**Training Loop**
```
# Each iteration has a train_discriminator and train_generator phase
for phase in ['train_discriminator', 'train_generator']:
if phase == 'train_discriminator':
# Set discriminator to training mode
self.discriminator.train()
# Forward propagation
d_loss = self.discriminator(tdata['discriminator']['tokens_tensors'],
tdata['discriminator']['segments_tensors'],
tdata['discriminator']['masked_tensors'],
next_sentence_label=tdata['discriminator']['labels']).mean()
if self.accumulation_steps > 1:
d_loss = d_loss / self.accumulation_steps
# Backward propagation
d_loss.backward()
if (step + 1) % self.accumulation_steps == 0:
self.bert_optimizer.step()
# Zero the discriminator parameter gradients
self.bert_optimizer.zero_grad()
else:
# Set discriminator to evaluate mode
self.discriminator.eval()
# Forward propagation
g_loss = self.discriminator(tdata['generator']['tokens_tensors'],
tdata['generator']['segments_tensors'],
tdata['generator']['masked_tensors'],
next_sentence_label=tdata['generator']['labels']).mean()
if self.accumulation_steps > 1:
g_loss = g_loss / self.accumulation_steps
# Set generator to train mode
self.generator.train()
# Backward propagation
g_loss.backward()
if (step + 1) % self.accumulation_steps == 0:
self.gpt2_optimizer.step()
# Zero the generator parameter gradients
self.gpt2_optimizer.zero_grad()
d_epoch_loss += d_loss
g_epoch_loss += g_loss
# Flush cuda after epoch
torch.cuda.empty_cache()
d_epoch_loss = float(d_epoch_loss/epoch_batches)
g_epoch_loss = float(g_epoch_loss/epoch_batches)
g_epoch_loss_list.append(g_epoch_loss)
d_epoch_loss_list.append(d_epoch_loss)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/741/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/740 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/740/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/740/comments | https://api.github.com/repos/huggingface/transformers/issues/740/events | https://github.com/huggingface/transformers/issues/740 | 462,293,724 | MDU6SXNzdWU0NjIyOTM3MjQ= | 740 | How to get perplexity score of a sentence using anyone of the given Language Models? | {
"login": "vikas-baghel",
"id": 36075292,
"node_id": "MDQ6VXNlcjM2MDc1Mjky",
"avatar_url": "https://avatars.githubusercontent.com/u/36075292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikas-baghel",
"html_url": "https://github.com/vikas-baghel",
"followers_url": "https://api.github.com/users/vikas-baghel/followers",
"following_url": "https://api.github.com/users/vikas-baghel/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas-baghel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vikas-baghel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas-baghel/subscriptions",
"organizations_url": "https://api.github.com/users/vikas-baghel/orgs",
"repos_url": "https://api.github.com/users/vikas-baghel/repos",
"events_url": "https://api.github.com/users/vikas-baghel/events{/privacy}",
"received_events_url": "https://api.github.com/users/vikas-baghel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | I want to find the perlexity score of sentence. I know that we can find the perplexity if we have the loss as perplexity = 2^(entropy loss). Can you tell me how to do it with the models you have listed?
It will be of great help. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/740/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/740/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/739 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/739/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/739/comments | https://api.github.com/repos/huggingface/transformers/issues/739/events | https://github.com/huggingface/transformers/issues/739 | 462,080,910 | MDU6SXNzdWU0NjIwODA5MTA= | 739 | where is "pytorch_model.bin"? | {
"login": "jufengada",
"id": 33510761,
"node_id": "MDQ6VXNlcjMzNTEwNzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/33510761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jufengada",
"html_url": "https://github.com/jufengada",
"followers_url": "https://api.github.com/users/jufengada/followers",
"following_url": "https://api.github.com/users/jufengada/following{/other_user}",
"gists_url": "https://api.github.com/users/jufengada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jufengada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jufengada/subscriptions",
"organizations_url": "https://api.github.com/users/jufengada/orgs",
"repos_url": "https://api.github.com/users/jufengada/repos",
"events_url": "https://api.github.com/users/jufengada/events{/privacy}",
"received_events_url": "https://api.github.com/users/jufengada/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@jufengada \r\n\r\nAssuming that you've installed pytorch_pretrained_bert package properly. If you load any of the `BERT` models ex: `BertForSequenceClassification` with `.from_pretrained` method with arguments for type of Bert architectures say `bert-base-uncased`; pytorch_model.bin will be downloaded from an s3 bucket to a temporary folder in your environment.\r\n\r\nAnother way is downloading entire set of pre-trained weights in to a folder from the github repository and \r\n pointing the `path to pre trained weights` in the call for `.from_pretrained` method would also do the trick.\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/739/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/738 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/738/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/738/comments | https://api.github.com/repos/huggingface/transformers/issues/738/events | https://github.com/huggingface/transformers/issues/738 | 461,845,230 | MDU6SXNzdWU0NjE4NDUyMzA= | 738 | BertTokenizer never_split issue | {
"login": "ardellelee",
"id": 19727258,
"node_id": "MDQ6VXNlcjE5NzI3MjU4",
"avatar_url": "https://avatars.githubusercontent.com/u/19727258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ardellelee",
"html_url": "https://github.com/ardellelee",
"followers_url": "https://api.github.com/users/ardellelee/followers",
"following_url": "https://api.github.com/users/ardellelee/following{/other_user}",
"gists_url": "https://api.github.com/users/ardellelee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ardellelee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ardellelee/subscriptions",
"organizations_url": "https://api.github.com/users/ardellelee/orgs",
"repos_url": "https://api.github.com/users/ardellelee/repos",
"events_url": "https://api.github.com/users/ardellelee/events{/privacy}",
"received_events_url": "https://api.github.com/users/ardellelee/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"which version of python do you use in these environments?",
"> which version of python do you use in these environments?\r\n\r\nHi Thomwolf,\r\n\r\nI'm using Python 3.6.8 for all these environments.\r\n\r\n",
"In case it's helpful, I create a gist to include some details of this issue: https://gist.github.com/ardellelee/4d80ee7a07166bb6d1a203fdd4d7cc07",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | Hi,
I'm using the BertTokenizer to tokenize a piece of text where I use some entity markers to mark the beginning and end of entities, e.g.:
> This was among a batch of paperback [E1] Oxford World [/E1] ' s Classics
I've manually added such entity markers to _vocab file_ and the _never_split_ tuple in _BertTokenizer_. My purpose is to retain such markers as a token, not to be split into wordpieces.
However, when I test the code from command line in linux terminal, the _never_split_ does not work. Entity markers are split into wordpieces. Here is the printout:
```
06/28/2019 11:41:20 - INFO - __main__ - Writing example 0 of 1000
['In', '1983', ',', 'a', 'year', 'after', 'the', 'rally', ',', '[', 'E', '##1', ']', 'For', '##sberg', '[', '/', 'E', '##1', ']', 'received', 'the', 'so', '-', 'called', '`', '`', 'genius', 'award', "'", "'", 'from', 'the', '[', 'E', '##2', ']', 'John', 'D', '.', '[', '/', 'E', '##2', ']', 'and', 'Catherine', 'T', '.', 'MacArthur', 'Foundation', '.']
```
The strange thing is that, the _never_split_ works pretty fine when I test it in PyCharm, and I get my desired output:
```
06/28/2019 11:53:01 - INFO - __main__ - Writing example 0 of 1000
06/28/2019 11:53:01 - INFO - __main__ - *** Example ***
06/28/2019 11:53:01 - INFO - __main__ - guid: train-61b3a65fb9b7111c4ca4
06/28/2019 11:53:01 - INFO - __main__ - tokens: [CLS] In 1983 , a year after the rally , [E1] For ##sberg [/E1] received the so - called ` ` genius award ' ' from the [E2] John D . [/E2] and Catherine T . MacArthur Foundation . [SEP]
06/28/2019 11:53:01 - INFO - __main__ - input_ids: 101 1130 2278 117 170 1214 1170 1103 11158 117 20 1370 19945 21 1460 1103 1177 118 1270 169 169 13533 2574 112 112 1121 1103 22 1287 141 119 23 1105 6017 157 119 21045 2974 119 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
```
The tests are done on the same machine (my local PC) under same virtual environment. The code, pre-trained bert model and parameters used in Pycharm and terminal are the **same**.
Since I need to migrate the code to a server for model training, I really need to resolve this issue. I spent some time debugging but have no idea what could be the cause. Could anyone please provide some hints?
Thanks in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/738/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/738/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/737/comments | https://api.github.com/repos/huggingface/transformers/issues/737/events | https://github.com/huggingface/transformers/issues/737 | 461,815,202 | MDU6SXNzdWU0NjE4MTUyMDI= | 737 | gpt-2 model doesn't output hidden states of all layers. | {
"login": "fabrahman",
"id": 22799593,
"node_id": "MDQ6VXNlcjIyNzk5NTkz",
"avatar_url": "https://avatars.githubusercontent.com/u/22799593?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabrahman",
"html_url": "https://github.com/fabrahman",
"followers_url": "https://api.github.com/users/fabrahman/followers",
"following_url": "https://api.github.com/users/fabrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/fabrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabrahman/subscriptions",
"organizations_url": "https://api.github.com/users/fabrahman/orgs",
"repos_url": "https://api.github.com/users/fabrahman/repos",
"events_url": "https://api.github.com/users/fabrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabrahman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Currently not indeed. This option will be in the coming release.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | Using GPT2Model , it seems like it outputs the hidden states of only 1 layer. However, according to code and documentation it is expected to output hidden states features for each layer.
Am I making a mistake?
Thanks for the advise, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/737/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/736/comments | https://api.github.com/repos/huggingface/transformers/issues/736/events | https://github.com/huggingface/transformers/issues/736 | 461,789,234 | MDU6SXNzdWU0NjE3ODkyMzQ= | 736 | Question regarding crossentropy loss function for BERTMaskedLM | {
"login": "chithangduong",
"id": 22811551,
"node_id": "MDQ6VXNlcjIyODExNTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/22811551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chithangduong",
"html_url": "https://github.com/chithangduong",
"followers_url": "https://api.github.com/users/chithangduong/followers",
"following_url": "https://api.github.com/users/chithangduong/following{/other_user}",
"gists_url": "https://api.github.com/users/chithangduong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chithangduong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chithangduong/subscriptions",
"organizations_url": "https://api.github.com/users/chithangduong/orgs",
"repos_url": "https://api.github.com/users/chithangduong/repos",
"events_url": "https://api.github.com/users/chithangduong/events{/privacy}",
"received_events_url": "https://api.github.com/users/chithangduong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"30k is ok for a softmax, it's not that much and that because Bert is using a sub-word (open-)vocabulary.\r\n\r\nFull word (and closed-vocabulary) models like word2vec have to handle several 100k words hence the specific speed-ups. They are also older and the computation power available at the time was more constrained.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,568 | 1,568 | NONE | null | How does BERT handle large number of classes to predict? The number of classes is essentially the vocabulary size which is 30522 for the BERT-base model. When BERT tries to predict a word using CrossEntropy loss, it needs to compute the softmax for a large number of classes.
In shallow approach such as word2vec, negative sampling or hierarchical softmax is used. I wonder why it is not the case for BERT.
Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/736/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/736/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/735/comments | https://api.github.com/repos/huggingface/transformers/issues/735/events | https://github.com/huggingface/transformers/issues/735 | 461,648,786 | MDU6SXNzdWU0NjE2NDg3ODY= | 735 | BERT encoding layer produces same output for all inputs during evaluation | {
"login": "josephvalencia",
"id": 19215694,
"node_id": "MDQ6VXNlcjE5MjE1Njk0",
"avatar_url": "https://avatars.githubusercontent.com/u/19215694?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josephvalencia",
"html_url": "https://github.com/josephvalencia",
"followers_url": "https://api.github.com/users/josephvalencia/followers",
"following_url": "https://api.github.com/users/josephvalencia/following{/other_user}",
"gists_url": "https://api.github.com/users/josephvalencia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/josephvalencia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/josephvalencia/subscriptions",
"organizations_url": "https://api.github.com/users/josephvalencia/orgs",
"repos_url": "https://api.github.com/users/josephvalencia/repos",
"events_url": "https://api.github.com/users/josephvalencia/events{/privacy}",
"received_events_url": "https://api.github.com/users/josephvalencia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Unlike #695 and others regarding non-determinism, I am calling model.eval() ",
"Can you share your model initialization code as well?",
"My model is just a slight modification of BertForSequenceClassification for multilabel.\r\n\r\nclass BertForMultiLabelSequenceClassification(BertForSequenceClassification):\r\n \"\"\"BERT model for classification.\r\n This module is composed of the BERT model with a linear layer on top of\r\n the pooled output.\r\n \"\"\"\r\n\r\n def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):\r\n\r\n\r\n _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) \r\n # In evaluation mode, _ and pooled_output are already wrong by this point.\r\n\r\n pooled_output = self.dropout(pooled_output)\r\n logits = self.classifier(pooled_output)\r\n\r\n if labels is not None: # Supervised training mode\r\n loss_fct = BCEWithLogitsLoss()\r\n loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1, self.num_labels))\r\n return loss\r\n else:\r\n\r\n return logits #Evaluation mode`\r\n\r\n\r\n\r\nThis is the training loop where I am able to successfully get BERT embeddings. the call to self.evaluate() is the function in my original comment.\r\n\r\n def train(self):\r\n\r\n opt = optim.Adam(self.state.model.parameters(), lr=1e-3)\r\n\r\n\r\n max_epochs = 10\r\n \r\n\r\n nn.init.xavier_normal_(self.state.model.classifier.weight)\r\n\r\n print(\"Initial weights\",self.state.model.classifier.weight)\r\n\r\n for epoch in range(1, max_epochs + 1): # main training loop\r\n\r\n running_loss = 0.0\r\n\r\n self.state.model.train() # turn on training mode\r\n\r\n progress = tqdm.tqdm(total = len(self.state.train))\r\n\r\n for x, y in self.state.train: # thanks to our wrapper, we can intuitively iterate over our data\r\n\r\n\r\n segments = torch.zeros_like(x)\r\n\r\n #print(\"X: \",x.shape)\r\n #print(\"Y: \",y.shape)\r\n opt.zero_grad()\r\n loss = self.state.model(x,segments,labels = y)\r\n\r\n loss.backward() #compute gradients and backpropagate\r\n running_loss += loss.item()\r\n opt.step()\r\n epoch_loss = running_loss / len(self.state.train)\r\n batch_size = x.shape[0]\r\n progress.update(batch_size) #increment progress bar\r\n\r\n # calculate the validation loss for this epoch\r\n val_loss = 0.0\r\n\r\n roc_auc,f1 = self.evaluate(self.state.dev)\r\n\r\n score = print('Epoch: {}, ROC-AUC Score: {:.4f}, F1 Score: {:.4f}'.format(epoch, roc_auc,f1))\r\n #print(roc_auc)\r\n progress.close()\r\n",
"@josephvalencia \r\n\r\nI'm nowhere near a pro, but I've been playing with `BERTForSequenceClassification` since a week or so; I want to share my experience.\r\n\r\na) I feel that you're not applying the pre-trained weights to your BERT model, I've seen quite a few adaptations of `BertForSequenceClassification` actually implement `BertPreTrainedModel` and apply `pre-trained` weights released from google.\r\n\r\nI don't quite see that happening in your code, may be you haven't posted it here yet. I also had a similar issue ( `poor accuracy and heavy loss `) because I failed to use pre-trained weights properly.\r\n\r\nb) You're better of using `BertAdam` as an optimizer along any decent `learning rate scheduler` rather than AdamOptimizer.\r\n\r\nAnd you're running this for `10 epochs` !!? Running for 3 epochs on a 64 GB RAM with multi cores itself is taking me about 3 hours to train :D, may be you're using some sort of magic parallelization technique to speed up your training, I could use that info if that's the case. ",
"@amit8121 Thanks for the tips. I am calling BertForMultilabelSequenceClassification.from_pretrained() elsewhere. I don't actually plan to train for 10 epochs, I will probably implement early stopping once I have the semantics correct.",
"Can I see your from pretrained call? The \"embeddings\" that you're seeing in training stage might just be due to dropout.",
"` model = BertForMultiLabelSequenceClassification.from_pretrained('bert-base-uncased', \r\n num_labels=num_classes)\r\n\r\n state = TrainingState(model,test_dataset,dev_dataset,train_dataset)\r\n\r\n trainer = Trainer(state)\r\n\r\n trainer.train()\r\n`",
"Anyone have any ideas? I'm about to give up on this use case",
"Hi @josephvalencia, I don't have any hint, unfortunately. If the model works well during training, I can't really understand why it would produce always the same output during evaluation.\r\nDo you think you can post a full and self-contained example which exhibit the behavior?",
"I have determined that it was an error in my token indexing that happened earlier in my data pipeline / improper use of attention masking",
"@josephvalencia What was the solution here? I'm facing the same problem...",
"Hi, please open a new issue with a sample from your code and a detailed error log.",
"@thomwolf New issue has been opened here - https://github.com/huggingface/transformers/issues/1465",
"@thomwolf Is this problem solved",
"@thomwolf Is this problem solved",
"How the problem is solved?\r\n",
"i encountered this issue, \r\nwhen i change learning rate value 3e-5 to 5e-5. it worked. \r\nwhen i use learning rate 3e-5, I think it gone local minima. ",
"Observing same behaviour",
"Try experimenting with learning rate and optimizer. Adam with lr=5e-5 worked for me (batch size 64).",
"Have the same problem. How is it solved?\r\n",
"Try with pytorch optimizer not AdamW in transformer.",
"Changing my learning rate from 0.01 to 5e-5 worked for me.",
"@abdulsalam-bande I had the same output logits for a fine-tuned model, and your solution worked for me (even a slight decrease in the learning rate). Any idea why having a too large learning rate gives this result?\r\n\r\n@Jayaos did not work (adamw_torch vs adamw_hf). Were you suggesting to use an other optimizer?\r\n\r\nSpecifically, had the same logits for\r\n\r\n```\r\npython examples/pytorch/text-classification/run_glue.py --model_name_or_path roberta-large \\\r\n--task_name cola --do_train --do_eval \\\r\n--max_seq_length 128 --per_device_train_batch_size 8 --learning_rate 5e-5 \\\r\n--num_train_epochs 1 --output_dir run_glue_res_lr_5e5 \\\r\n```\r\n\r\nbut not for\r\n\r\n```\r\npython examples/pytorch/text-classification/run_glue.py --model_name_or_path roberta-large \\\r\n --task_name cola --do_train --do_eval \\\r\n--max_seq_length 128 --per_device_train_batch_size 8 --learning_rate 2e-5 \\\r\n--num_train_epochs 1 --output_dir run_glue_res_lr_2e5\r\n```\r\n\r\nThink it may be related to learning rate / batch size ratio.",
"But did anyone figured out why that happens?"
] | 1,561 | 1,695 | 1,562 | NONE | null | I am having issues with differences between the output of the BERT layer during training and evaluation time. I am fine-tuning BertForSequenceClassification, but have traced the problem to the pretrained BertModel. During training, the sequence_output within BertModel.forward() produces sensible output, for example :
[tensor([[[-0.0474, -0.3332, -0.2803, ..., -0.2278, 0.3694, 0.0433],
[ 0.1383, -0.2213, 0.1137, ..., 0.0103, 0.6756, 0.0800],
[ 0.0701, -0.4075, -0.4439, ..., 0.1196, 0.5344, 0.1538],
...,
[ 0.1345, -0.3650, -0.1050, ..., 0.0817, 0.3069, 0.2953],
[ 0.1033, -0.2574, -0.0028, ..., -0.1782, 0.4725, 0.0200],
[ 0.3067, -0.3785, -0.0043, ..., -0.1458, 0.6485, -0.0157]],
During evaluation time, however, it produces the same output for every input within a batch:
tensor([[[-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388],
[-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388],
[-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388],
...,
[-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388],
[-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388],
[-0.2306, -0.4857, -0.8340, ..., -0.9609, 0.4153, -0.3388]],
My evaluation code is below.
`def evaluate(self,dataset):
self.state.model.eval() # turn on evaluation mode
with torch.no_grad():
for x, y in dataset:
# Shape of x is (Batch_size, sequence_length)
preds = torch.sigmoid(self.state.model(x, token_type_ids = torch.zeros_like(x),labels = None)).numpy()` # I have tried this line with and without .detach() and it makes no difference
Because of the uniform output from BertLayer, I also get identical output within preds. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/735/reactions",
"total_count": 7,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
} | https://api.github.com/repos/huggingface/transformers/issues/735/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/734/comments | https://api.github.com/repos/huggingface/transformers/issues/734/events | https://github.com/huggingface/transformers/issues/734 | 461,342,656 | MDU6SXNzdWU0NjEzNDI2NTY= | 734 | Erroneous Code | {
"login": "LeoLai930603",
"id": 15106070,
"node_id": "MDQ6VXNlcjE1MTA2MDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/15106070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeoLai930603",
"html_url": "https://github.com/LeoLai930603",
"followers_url": "https://api.github.com/users/LeoLai930603/followers",
"following_url": "https://api.github.com/users/LeoLai930603/following{/other_user}",
"gists_url": "https://api.github.com/users/LeoLai930603/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeoLai930603/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeoLai930603/subscriptions",
"organizations_url": "https://api.github.com/users/LeoLai930603/orgs",
"repos_url": "https://api.github.com/users/LeoLai930603/repos",
"events_url": "https://api.github.com/users/LeoLai930603/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeoLai930603/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | I guess there is a minor mistake in this line.
https://github.com/huggingface/pytorch-pretrained-BERT/blob/80684f6f86c13a89fc1e4feac248ef96b013765c/pytorch_pretrained_bert/modeling_transfo_xl.py#L1385
In `TransfoXLLMHeadModel`, the forward computation requires the target (if available) has the shape [batch_size, sequence_length], that is, it is rank-2 here. However, in the `ProjectedAdaptiveLogSoftmax` (case when `config.sample_softmax < 0`, and the default value
of `config.sample_softmax` was set to -1), the forward compuation (indicated in the above line) requires the target (if available) has the shape [batch_size * sequence_length], which means it is rank-1 here. This could not passed the assertion check in the forward computation and raise the error.
Luckily I have checked the computation logic for the Transformer-XL part, most of them are correct. Therefore I suggest just a minor change from `softmax_output = self.crit(pred_hid.view(-1, pred_hid.size(-1)), target) ` to `softmax_output = self.crit(pred_hid.view(-1, pred_hid.size(-1)), target.reshape(-1)) `
If you prefer PR, I could do this little favor;) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/734/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/733/comments | https://api.github.com/repos/huggingface/transformers/issues/733/events | https://github.com/huggingface/transformers/pull/733 | 461,222,144 | MDExOlB1bGxSZXF1ZXN0MjkyMTkzMDU5 | 733 | Added option to use multiple workers to create training data | {
"login": "ceremonious",
"id": 6596130,
"node_id": "MDQ6VXNlcjY1OTYxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6596130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ceremonious",
"html_url": "https://github.com/ceremonious",
"followers_url": "https://api.github.com/users/ceremonious/followers",
"following_url": "https://api.github.com/users/ceremonious/following{/other_user}",
"gists_url": "https://api.github.com/users/ceremonious/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ceremonious/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ceremonious/subscriptions",
"organizations_url": "https://api.github.com/users/ceremonious/orgs",
"repos_url": "https://api.github.com/users/ceremonious/repos",
"events_url": "https://api.github.com/users/ceremonious/events{/privacy}",
"received_events_url": "https://api.github.com/users/ceremonious/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=h1) Report\n> Merging [#733](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #733 +/- ##\n==========================================\n+ Coverage 62.22% 62.27% +0.05% \n==========================================\n Files 18 18 \n Lines 3979 3979 \n==========================================\n+ Hits 2476 2478 +2 \n+ Misses 1503 1501 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=footer). Last update [98dc30b...08ff056](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/733?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Nice, thanks!"
] | 1,561 | 1,562 | 1,562 | CONTRIBUTOR | null | Added a command line argument to allow using a multiprocessing pool to generate training data for all the epochs at once.
The shelve object isn't pickleable, so it can't be used with the Pool | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/733/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/733",
"html_url": "https://github.com/huggingface/transformers/pull/733",
"diff_url": "https://github.com/huggingface/transformers/pull/733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/733.patch",
"merged_at": 1562321071000
} |
https://api.github.com/repos/huggingface/transformers/issues/732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/732/comments | https://api.github.com/repos/huggingface/transformers/issues/732/events | https://github.com/huggingface/transformers/issues/732 | 461,131,491 | MDU6SXNzdWU0NjExMzE0OTE= | 732 | GPT & GPT2: binary classification fails | {
"login": "epsdg",
"id": 42873462,
"node_id": "MDQ6VXNlcjQyODczNDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/42873462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/epsdg",
"html_url": "https://github.com/epsdg",
"followers_url": "https://api.github.com/users/epsdg/followers",
"following_url": "https://api.github.com/users/epsdg/following{/other_user}",
"gists_url": "https://api.github.com/users/epsdg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/epsdg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/epsdg/subscriptions",
"organizations_url": "https://api.github.com/users/epsdg/orgs",
"repos_url": "https://api.github.com/users/epsdg/repos",
"events_url": "https://api.github.com/users/epsdg/events{/privacy}",
"received_events_url": "https://api.github.com/users/epsdg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | Using `OpenAIGPTDoubleHeadsModel` for binary classification fails.
`CrossEntropyLoss`, requires that the logits dim matches num_classes.
If `input_ids.size()` is (batch x 1 x seq_len) (only one copy of the input sequence) but mc_labels are {0, 1} (two classes), the loss fn returns a shape mismatch. The only way it seems to work is using two copies of the input sequence, one for class 0 and a second for class 1.
I tweaked the double heads model to use `BCEWithLogitsLoss` for binary:
https://github.com/epsdg/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_openai.py#L955-L960
https://github.com/epsdg/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling_gpt2.py#L943-L948
...and it works fine, but did I miss an intended use pattern?
Thanks for this fantastic port of these models - much more user-friendly than the original code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/732/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/731/comments | https://api.github.com/repos/huggingface/transformers/issues/731/events | https://github.com/huggingface/transformers/pull/731 | 461,122,810 | MDExOlB1bGxSZXF1ZXN0MjkyMTExNDc3 | 731 | merge | {
"login": "Eurus-Holmes",
"id": 34226570,
"node_id": "MDQ6VXNlcjM0MjI2NTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/34226570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Eurus-Holmes",
"html_url": "https://github.com/Eurus-Holmes",
"followers_url": "https://api.github.com/users/Eurus-Holmes/followers",
"following_url": "https://api.github.com/users/Eurus-Holmes/following{/other_user}",
"gists_url": "https://api.github.com/users/Eurus-Holmes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Eurus-Holmes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Eurus-Holmes/subscriptions",
"organizations_url": "https://api.github.com/users/Eurus-Holmes/orgs",
"repos_url": "https://api.github.com/users/Eurus-Holmes/repos",
"events_url": "https://api.github.com/users/Eurus-Holmes/events{/privacy}",
"received_events_url": "https://api.github.com/users/Eurus-Holmes/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=h1) Report\n> Merging [#731](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #731 +/- ##\n==========================================\n+ Coverage 62.22% 62.27% +0.05% \n==========================================\n Files 18 18 \n Lines 3979 3979 \n==========================================\n+ Hits 2476 2478 +2 \n+ Misses 1503 1501 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=footer). Last update [98dc30b...4633033](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/731?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,561 | 1,561 | 1,561 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/731/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/731/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/731",
"html_url": "https://github.com/huggingface/transformers/pull/731",
"diff_url": "https://github.com/huggingface/transformers/pull/731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/731.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/730/comments | https://api.github.com/repos/huggingface/transformers/issues/730/events | https://github.com/huggingface/transformers/issues/730 | 460,948,702 | MDU6SXNzdWU0NjA5NDg3MDI= | 730 | bertForNextSentencePrediction | {
"login": "ankitsharma1999",
"id": 43710585,
"node_id": "MDQ6VXNlcjQzNzEwNTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/43710585?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankitsharma1999",
"html_url": "https://github.com/ankitsharma1999",
"followers_url": "https://api.github.com/users/ankitsharma1999/followers",
"following_url": "https://api.github.com/users/ankitsharma1999/following{/other_user}",
"gists_url": "https://api.github.com/users/ankitsharma1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankitsharma1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankitsharma1999/subscriptions",
"organizations_url": "https://api.github.com/users/ankitsharma1999/orgs",
"repos_url": "https://api.github.com/users/ankitsharma1999/repos",
"events_url": "https://api.github.com/users/ankitsharma1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankitsharma1999/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's a classification task - is the given sentence the next sentence? It's not going to generate the next sentence for you, as BERT is not a classical language model",
"So, how do I know from these values whether the next sentence should be classified as the next sentence or not?",
"Softmax over it will give you the probabilities - i'm guessing the first is yes next sentence, but you can probably play around with some toy examples to know which dimension is which.",
"Thank you for your insight.",
"Here is a toy example using BertForNextSentencePrediction\r\n\r\n```\r\nimport torch\r\nimport pytorch_pretrained_bert\r\nfrom pytorch_pretrained_bert import BertTokenizer, BertAdam, BertForNextSentencePrediction\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')\r\n\r\n# Prepare tokenized input\r\ntext1 = \"what does a technical SEO do?\"\r\ntext2 = \"A technical seo optimizes websites blah.\"\r\n# 0=Good / 1 = Bad\r\nlabel = 0\r\n\r\ntext1_toks = [\"[CLS]\"] + tokenizer.tokenize(text1) + [\"[SEP]\"]\r\ntext2_toks = tokenizer.tokenize(text2) + [\"[SEP]\"]\r\n\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks)\r\nsegments_ids = [0]*len(text1_toks) + [1]*len(text2_toks)\r\n\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nsegments_tensors = torch.tensor([segments_ids])\r\n\r\n# Load bertForNextSentencePrediction\r\nbert_optimizer = BertAdam(model.parameters(), \r\n lr = 0.002, \r\n warmup = 0.1, \r\n max_grad_norm=-1, \r\n weight_decay=-0.0001,\r\n t_total = 1\r\n )\r\n\r\nprint(text1_toks + text2_toks)\r\nprint(segments_ids)\r\nprint()\r\n\r\n\r\n# Example Evaluate\r\nmodel.eval()\r\n# Predict the next sentence classification logits\r\nwith torch.no_grad():\r\n prediction = model(tokens_tensor, segments_tensors)\r\n\r\nsoftmax = torch.nn.Softmax(dim=1)\r\nprediction_sm = softmax(prediction)\r\nprint (\"Good/Bad:\", prediction_sm[0].tolist())\r\n\r\n# Example Train\r\nmodel.train()\r\nloss = model(tokens_tensor, segments_tensors, next_sentence_label=torch.tensor([label]))\r\nprint(\"Loss with label {}:\".format(label),loss.item())\r\nloss.backward()\r\nbert_optimizer.step()\r\n```",
"> Here is a toy example using BertForNextSentencePrediction\r\n> \r\n> ```\r\n> import torch\r\n> import pytorch_pretrained_bert\r\n> from pytorch_pretrained_bert import BertTokenizer, BertAdam, BertForNextSentencePrediction\r\n> \r\n> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')\r\n> \r\n> # Prepare tokenized input\r\n> text1 = \"what does a technical SEO do?\"\r\n> text2 = \"A technical seo optimizes websites blah.\"\r\n> # 0=Good / 1 = Bad\r\n> label = 0\r\n> \r\n> text1_toks = [\"[CLS]\"] + tokenizer.tokenize(text1) + [\"[SEP]\"]\r\n> text2_toks = tokenizer.tokenize(text2) + [\"[SEP]\"]\r\n> \r\n> indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks)\r\n> segments_ids = [0]*len(text1_toks) + [1]*len(text2_toks)\r\n> \r\n> tokens_tensor = torch.tensor([indexed_tokens])\r\n> segments_tensors = torch.tensor([segments_ids])\r\n> \r\n> # Load bertForNextSentencePrediction\r\n> bert_optimizer = BertAdam(model.parameters(), \r\n> lr = 0.002, \r\n> warmup = 0.1, \r\n> max_grad_norm=-1, \r\n> weight_decay=-0.0001,\r\n> t_total = 1\r\n> )\r\n> \r\n> print(text1_toks + text2_toks)\r\n> print(segments_ids)\r\n> print()\r\n> \r\n> \r\n> # Example Evaluate\r\n> model.eval()\r\n> # Predict the next sentence classification logits\r\n> with torch.no_grad():\r\n> prediction = model(tokens_tensor, segments_tensors)\r\n> \r\n> softmax = torch.nn.Softmax(dim=1)\r\n> prediction_sm = softmax(prediction)\r\n> print (\"Good/Bad:\", prediction_sm[0].tolist())\r\n> \r\n> # Example Train\r\n> model.train()\r\n> loss = model(tokens_tensor, segments_tensors, next_sentence_label=torch.tensor([label]))\r\n> print(\"Loss with label {}:\".format(label),loss.item())\r\n> loss.backward()\r\n> bert_optimizer.step()\r\n> ```\r\n\r\nThanks for this, any idea how do this in batches? How we are supposed to pad different input lenghts?"
] | 1,561 | 1,567 | 1,561 | NONE | null | I copied the code from [PyTorch's official site](https://pytorch.org/hub/huggingface_pytorch-pretrained-bert_bert/) for `bertForNextSentencePrediction`. I get the next_sent_classif_logits as `tensor([[ 5.2880, -6.0952]])`. How do I get the next sentence from these values?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/730/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/729/comments | https://api.github.com/repos/huggingface/transformers/issues/729/events | https://github.com/huggingface/transformers/issues/729 | 460,841,674 | MDU6SXNzdWU0NjA4NDE2NzQ= | 729 | Grover generator support | {
"login": "asafamr",
"id": 5182534,
"node_id": "MDQ6VXNlcjUxODI1MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5182534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asafamr",
"html_url": "https://github.com/asafamr",
"followers_url": "https://api.github.com/users/asafamr/followers",
"following_url": "https://api.github.com/users/asafamr/following{/other_user}",
"gists_url": "https://api.github.com/users/asafamr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asafamr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asafamr/subscriptions",
"organizations_url": "https://api.github.com/users/asafamr/orgs",
"repos_url": "https://api.github.com/users/asafamr/repos",
"events_url": "https://api.github.com/users/asafamr/events{/privacy}",
"received_events_url": "https://api.github.com/users/asafamr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe if they open source a model larger than the current GPT-2 large.\r\n\r\nI'm also happy to welcome PRs to port additional models (as long as they are provided with test/doc/example)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | Grover released their trained model:
https://github.com/rowanz/grover
I think it should be similar to GPT-2 large. Any plans to support it? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/729/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/729/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/728/comments | https://api.github.com/repos/huggingface/transformers/issues/728/events | https://github.com/huggingface/transformers/issues/728 | 460,836,757 | MDU6SXNzdWU0NjA4MzY3NTc= | 728 | UnicodeDecodeError: | {
"login": "ZhaoxinRuc",
"id": 38198098,
"node_id": "MDQ6VXNlcjM4MTk4MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/38198098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaoxinRuc",
"html_url": "https://github.com/ZhaoxinRuc",
"followers_url": "https://api.github.com/users/ZhaoxinRuc/followers",
"following_url": "https://api.github.com/users/ZhaoxinRuc/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaoxinRuc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaoxinRuc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaoxinRuc/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaoxinRuc/orgs",
"repos_url": "https://api.github.com/users/ZhaoxinRuc/repos",
"events_url": "https://api.github.com/users/ZhaoxinRuc/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaoxinRuc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@ZhaoxinRuc \r\n\r\nI'm assuming that some data in your `vocab.txt` file contains bad characters which your `codecs.py` can't decode properly. When I downloaded the pre-trained weights folder, it came with `vocab.txt` which didn't had this issue. Check if you've downloaded a wrong version or some how changed contents of `vocab.txt`.\r\n\r\nIt's also likely that your operating system actually messed with the encoding of certain characters in your `vocab.txt` file. For that, try saving the text file in 'utf8' format , just to make sure.\r\n\r\nHope it helps.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | Traceback (most recent call last):
File "run_classifier_br.py", line 1061, in <module>
main()
File "run_classifier_br.py", line 772, in main
tokenizer = BertTokenizer.from_pretrained(args.bert_model, do_lower_case=args.do_lower_case)
File "/home/luwei/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 197, in from_pretrained
tokenizer = cls(resolved_vocab_file, *inputs, **kwargs)
File "/home/luwei/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 97, in __init__
self.vocab = load_vocab(vocab_file)
File "/home/luwei/anaconda3/lib/python3.6/site-packages/pytorch_pretrained_bert/tokenization.py", line 56, in load_vocab
token = reader.readline()
File "/home/luwei/anaconda3/lib/python3.6/codecs.py", line 321, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/728/reactions",
"total_count": 8,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/728/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/727/comments | https://api.github.com/repos/huggingface/transformers/issues/727/events | https://github.com/huggingface/transformers/issues/727 | 460,747,363 | MDU6SXNzdWU0NjA3NDczNjM= | 727 | Poor Training and evaluation accuracy even with low loss | {
"login": "amithadiraju1694",
"id": 23751321,
"node_id": "MDQ6VXNlcjIzNzUxMzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/23751321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amithadiraju1694",
"html_url": "https://github.com/amithadiraju1694",
"followers_url": "https://api.github.com/users/amithadiraju1694/followers",
"following_url": "https://api.github.com/users/amithadiraju1694/following{/other_user}",
"gists_url": "https://api.github.com/users/amithadiraju1694/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amithadiraju1694/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amithadiraju1694/subscriptions",
"organizations_url": "https://api.github.com/users/amithadiraju1694/orgs",
"repos_url": "https://api.github.com/users/amithadiraju1694/repos",
"events_url": "https://api.github.com/users/amithadiraju1694/events{/privacy}",
"received_events_url": "https://api.github.com/users/amithadiraju1694/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@amit8121 can you just sent the run_classifier.py python command that you are using in terminal to run this model",
"@himanshututeja1998 \r\n\r\nThanks for the response. This is a similar command to what I used for `run_classifier.py`\r\n\r\n```\r\nexport BERT_BASE_DIR=./path/to/uncasedweightsfolder/\r\n\r\npython bert/run_classifier.py \\\r\n--task_name=cola \\\r\n--do_train=true \\\r\n--do_eval=true \\\r\n--data_dir=./data \\\r\n--vocab_file=$BERT_BASE_DIR/vocab.txt \\\r\n--bert_config_file=$BERT_BASE_DIR/bert_config.json \\\r\n--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \\\r\n--max_seq_length=128 \\\r\n--train_batch_size=32 \\\r\n--learning_rate=2e-5 \\\r\n--num_train_epochs=3.0 \\\r\n--output_dir=./bert_output/\r\n```\r\n\r\nBTW, I also had to change the `get_labels` function of this code to support multi label classification ( 8 in my case). \r\n\r\nI believe the issue here has more to do with the kind of weights file I downloaded as a part of my uncased folder. From my knowledge, I believe task of `cola` might not be too suitable for multi label classification. And our team felt this command line methodology is not ideal for production grade scenario, so we decided to develop a `BERT wrapper` and train it.",
"@amit8121 Use this ::::::::\r\npython/python3 run_classifier.py --task_name cola --do_eval --do_lower_case --data_dir DATA_DIR PATH/ --bert_model PATH TO PRETRAINNED WEIGHT (uncased_L-12_H-768_A-12)/ --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir MODEL_OUTPUT",
"@himanshututeja1998 \r\n\r\nI appreciate your willingness to help, but even if the above command runs and trains successfully, of which I have some reservations on, it still doesn't entirely solve our problem i.e., we ultimately want to make production grade code using the pre-trained weights rather than using command line tools. Thanks for the response.",
"> @amit8121 Use this ::::::::\r\n> python/python3 run_classifier.py --task_name cola --do_eval --do_lower_case --data_dir DATA_DIR PATH/ --bert_model PATH TO PRETRAINNED WEIGHT (uncased_L-12_H-768_A-12)/ --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir MODEL_OUTPUT\r\n\r\n\r\nStill getting the same results:\r\n\r\n```\r\nINFO:tensorflow:***** Eval results *****\r\nINFO:tensorflow: eval_accuracy = 0.0\r\nINFO:tensorflow: eval_loss = 3.9577928\r\nINFO:tensorflow: global_step = 25\r\nINFO:tensorflow: loss = 3.9577928\r\n```\r\n\r\nfor the command:\r\n`\r\npython3 bert/run_classifier.py --task_name=cola --do_train=true --do_eval=true --data_dir=./data --vocab_file=$BERT_BASE_DIR/vocab.txt --bert_model=$BERT_BASE_DIR/pytorch_model.bin --bert_config_file=$BERT_BASE_DIR/bert_config.json --max_seq_length=128 --train_batch_size=32 --learning_rate=2e-5 --num_train_epochs=3.0 --output_dir=./bert_output/`\r\n\r\njust an FYI.",
"@spolu @cynthia @thomwolf \r\n\r\nI think I might have figured some parts of the issue out.\r\n\r\nWhen I load bert model using `BERT_CLASS.from_pretrained('bert-base-uncased')` only the `pytorch.bin` file gets downloaded form S3 bucket in to a temp folder under system's /tmp folder. Interestingly, `BertForSequenceClassification` class couldn't load pre-trained weights form this file. \r\n\r\nBut, when I copied the `pytorch.bin` file to folder `uncased_L-12_H-768_A-12`, with all the above mentioned files in tact and changed the path to `from_pretrained` file to point to uncased folder; I was able to load the model with pre-trained weights. My results are much better than previous rounds (I only ran it for 4 epochs though):\r\n\r\n```\r\nFor each batch in an epoch:\r\nTraining - ~ 70 % accuracy , with loss ~0.10; \r\nEvaluation - About ~ 10% accuracy, with loss ~ 0.35\r\n\r\n```\r\n\r\nI did use `BertAdam` along with decent learning rate scheduling. But the results are a bit under whelming and there's a huge difference in training and evaluation accuracies, does it have anything to do with the size of the data ? I only have ~ 350 rows of training data and ~ 100 rows of testing data.",
"@amit8121 on which dataset you are training this because this can also due to mismatched task name \"cola / mnli \" etc. ",
"@himanshututeja1998 \r\n\r\nUnderlying data set shouldn't matter as long as you've transformed it in a format required for `Sequence Classification`, I made sure that I `pre-processed` my data as per `BertForSeqeunceClassification` requirements ; my initial issue was not being able to use pre-trained weights which I figured out and results were much better, yet not close to SOTA. My training and evaluation losses go as low as 0.06 on average for each epoch, yet accuracy hovers around 65 - 70 % which is strange. This issue seems to be quite common for both GPT and BERT models, there are multiple `issues` on these topics .I believe the issue is with the way we're trying to transform our target variables. Hope the authors would find time to respond.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | @spolu @cynthia @thomwolf @davidefiocco
I initially used command line arguments to run the `run_classifier.py`, using `cola` as a task, for `Sequence Classification` on a custom data set, I was able to execute and get the results , but they were very poor: an evaluation accuracy of 0.0 and loss of close to .9.
Then I decided to write a small wrapper class similar to `BertForSequenceClassificaiton` in `modeling.py`, to invoke a basic `BERT Model` from set of pre-trained classes using `BERT weights`:
ex:
`model = BERT_MULTILABEL_SEQ_Classify.from_pretrained('bert-base-uncased', num_labels = 8)`
- > BERT_MULTILABEL_SEQ_Classify is my wrapper class
up on executing this line of code I see the following log information:
```
06/25/2019 20:40:12 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /home/dbi/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba
06/25/2019 20:40:12 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /home/dbi/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmp8hjgcb4r
06/25/2019 20:40:17 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
06/25/2019 20:40:34 - INFO - pytorch_pretrained_bert.modeling - Weights of BERT_MULTILABEL_SEQ_Clasfify not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
06/25/2019 20:40:34 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BERT_MULTILABEL_SEQ_Clasfify: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
```
I ignored this log information and proceeded with training my data, the results were similar to my initial attempts ( which were really poor ). Then I realized that, because this model is not using any of the pre-trained weights; it's performing poorly on my classification task.
I do have the `uncased_L-12_H-768_A-12` file in the same path as this program is running. It's contents look something like this

So, my question's how do I properly invoke the `BERTModel ` so that it's pre-trained weights are also loaded along with it ? I understand that I might have either messed up with signature of the BERT Model or may need to point the bert model to look for weights, but not sure how. Every time I use the `from_pretrained` method to load pre-trained weights, I can see some files being downloaded from s3 buckets but looks like those files do not contain pre-trained weights. It might also be the case that the conversion from `pytorch.bin` (downloaded from s3 buckets) to checkpoint files is not working as expected in `ubuntu 18.04`. Any help is much appreciated. TIA
Note: I thought including code for my entire wrapper class would be irrelevant to this question, so I didn't do so.
I have:
`
Python 3.6.8
Pytorch 1.1.0
` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/727/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/726/comments | https://api.github.com/repos/huggingface/transformers/issues/726/events | https://github.com/huggingface/transformers/issues/726 | 460,725,157 | MDU6SXNzdWU0NjA3MjUxNTc= | 726 | Examples does not work with apex optimizers | {
"login": "tc-yu",
"id": 33182836,
"node_id": "MDQ6VXNlcjMzMTgyODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/33182836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tc-yu",
"html_url": "https://github.com/tc-yu",
"followers_url": "https://api.github.com/users/tc-yu/followers",
"following_url": "https://api.github.com/users/tc-yu/following{/other_user}",
"gists_url": "https://api.github.com/users/tc-yu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tc-yu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tc-yu/subscriptions",
"organizations_url": "https://api.github.com/users/tc-yu/orgs",
"repos_url": "https://api.github.com/users/tc-yu/repos",
"events_url": "https://api.github.com/users/tc-yu/events{/privacy}",
"received_events_url": "https://api.github.com/users/tc-yu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can use # on line 316 and it can solved this problem."
] | 1,561 | 1,563 | 1,563 | NONE | null | Under fp16 option, optimizer is replaced by one from apex, which does not have the attribute ```get_lr()``` .
https://github.com/huggingface/pytorch-pretrained-BERT/blob/98dc30b21e3df6528d0dd17f0910ffea12bc0f33/examples/run_squad.py#L315-L317
Should be able to reproduce the error by running the example [here](https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-bert-large-on-gpus) in the README, error message should be something like ```AttributeError, FP16_Optimizer does not have attribute get_lr()``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/726/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/725/comments | https://api.github.com/repos/huggingface/transformers/issues/725/events | https://github.com/huggingface/transformers/issues/725 | 460,533,863 | MDU6SXNzdWU0NjA1MzM4NjM= | 725 | BERT Input size reduced to half in forward function | {
"login": "xinsu626",
"id": 30940128,
"node_id": "MDQ6VXNlcjMwOTQwMTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/30940128?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinsu626",
"html_url": "https://github.com/xinsu626",
"followers_url": "https://api.github.com/users/xinsu626/followers",
"following_url": "https://api.github.com/users/xinsu626/following{/other_user}",
"gists_url": "https://api.github.com/users/xinsu626/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinsu626/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinsu626/subscriptions",
"organizations_url": "https://api.github.com/users/xinsu626/orgs",
"repos_url": "https://api.github.com/users/xinsu626/repos",
"events_url": "https://api.github.com/users/xinsu626/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinsu626/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe you have 2 GPUs?",
"@thomwolf Thanks a lot. I forgot I was running on two gpus. \r\n"
] | 1,561 | 1,561 | 1,561 | NONE | null | I was trying to modify your BertForSequenceClassification class for long sequence classification. Like below:
```
class MyBertForSequenceClassification(BertPreTrainedModel):
def __init__(self, config, num_labels=2, output_attentions=False):
super(MyBertForSequenceClassification, self).__init__(config)
self.output_attentions = output_attentions
self.num_labels = num_labels
self.bert = BertModel(config)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size*20, num_labels)
self.softmax = nn.Softmax()
self.apply(self.init_bert_weights)
def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None, head_mask=None):
print(input_ids.shape) # half of actual passed size
outputs = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
_, pooled_output = outputs
pooled_output = self.dropout(pooled_output)
flat_pooled_output = pooled_output.view(-1)
print(flat_pooled_output.shape)
logits = self.classifier(flat_pooled_output)
logits = self.softmax(logits)
return logits
```
I found that when I passed the input_ids tensor with dimensions (40, 128) into the model, the actual input_ids I got in the forward function was (20, 128). It always reduce my input to half of original size.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/725/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/724/comments | https://api.github.com/repos/huggingface/transformers/issues/724/events | https://github.com/huggingface/transformers/pull/724 | 460,461,332 | MDExOlB1bGxSZXF1ZXN0MjkxNTg3MTM3 | 724 | fixing bugs in load_rocstories_dataset in run_openai_gpt.py | {
"login": "sajidrahman",
"id": 4258481,
"node_id": "MDQ6VXNlcjQyNTg0ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4258481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajidrahman",
"html_url": "https://github.com/sajidrahman",
"followers_url": "https://api.github.com/users/sajidrahman/followers",
"following_url": "https://api.github.com/users/sajidrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/sajidrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajidrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajidrahman/subscriptions",
"organizations_url": "https://api.github.com/users/sajidrahman/orgs",
"repos_url": "https://api.github.com/users/sajidrahman/repos",
"events_url": "https://api.github.com/users/sajidrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajidrahman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=h1) Report\n> Merging [#724](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **increase** coverage by `0.05%`.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #724 +/- ##\n==========================================\n+ Coverage 62.22% 62.27% +0.05% \n==========================================\n Files 18 18 \n Lines 3979 3979 \n==========================================\n+ Hits 2476 2478 +2 \n+ Misses 1503 1501 -2\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=footer). Last update [98dc30b...63da86c](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/724?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Hi, what is the `run_openai_gpt2_custom.py` file for?",
"Hi Thomas, I customized the run_openai_gpt.py file to add support for gpt-2. That's why the name might be a bit confusing (run_openai_gpt2_custom.py). I'll better create a separate PR for this file, with a better name.Any suggestion?"
] | 1,561 | 1,562 | 1,562 | NONE | null | The csv reader requires a delimiter argument to read .tsv file in the given example dataset. I've also added link for the dataset and provided a sample eval results in comments. Also, the eval dataset needs to be different from the training dataset, which I've also fixed in the given command to run this script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/724/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/724",
"html_url": "https://github.com/huggingface/transformers/pull/724",
"diff_url": "https://github.com/huggingface/transformers/pull/724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/724.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/723/comments | https://api.github.com/repos/huggingface/transformers/issues/723/events | https://github.com/huggingface/transformers/pull/723 | 460,283,950 | MDExOlB1bGxSZXF1ZXN0MjkxNDQzMTYz | 723 | Update Adam optimizer to follow pytorch convention for betas parameter (#510) | {
"login": "tonianelope",
"id": 23743176,
"node_id": "MDQ6VXNlcjIzNzQzMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23743176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tonianelope",
"html_url": "https://github.com/tonianelope",
"followers_url": "https://api.github.com/users/tonianelope/followers",
"following_url": "https://api.github.com/users/tonianelope/following{/other_user}",
"gists_url": "https://api.github.com/users/tonianelope/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tonianelope/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tonianelope/subscriptions",
"organizations_url": "https://api.github.com/users/tonianelope/orgs",
"repos_url": "https://api.github.com/users/tonianelope/repos",
"events_url": "https://api.github.com/users/tonianelope/events{/privacy}",
"received_events_url": "https://api.github.com/users/tonianelope/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=h1) Report\n> Merging [#723](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `83.33%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #723 +/- ##\n=======================================\n Coverage 62.22% 62.22% \n=======================================\n Files 18 18 \n Lines 3979 3979 \n=======================================\n Hits 2476 2476 \n Misses 1503 1503\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/optimization.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uLnB5) | `74.26% <100%> (ø)` | :arrow_up: |\n| [pytorch\\_pretrained\\_bert/optimization\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvb3B0aW1pemF0aW9uX29wZW5haS5weQ==) | `34.84% <66.66%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=footer). Last update [98dc30b...c988590](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/723?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok, can you update the examples that use these optimizers as well?",
"> Ok, can you update the examples that use these optimizers as well?\r\n\r\nI had a look at the examples, all seem to use the default values for b1/b2 so there shouldn't be any change required.",
"Perfect!"
] | 1,561 | 1,561 | 1,561 | CONTRIBUTOR | null | see #510
Update optimiser to follow pytorch convention ([Adam Optimiser](https://pytorch.org/docs/stable/optim.html#torch.optim.Adam)) instead of tensorflow, to allow for better integration with other pytorch libraries and frameworks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/723/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/723",
"html_url": "https://github.com/huggingface/transformers/pull/723",
"diff_url": "https://github.com/huggingface/transformers/pull/723.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/723.patch",
"merged_at": 1561735706000
} |
https://api.github.com/repos/huggingface/transformers/issues/722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/722/comments | https://api.github.com/repos/huggingface/transformers/issues/722/events | https://github.com/huggingface/transformers/issues/722 | 460,144,005 | MDU6SXNzdWU0NjAxNDQwMDU= | 722 | low accuracy when fine tuning for the MRPC task with large model | {
"login": "syu0000",
"id": 52182461,
"node_id": "MDQ6VXNlcjUyMTgyNDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/52182461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/syu0000",
"html_url": "https://github.com/syu0000",
"followers_url": "https://api.github.com/users/syu0000/followers",
"following_url": "https://api.github.com/users/syu0000/following{/other_user}",
"gists_url": "https://api.github.com/users/syu0000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/syu0000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/syu0000/subscriptions",
"organizations_url": "https://api.github.com/users/syu0000/orgs",
"repos_url": "https://api.github.com/users/syu0000/repos",
"events_url": "https://api.github.com/users/syu0000/events{/privacy}",
"received_events_url": "https://api.github.com/users/syu0000/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"The batch size will be 8 times smaller with only one GPU, increase it by a factor of 8 using gradient accumulation, e.g. `--train_batch_size 96 --gradient_accumulation_steps 8`",
"Thank you for your help. However, when I use this command: \r\npython run_classifier.py --bert_model bert-large-uncased-whole-word-masking --task_name MRPC –-do_train --do_eval --do_lower_case --data_dir E:\\Users\\...\\MRPC --max_seq_length 128 --train_batch_size 48 --gradient_accumulation_steps 8 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir E:\\Users\\...\\MRPC_result\r\nit still gives me a too low acc result.\r\nbtw I am not using the latest version of package, could that be the cause?\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | I noticed that on the website you said:"Here is an example using distributed training on 8 V100 GPUs and Bert Whole Word Masking model to reach a F1 > 92 on MRPC." However, when I fine tuned the model with max_sequence_length=128 and batch_size=12 on a single 11G GPU, it gives the accuracy of 0.68.
acc = 0.6838235294117647
acc_and_f1 = 0.7480253018237863
eval_loss = 0.6240295206799227
f1 = 0.8122270742358079
global_step = 918
loss = None
I wonder what have let to that.
the command I used "python run_classifier.py --bert_model bert-large-uncased-whole-word-masking --task_name MRPC --do_train --do_eval --do_lower_case --data_dir E:\Users\..... --max_seq_length 64 --train_batch_size 12 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir E:\Users\...." | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/722/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/721/comments | https://api.github.com/repos/huggingface/transformers/issues/721/events | https://github.com/huggingface/transformers/issues/721 | 459,949,826 | MDU6SXNzdWU0NTk5NDk4MjY= | 721 | Usual loss when pretraining? | {
"login": "PedroUria",
"id": 43831167,
"node_id": "MDQ6VXNlcjQzODMxMTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/43831167?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PedroUria",
"html_url": "https://github.com/PedroUria",
"followers_url": "https://api.github.com/users/PedroUria/followers",
"following_url": "https://api.github.com/users/PedroUria/following{/other_user}",
"gists_url": "https://api.github.com/users/PedroUria/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PedroUria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PedroUria/subscriptions",
"organizations_url": "https://api.github.com/users/PedroUria/orgs",
"repos_url": "https://api.github.com/users/PedroUria/repos",
"events_url": "https://api.github.com/users/PedroUria/events{/privacy}",
"received_events_url": "https://api.github.com/users/PedroUria/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@PedroUria \r\n\r\nWe're in a similar boat as you are. In our case, the problem was with accuracy though.\r\n\r\nWe used `BertForSequenceClassification` on a multi-label classification task. We've actually written a similar version of `BertForSequenceClassification` as written in `models.py` of this repository changing `CrossEntropyLoss` with `BCELogitLoss`.\r\n\r\nInitially we faced some issues with loading pre-trained weights, but once we did sucessfully and started training ( 8 label target ), even though our loss started off very small `0.24` in first step of an epoch and went down to `0.023` we never saw accuracy on training set more than `0.71` and validation `.81` in a specific batch. In comparision we have much smaller data set than yours , actually not even close : `350 rows training` and `120 rows` evaluation. So, it's likely that the poor results we're experiencing is because of the minimal data size.\r\n\r\nWe ran for only 4 epochs though, we also used `Cyclic_LR` for scheduling with `BertAdam` optimizer with same learning rate as yours. I did create an issue looking for answers. Hope they'll respond.\r\n\r\n",
"Are you finetuning on the data afterwards? If yes then I believe it is due to the very small dataset. Try to finetune on a larger dataset on a similiar task first and then finetune on your small dataset.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,568 | 1,568 | NONE | null | We are pretraining on our own corpus using the `pregenerate_training_data.py` and `finetune_on_pregenerated.py` scripts.
The input text to the first script follows the same format as `sample_text.txt` on the samples folder, and contains about 515000 lines of text. We run `finetune_on_pregenerated.py` for 60 epochs with the default learning rate (3e-5), 128 sequence length, 300 batch size using 8gpus. The loss got to 1.3. We used this model for a classification task and we didn't see any difference than using the original pretrained BERT. We also compared the weights of some attention layers and they are very similar.
Do you have an estimate as to what the loss should be in order to see improvements? We are also aware we should use 512 for some of the pretraining process because most of our input sequences are 512 long, but still we were expecting some kind of change.
Also, do you think it might be a problem with the small size of our corpus?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/721/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/720/comments | https://api.github.com/repos/huggingface/transformers/issues/720/events | https://github.com/huggingface/transformers/issues/720 | 459,909,407 | MDU6SXNzdWU0NTk5MDk0MDc= | 720 | Import Error: cannot import name 'warmup_linear' | {
"login": "GBR-613",
"id": 17687869,
"node_id": "MDQ6VXNlcjE3Njg3ODY5",
"avatar_url": "https://avatars.githubusercontent.com/u/17687869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GBR-613",
"html_url": "https://github.com/GBR-613",
"followers_url": "https://api.github.com/users/GBR-613/followers",
"following_url": "https://api.github.com/users/GBR-613/following{/other_user}",
"gists_url": "https://api.github.com/users/GBR-613/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GBR-613/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GBR-613/subscriptions",
"organizations_url": "https://api.github.com/users/GBR-613/orgs",
"repos_url": "https://api.github.com/users/GBR-613/repos",
"events_url": "https://api.github.com/users/GBR-613/events{/privacy}",
"received_events_url": "https://api.github.com/users/GBR-613/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I probably was wrong and the issue was supposed to be fixed with PR #518... Anyway, the latest was merged a while ago and did not help. ",
"You have to wait for the next release or use the master branch",
"@thomwolf ,\r\n> You have to wait for the next release or use the master branch\r\n\r\nI cloned the git repo directly from here:\r\n\r\n`git clone https://github.com/huggingface/pytorch-pretrained-BERT.git`\r\n\r\nI think it takes master branch by default, isn't it?\r\n",
"Explicit \"git checkout master\" changes nothing.",
"The latest master branch gives the same issue. Version 0.5.1 works as of now without this issue",
"The version 0.4.0 doesn't give this issue.\r\npip install pytorch_pretrained_bert==0.4.0",
"> The version 0.4.0 doesn't give this issue.\r\n> pip install pytorch_pretrained_bert==0.4.0\r\n\r\nDowngrading to 0.4.0 solved my problem.",
"This issue has popped up again in 0.6.2 for me. Downgrading to 0.6.1 solved it\r\n```bash\r\npip install pytorch-pretrained-bert==0.6.1\r\n```"
] | 1,561 | 1,604 | 1,564 | NONE | null | I get the following error:
```
File "/Users/gregory/PROJECTS/MyML/MLClassification/TrainAndTest/Models/controller.py", line 11, in <module>
from Models.bert import BertModel
File "/Users/gregory/PROJECTS/MyML/MLClassification/TrainAndTest/Models/bert.py", line 9, in <module>
from pytorch_pretrained_bert.optimization import BertAdam, warmup_linear
ImportError: cannot import name 'warmup_linear'
```
This problem was already noted in comments to the following issue:
[https://github.com/huggingface/pytorch-pretrained-BERT/issues/499](https://github.com/huggingface/pytorch-pretrained-BERT/issues/499)
The response said that it was fixed with PR #506, and suggested to clone this Git repository and install the package from it.
However @goyalsaransh97 already mentioned that the problem persists for him (or her).
So it does for me too.
BTW, please notice, the code works fine on machine where all packages were installed about an year ago.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/720/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/719/comments | https://api.github.com/repos/huggingface/transformers/issues/719/events | https://github.com/huggingface/transformers/issues/719 | 459,734,720 | MDU6SXNzdWU0NTk3MzQ3MjA= | 719 | Embedding and predictions in one forward pass | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, just make your own PyTorch model taking inspiration from BertModel and BertForMaskedLM.\r\nIf you sub-class `BertPreTrainedModel`, you'll be able to load the pretrained weights using the `from_pretrained()` method",
"Okay, thank you :) "
] | 1,561 | 1,561 | 1,561 | NONE | null | Is it possible to mix `BertModel` and `BertForMaskedLM`? i.e. is it possible to get the embedding and the predictions in one forward pass? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/719/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/718/comments | https://api.github.com/repos/huggingface/transformers/issues/718/events | https://github.com/huggingface/transformers/pull/718 | 459,600,873 | MDExOlB1bGxSZXF1ZXN0MjkwOTA2MDYy | 718 | Incorrect docstring for BertForMaskedLM | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=h1) Report\n> Merging [#718](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/98dc30b21e3df6528d0dd17f0910ffea12bc0f33?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #718 +/- ##\n=======================================\n Coverage 62.22% 62.22% \n=======================================\n Files 18 18 \n Lines 3979 3979 \n=======================================\n Hits 2476 2476 \n Misses 1503 1503\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmcucHk=) | `79.49% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=footer). Last update [98dc30b...8d6a118](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/718?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks, we'll make a big clean up of the docstrings for the coming release."
] | 1,561 | 1,561 | 1,561 | MEMBER | null | The docstring for the head_mask argument to the BertForMaskedLM class is repeated and one is incorrect - I presume it's just a copy-paste mistake. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/718/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/718",
"html_url": "https://github.com/huggingface/transformers/pull/718",
"diff_url": "https://github.com/huggingface/transformers/pull/718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/718.patch",
"merged_at": 1561734531000
} |
https://api.github.com/repos/huggingface/transformers/issues/717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/717/comments | https://api.github.com/repos/huggingface/transformers/issues/717/events | https://github.com/huggingface/transformers/issues/717 | 459,520,306 | MDU6SXNzdWU0NTk1MjAzMDY= | 717 | BPE vocab | {
"login": "sashank06",
"id": 8636933,
"node_id": "MDQ6VXNlcjg2MzY5MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8636933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashank06",
"html_url": "https://github.com/sashank06",
"followers_url": "https://api.github.com/users/sashank06/followers",
"following_url": "https://api.github.com/users/sashank06/following{/other_user}",
"gists_url": "https://api.github.com/users/sashank06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashank06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashank06/subscriptions",
"organizations_url": "https://api.github.com/users/sashank06/orgs",
"repos_url": "https://api.github.com/users/sashank06/repos",
"events_url": "https://api.github.com/users/sashank06/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashank06/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Not speaking for the core developers, but `pytorch-pretrained-BERT` supports it, because:\r\n\r\n* GPT-1 use BPE, see code [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_openai.py#L73)\r\n* GPT-2 use BPE on byte level, see code [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/tokenization_gpt2.py#L88)\r\n\r\nBERT uses a variant of BPE (word pieces) and the pretrained language model for Transformer-XL was trained on WikiText-103 (so it is a word-based model).\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,567 | 1,567 | NONE | null | Do you guys have the functionality to support BPE with the models? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/717/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/716/comments | https://api.github.com/repos/huggingface/transformers/issues/716/events | https://github.com/huggingface/transformers/pull/716 | 459,515,735 | MDExOlB1bGxSZXF1ZXN0MjkwODUwODYx | 716 | Add tie_weights to XLNetForSequenceClassification | {
"login": "Strideradu",
"id": 9002118,
"node_id": "MDQ6VXNlcjkwMDIxMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9002118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Strideradu",
"html_url": "https://github.com/Strideradu",
"followers_url": "https://api.github.com/users/Strideradu/followers",
"following_url": "https://api.github.com/users/Strideradu/following{/other_user}",
"gists_url": "https://api.github.com/users/Strideradu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Strideradu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Strideradu/subscriptions",
"organizations_url": "https://api.github.com/users/Strideradu/orgs",
"repos_url": "https://api.github.com/users/Strideradu/repos",
"events_url": "https://api.github.com/users/Strideradu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Strideradu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=h1) Report\n> Merging [#716](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=desc) into [xlnet](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c946bb51a61f67b0c9eaae1c9cf6f164a7748e37?src=pr&el=desc) will **increase** coverage by `0.03%`.\n> The diff coverage is `50%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## xlnet #716 +/- ##\n==========================================\n+ Coverage 62.18% 62.22% +0.03% \n==========================================\n Files 22 22 \n Lines 4742 4744 +2 \n==========================================\n+ Hits 2949 2952 +3 \n+ Misses 1793 1792 -1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_pretrained\\_bert/modeling\\_xlnet.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvbW9kZWxpbmdfeGxuZXQucHk=) | `65.16% <50%> (-0.06%)` | :arrow_down: |\n| [pytorch\\_pretrained\\_bert/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716/diff?src=pr&el=tree#diff-cHl0b3JjaF9wcmV0cmFpbmVkX2JlcnQvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `83.51% <0%> (+1.06%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=footer). Last update [c946bb5...00547bd](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/716?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks but we don't need that, the model was work in progress."
] | 1,561 | 1,561 | 1,561 | NONE | null | XLNetForSequenceClassification doesn't have tie_weights() but initialization will call it, or we can made a function in XLNetPretrainedModel? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/716/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/716",
"html_url": "https://github.com/huggingface/transformers/pull/716",
"diff_url": "https://github.com/huggingface/transformers/pull/716.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/716.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/715/comments | https://api.github.com/repos/huggingface/transformers/issues/715/events | https://github.com/huggingface/transformers/pull/715 | 459,477,532 | MDExOlB1bGxSZXF1ZXN0MjkwODI1Nzc5 | 715 | Include a reference for LM finetuning | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=h1) Report\n> Merging [#715](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #715 +/- ##\n=======================================\n Coverage 62.27% 62.27% \n=======================================\n Files 18 18 \n Lines 3979 3979 \n=======================================\n Hits 2478 2478 \n Misses 1501 1501\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=footer). Last update [c304593...c7b2808](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/715?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Nice!"
] | 1,561 | 1,561 | 1,561 | MEMBER | null | @lopuhin recently made me aware of a published paper covering domain fine-tuning of BERT models, so I added a reference to the LM finetuning README. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/715/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/715",
"html_url": "https://github.com/huggingface/transformers/pull/715",
"diff_url": "https://github.com/huggingface/transformers/pull/715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/715.patch",
"merged_at": 1561231760000
} |
https://api.github.com/repos/huggingface/transformers/issues/714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/714/comments | https://api.github.com/repos/huggingface/transformers/issues/714/events | https://github.com/huggingface/transformers/pull/714 | 459,465,533 | MDExOlB1bGxSZXF1ZXN0MjkwODE4MDg2 | 714 | Correct a broken link on README | {
"login": "changukshin",
"id": 6247953,
"node_id": "MDQ6VXNlcjYyNDc5NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6247953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/changukshin",
"html_url": "https://github.com/changukshin",
"followers_url": "https://api.github.com/users/changukshin/followers",
"following_url": "https://api.github.com/users/changukshin/following{/other_user}",
"gists_url": "https://api.github.com/users/changukshin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/changukshin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/changukshin/subscriptions",
"organizations_url": "https://api.github.com/users/changukshin/orgs",
"repos_url": "https://api.github.com/users/changukshin/repos",
"events_url": "https://api.github.com/users/changukshin/events{/privacy}",
"received_events_url": "https://api.github.com/users/changukshin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=h1) Report\n> Merging [#714](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #714 +/- ##\n=======================================\n Coverage 62.27% 62.27% \n=======================================\n Files 18 18 \n Lines 3979 3979 \n=======================================\n Hits 2478 2478 \n Misses 1501 1501\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=footer). Last update [c304593...ada0d8f](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/714?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"👍"
] | 1,561 | 1,561 | 1,561 | CONTRIBUTOR | null | I've correct a broken link and its contexts on README. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/714/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/714",
"html_url": "https://github.com/huggingface/transformers/pull/714",
"diff_url": "https://github.com/huggingface/transformers/pull/714.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/714.patch",
"merged_at": 1561231781000
} |
https://api.github.com/repos/huggingface/transformers/issues/713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/713/comments | https://api.github.com/repos/huggingface/transformers/issues/713/events | https://github.com/huggingface/transformers/issues/713 | 459,452,941 | MDU6SXNzdWU0NTk0NTI5NDE= | 713 | TypeError: expand_as() takes 1 positional argument but 5 were given | {
"login": "anxingle",
"id": 8489818,
"node_id": "MDQ6VXNlcjg0ODk4MTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8489818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anxingle",
"html_url": "https://github.com/anxingle",
"followers_url": "https://api.github.com/users/anxingle/followers",
"following_url": "https://api.github.com/users/anxingle/following{/other_user}",
"gists_url": "https://api.github.com/users/anxingle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anxingle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anxingle/subscriptions",
"organizations_url": "https://api.github.com/users/anxingle/orgs",
"repos_url": "https://api.github.com/users/anxingle/repos",
"events_url": "https://api.github.com/users/anxingle/events{/privacy}",
"received_events_url": "https://api.github.com/users/anxingle/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Oh yes, this will be fixed in the coming PR #711.\r\nHead mask is an option to explore the model internals, it's not for production.\r\nSee the `bertology.py` example script.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,561 | 1,566 | 1,566 | NONE | null | [Model.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py) line [870](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L870):
`head_mask = head_mask.expand_as(self.config.num_hidden_layers, -1, -1, -1, -1)`
got the mistake, I tried head_mask=torch.tensor([1, 2, 3]) or just like that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/713/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/712/comments | https://api.github.com/repos/huggingface/transformers/issues/712/events | https://github.com/huggingface/transformers/issues/712 | 459,345,851 | MDU6SXNzdWU0NTkzNDU4NTE= | 712 | BERT Tokenizer not working! Failed to load the bert-base-uncased model. | {
"login": "Raghavendra15",
"id": 7957331,
"node_id": "MDQ6VXNlcjc5NTczMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7957331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raghavendra15",
"html_url": "https://github.com/Raghavendra15",
"followers_url": "https://api.github.com/users/Raghavendra15/followers",
"following_url": "https://api.github.com/users/Raghavendra15/following{/other_user}",
"gists_url": "https://api.github.com/users/Raghavendra15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raghavendra15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raghavendra15/subscriptions",
"organizations_url": "https://api.github.com/users/Raghavendra15/orgs",
"repos_url": "https://api.github.com/users/Raghavendra15/repos",
"events_url": "https://api.github.com/users/Raghavendra15/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raghavendra15/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Do you have a good internet connection? The error messages will be improved in the coming release but usually, this comes from the library not being able to reach AWS S3 servers to download the pretrained weights.",
"@thomwolf Thank you so much for your quick response! I followed your advice to people on other posts where they can't load the model. What I did then is to try to download and test the model in the command line. \r\nSo I tried the following and it worked. \r\n\r\nWhat I couldn't understand is the fact that why I have to manually import BERT packages in a python shell when I already installed it using pip3?\r\n\r\nBelow is what I tried and it worked.\r\n>>> from pytorch_pretrained_bert.modeling import BertForNextSentencePrediction\r\nKeyboardInterrupt\r\n>>> model = BertForNextSentencePrediction.from_pretrained(\r\n... \"bert-base-uncased\"\r\n... ).to(device)\r\n100%|████████████████████████████████████████████| 407873900/407873900 [00:08<00:00, 48525133.57B/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 3, in <module>\r\nNameError: name 'device' is not defined\r\n############################################################\r\nI fixed the device thing and below is the proper output.\r\n>>> from pytorch_pretrained_bert.modeling import BertForNextSentencePrediction\r\n>>> model = BertForNextSentencePrediction.from_pretrained(\r\n... \"bert-base-uncased\"\r\n... ).to(device)\r\n\r\n",
"I solved the problem by removing 'cache_dir=PYTORCH_PRETRAINED_BERT_CACHE'. The function is trying to find the downloaded model in your cache_dir, but if you haven't downloaded anything. then you should remove this argument.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"If you are using Kaggle then make sure that the internet toggle button is switched on the right-hand side."
] | 1,561 | 1,593 | 1,571 | NONE | null | The sentence that is being tokenized is: "Weather: Summer’s Finally Here. So Where Is It?"
But it gives the following error:
Error message:
AttributeError Traceback (most recent call last)
<ipython-input-78-c51eef61e2b9> in <module>
----> 1 correct_pairs = convert_sentence_pair(df_full.title.tolist(), df_full.desc.tolist(), max_seq_length=200, tokenizer=tokenizer)
2
3
<ipython-input-76-da322eec2f23> in convert_sentence_pair(titles, descs, max_seq_length, tokenizer)
3 for (ex_index, (title, desc)) in enumerate(zip(titles, descs)):
4 print(title)
----> 5 tokens_a = tokenizer.tokenize(title)
6
7 tokens_b = None
AttributeError: 'NoneType' object has no attribute 'tokenize'
When I tried to load the module manually I got the following issue:
tokenizer = BertTokenizer.from_pretrained(
... "bert-base-uncased", do_lower_case=True,
... cache_dir=PYTORCH_PRETRAINED_BERT_CACHE)
Model name 'bert-base-uncased' was not found in model name list (bert-base-cased, bert-large-uncased, bert-large-cased, bert-base-multilingual-cased, bert-base-chinese, bert-base-uncased, bert-base-multilingual-uncased). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt' was a path or url but couldn't find any file associated to this path or url.
Can anyone please help? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/712/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/711/comments | https://api.github.com/repos/huggingface/transformers/issues/711/events | https://github.com/huggingface/transformers/pull/711 | 459,282,406 | MDExOlB1bGxSZXF1ZXN0MjkwNjc1OTEy | 711 | PyTorch-Transformers 1.0 - w. XLNet and XLM model - Standard API - Torchscript compatibility | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@thomwolf I could get the `XLNetLMHeadModel` running, but I have some issues with the \"normal\" `XLNetModel` implementation:\r\n\r\n```python\r\nimport torch\r\nfrom pytorch_pretrained_bert import XLNetTokenizer, XLNetModel\r\n\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n\r\ntokenizer = XLNetTokenizer.from_pretrained(\"xlnet-large-cased\")\r\n\r\ntext = \"Who was Jim Henson ? Jim Henson was a puppeteer\"\r\ntokenized_text = tokenizer.encode(text)\r\n\r\nindexed_tokens = torch.tensor([tokenized_text])\r\n\r\nmodel = XLNetModel.from_pretrained(\"xlnet-large-cased\")\r\nmodel.eval()\r\n\r\nwith torch.no_grad():\r\n hidden_states, mems = model(indexed_tokens)\r\n print(hidden_states)\r\n```\r\n\r\n-> is currently not working, `hidden_states = model(indexed_tokens)` always returns `nan`s for the embeddings. \r\n\r\nThanks :heart: ",
"Indeed `from_pretrained()` was not loading weights in `XLNetModel` (did you see that in the logs?). Should be fixed now. This is all very WIP so beware @stefan-it!",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=h1) Report\n> Merging [#711](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/commit/c304593d8fa93f25febe1458c63497a846749c89?src=pr&el=desc) will **increase** coverage by `16.62%`.\n> The diff coverage is `81.32%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #711 +/- ##\n===========================================\n+ Coverage 62.27% 78.89% +16.62% \n===========================================\n Files 18 34 +16 \n Lines 3979 6180 +2201 \n===========================================\n+ Hits 2478 4876 +2398 \n+ Misses 1501 1304 -197\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tests/modeling\\_openai\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfb3BlbmFpX3Rlc3QucHk=) | `84.21% <ø> (ø)` | |\n| [pytorch\\_transformers/tests/conftest.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvY29uZnRlc3QucHk=) | `90% <ø> (ø)` | |\n| [pytorch\\_transformers/tests/modeling\\_xlnet\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfeGxuZXRfdGVzdC5weQ==) | `95.89% <ø> (ø)` | |\n| [pytorch\\_transformers/tokenization\\_openai.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX29wZW5haS5weQ==) | `82.2% <ø> (ø)` | |\n| [pytorch\\_transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `96.66% <ø> (ø)` | |\n| [pytorch\\_transformers/tests/modeling\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvbW9kZWxpbmdfZ3B0Ml90ZXN0LnB5) | `84.21% <ø> (ø)` | |\n| [pytorch\\_transformers/tokenization\\_xlm.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3hsbS5weQ==) | `82.05% <ø> (ø)` | |\n| [...torch\\_transformers/tests/tokenization\\_gpt2\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX2dwdDJfdGVzdC5weQ==) | `96.87% <ø> (ø)` | |\n| [...h\\_transformers/tests/tokenization\\_tests\\_commons.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3Rlc3RzX2NvbW1vbnMucHk=) | `100% <ø> (ø)` | |\n| [...ytorch\\_transformers/tests/tokenization\\_xlm\\_test.py](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvdG9rZW5pemF0aW9uX3hsbV90ZXN0LnB5) | `96.77% <ø> (ø)` | |\n| ... and [55 more](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711/diff?src=pr&el=tree-more) | |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=footer). Last update [c304593...8ad7e5b](https://codecov.io/gh/huggingface/pytorch-pretrained-BERT/pull/711?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks for the fix - it's working now :)\r\n\r\nSorry, if that's not the right place here:\r\nI've implemented a new embedding class in `flair`. But when the complete model is going to be saved (via `torch.save`), the following error message is shown:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"train_xlnet.py\", line 36, in <module>\r\n max_epochs=500)\r\n File \"/mnt/flair/flair/trainers/trainer.py\", line 341, in train\r\n self.model.save(base_path / \"best-model.pt\", pickle_module=self.pickle_module)\r\n File \"/mnt/flair/flair/nn.py\", line 86, in save\r\n self.save_torch_model(model_state, str(model_file), pickle_module)\r\n File \"/mnt/flair/flair/nn.py\", line 76, in save_torch_model\r\n torch.save(model_state, str(model_file), pickle_protocol=pickle_protocol)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 224, in save\r\n return _with_file_like(f, \"wb\", lambda f: _save(obj, f, pickle_module, pickle_protocol))\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 149, in _with_file_like\r\n return body(f)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 224, in <lambda>\r\n return _with_file_like(f, \"wb\", lambda f: _save(obj, f, pickle_module, pickle_protocol))\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 297, in _save\r\n pickler.dump(obj)\r\nTypeError: can't pickle SwigPyObject objects\r\n```\r\n\r\nThis error message does not appear, when e.g. using the GPT1 embeddings. Do you have any hint, what's going wrong here 🤔\r\n\r\n",
"Am I correct that model.eval() does not work with this yet? It seems all predictions generated are the same. \r\n\r\nThis is how im doing it:\r\n\r\n```\r\nconfig = XLNetConfig('config.json')\r\nmodel = XLNetForSequenceClassification(config, num_labels=3)\r\nmodel.load_state_dict(torch.load(\"xlnet_pytorch.bin\"))\r\nmodel.to(device)\r\nfor param in model.parameters():\r\n param.requires_grad = False\r\nmodel.eval()\r\n```\r\n\r\nOr am i doing something wrong?",
"@thomwolf I think this is branch still WIP yes? Because the `run_xlnet_squad.py` still refers to BertForQuestionAnswering. SHould I just change that to XLnetForQuestionAnswering and it should work? or maybe I should wait more?",
"Yeah, I'm still on it, finishing the tests of XLNetForSequenceClassification so you should wait more (or help me code the XLNetForQuestionAnswering, haha).\r\nI'll make the description of the PR more explicit that only the base model is up now and `XLNetForSequenceClassification` and `XLNetForQuestionAnswering` are not ready yet.",
"I think I found the root cause of the serialization problem (described in one of the previous comments here):\r\n\r\nThe sentencepiece processor object cannot be correctly serialized. I found a similar issue in the xnmt library:\r\n\r\nhttps://github.com/neulab/xnmt/pull/351",
"> Yeah, I'm still on it, finishing the tests of XLNetForSequenceClassification so you should wait more (or help me code the XLNetForQuestionAnswering, haha).\r\n\r\n\r\nI've started to work on it with whatever time I've :(\r\nSo far I'm getting an error in the `modeling_xlnet.py` in line 752: \r\n` attention_mask = attention_mask.transpose(0, 1).contiguous() if attention_mask is not None else None`\r\n\r\nWhoever finishes first should let here know..But appreciate your work here @thomwolf.",
"@stefan-it do you know if there is a workaround to the sentencepiece serialization issue?",
"@thomwolf I think this can be fixed with:\r\n\r\n```python\r\n def __getstate__(self):\r\n state = self.__dict__.copy()\r\n state[\"sp_model\"] = None\r\n return state\r\n\r\n def __setstate__(self, d):\r\n self.__dict__ = d\r\n try:\r\n import sentencepiece as spm\r\n except ImportError:\r\n logger.warning(\"You need to install SentencePiece to use XLNetTokenizer: https://github.com/google/sentencepiece\"\r\n \"pip install sentencepiece\")\r\n self.sp_model = spm.SentencePieceProcessor()\r\n self.sp_model.Load(self.vocab_file)\r\n```\r\n\r\nIn the `XLNetTokenizer` class. A nice test case would be:\r\n\r\n```python\r\nimport pickle\r\nfrom pytorch_pretrained_bert import XLNetTokenizer\r\n\r\ntokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')\r\n\r\ntext = \"Munich and Berlin are nice cities\"\r\nfilename = \"tokenizer.bin\"\r\n\r\nsubwords = tokenizer.tokenize(text)\r\n\r\npickle.dump(tokenizer, open(filename, \"wb\"))\r\n\r\ntokenizer_new = pickle.load(open(filename, \"rb\"))\r\nsubwords_loaded = tokenizer_new.tokenize(text)\r\n\r\nassert subwords == subwords_loaded\r\n```",
"I'm hitting a bug where the head mask created here:\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/xlnet/pytorch_pretrained_bert/modeling_xlnet.py#L854\r\n\r\nis a list of None values instead of just None, which eventually results in an error: \r\n\r\n```\r\n File \"../pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling_xlnet.py\", line 397, in rel_attn_core\r\n attn_prob = attn_prob * head_mask\r\nTypeError: mul(): argument 'other' (position 1) must be Tensor, not list\r\n```\r\n\r\nI think the fix is to either make the mask a single None value, or to index it with the layer number when it's used.\r\n\r\n----\r\n\r\nI also ran into one more error:\r\n\r\n```\r\nFile \"../pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling_xlnet.py\", line 555, in forward\r\n outputs = [output_h, output_g] + outputs[2:] # Add again attentions if there are there\r\nTypeError: can only concatenate list (not \"tuple\") to list\r\n```\r\nNot sure what's causing this, but casting `outputs[2:]` to a list seems to fix things.",
"@stefan-it Great, thanks!",
"@nikitakit Yeah I'm refactoring the API among models to make it more consistent/simpler to switch among models. Will be finished soon.",
"While modifying `run_xlnet_squad.py` I looked through the `run_squad.py` of the xlnet repo. It seems he made a bunch of changes to `convert_examples_to_features` (adding lcs etc.) Also, I didn't see that he put the cls at the end; rather p before than q (I think; hope I'm not wrong). Have you started to work on it @thomwolf ? It's interesting :) ",
"One quick update in case someone else is also working on the Squad XLnet fine tuning. I got this error and working on it seems that the end logits in the QA layer is buggy.\r\n\r\n`Traceback (most recent call last):\r\n File \"/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/run_squad.py\", line 478, in <module>\r\n main()\r\n File \"/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/run_squad.py\", line 470, in main\r\n result = evaluate(args, model, tokenizer, prefix=global_step)\r\n File \"/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/run_squad.py\", line 219, in evaluate\r\n args.null_score_diff_threshold)\r\n File \"/dccstor/avistor4/squad/expts/xlnet_branch_hf/pytorch-pretrained-BERT/examples/utils_squad.py\", line 463, in write_predictions\r\n feature_null_score = result.start_logits[0] + result.end_logits[0]\r\nTypeError: unsupported operand type(s) for +: 'float' and `'list'`\r\n",
"Is there any rough estimation as to when \"a finetuning example with results close to TF\" will be available?",
"> \r\n> \r\n> Thanks for the fix - it's working now :)\r\n> \r\n> Sorry, if that's not the right place here:\r\n> I've implemented a new embedding class in `flair`. But when the complete model is going to be saved (via `torch.save`), the following error message is shown:\r\n> \r\n> ```shell\r\n> Traceback (most recent call last):\r\n> File \"train_xlnet.py\", line 36, in <module>\r\n> max_epochs=500)\r\n> File \"/mnt/flair/flair/trainers/trainer.py\", line 341, in train\r\n> self.model.save(base_path / \"best-model.pt\", pickle_module=self.pickle_module)\r\n> File \"/mnt/flair/flair/nn.py\", line 86, in save\r\n> self.save_torch_model(model_state, str(model_file), pickle_module)\r\n> File \"/mnt/flair/flair/nn.py\", line 76, in save_torch_model\r\n> torch.save(model_state, str(model_file), pickle_protocol=pickle_protocol)\r\n> File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 224, in save\r\n> return _with_file_like(f, \"wb\", lambda f: _save(obj, f, pickle_module, pickle_protocol))\r\n> File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 149, in _with_file_like\r\n> return body(f)\r\n> File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 224, in <lambda>\r\n> return _with_file_like(f, \"wb\", lambda f: _save(obj, f, pickle_module, pickle_protocol))\r\n> File \"/usr/local/lib/python3.6/dist-packages/torch/serialization.py\", line 297, in _save\r\n> pickler.dump(obj)\r\n> TypeError: can't pickle SwigPyObject objects\r\n> ```\r\n> \r\n> This error message does not appear, when e.g. using the GPT1 embeddings. Do you have any hint, what's going wrong here 🤔\r\n\r\nHi, have you solved this issue? I'm trying to implement XLMRoberta in my model based on flair and meet the same issue when saving the model. "
] | 1,561 | 1,576 | 1,563 | MEMBER | null | Current status:
- [x] model with commented code and pretrained loading logic
- [x] tokenizer
- [x] tests for model and tokenizer
- [x] checking standard deviation of hidden states with TF model is ok (max dev btw 1e-4 & 1e-5 until last layer, last layer 1e-3, higher than bert but should be ok, investigated this in details, comes from the conjunction of layer_norm and slight differences in internal PT vs. TF ops. Add some graphs to readme)
- [x] converting and uploading model to S3
Model/tokenizer are usable, now just need to
- [x] check the model behave well under various conditions and in a few corner cases
- [ ] add `XLNetForQuestionAnswering` classes variants
- [x] add `XLNetForSequenceClassification` classes variants
- [ ] add a finetuning example with results close to TF
- [ ] add models in README
- [ ] add models on torch.hub | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/711/reactions",
"total_count": 79,
"+1": 40,
"-1": 0,
"laugh": 0,
"hooray": 13,
"confused": 0,
"heart": 0,
"rocket": 15,
"eyes": 11
} | https://api.github.com/repos/huggingface/transformers/issues/711/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/711",
"html_url": "https://github.com/huggingface/transformers/pull/711",
"diff_url": "https://github.com/huggingface/transformers/pull/711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/711.patch",
"merged_at": 1563270683000
} |
https://api.github.com/repos/huggingface/transformers/issues/710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/710/comments | https://api.github.com/repos/huggingface/transformers/issues/710/events | https://github.com/huggingface/transformers/issues/710 | 459,174,943 | MDU6SXNzdWU0NTkxNzQ5NDM= | 710 | A way to increase input length limitation? | {
"login": "lcswillems",
"id": 5437552,
"node_id": "MDQ6VXNlcjU0Mzc1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcswillems",
"html_url": "https://github.com/lcswillems",
"followers_url": "https://api.github.com/users/lcswillems/followers",
"following_url": "https://api.github.com/users/lcswillems/following{/other_user}",
"gists_url": "https://api.github.com/users/lcswillems/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcswillems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcswillems/subscriptions",
"organizations_url": "https://api.github.com/users/lcswillems/orgs",
"repos_url": "https://api.github.com/users/lcswillems/repos",
"events_url": "https://api.github.com/users/lcswillems/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcswillems/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No way as far as I can tell, this is a fundamental limitation for absolute position pre-trained models (i.e. BERT, GPT, GPT-2)",
"Okay thank you!"
] | 1,561 | 1,561 | 1,561 | NONE | null | Hi,
Is there a way to increase input length limitation of 512 tokens?
Maybe something to change in the code? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/710/timeline | completed | null | null |
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.