url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/909/comments | https://api.github.com/repos/huggingface/transformers/issues/909/events | https://github.com/huggingface/transformers/pull/909 | 473,448,735 | MDExOlB1bGxSZXF1ZXN0MzAxNjEzOTM4 | 909 | [develop] Convenience args.{train/dev}_file arguments. | {
"login": "kshitij12345",
"id": 19503980,
"node_id": "MDQ6VXNlcjE5NTAzOTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/19503980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kshitij12345",
"html_url": "https://github.com/kshitij12345",
"followers_url": "https://api.github.com/users/kshitij12345/followers",
"following_url": "https://api.github.com/users/kshitij12345/following{/other_user}",
"gists_url": "https://api.github.com/users/kshitij12345/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kshitij12345/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kshitij12345/subscriptions",
"organizations_url": "https://api.github.com/users/kshitij12345/orgs",
"repos_url": "https://api.github.com/users/kshitij12345/repos",
"events_url": "https://api.github.com/users/kshitij12345/events{/privacy}",
"received_events_url": "https://api.github.com/users/kshitij12345/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Why don't give the full path of the train/dev file instead of giving data_dir?",
"My thought was not the change much of the argument interface. If we support full path, then the `data_dir` will not be required and considered. So I wasn't sure if that change is the way to go. Sure, we can change it like that as well.\r\n\r\nPersonally, I do agree that full path for train/dev is more convenient while using."
] | 1,564 | 1,568 | 1,568 | CONTRIBUTOR | null | Adds Arguments
```
--train_file any_train_file.tsv \
--dev_file any_dev_file.tsv \
```
to use any file for training/dev inside the specified data directory.
Especially handy for evaluation.
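For reference, a minimal sketch of how these flags could sit next to the existing `--data_dir` argument (illustrative wiring only, not the actual diff of this PR):
```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--data_dir", type=str, required=True)
# New convenience flags: they only change which file names are looked up
# inside --data_dir; the defaults keep the old behaviour.
parser.add_argument("--train_file", type=str, default="train.tsv")
parser.add_argument("--dev_file", type=str, default="dev.tsv")
args = parser.parse_args()

train_path = os.path.join(args.data_dir, args.train_file)
dev_path = os.path.join(args.data_dir, args.dev_file)
```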
Allows for
```
python run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--train_file any_train_file.tsv \
--dev_file any_dev_file.tsv \
--do_lower_case \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir ./Data/Test/
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/909/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/909",
"html_url": "https://github.com/huggingface/transformers/pull/909",
"diff_url": "https://github.com/huggingface/transformers/pull/909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/909.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/908/comments | https://api.github.com/repos/huggingface/transformers/issues/908/events | https://github.com/huggingface/transformers/issues/908 | 473,442,900 | MDU6SXNzdWU0NzM0NDI5MDA= | 908 | Cannot inherit from BertPretrainedModel anymore after migrating to pytorch-transformers | {
"login": "ereday",
"id": 13196191,
"node_id": "MDQ6VXNlcjEzMTk2MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/13196191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ereday",
"html_url": "https://github.com/ereday",
"followers_url": "https://api.github.com/users/ereday/followers",
"following_url": "https://api.github.com/users/ereday/following{/other_user}",
"gists_url": "https://api.github.com/users/ereday/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ereday/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ereday/subscriptions",
"organizations_url": "https://api.github.com/users/ereday/orgs",
"repos_url": "https://api.github.com/users/ereday/repos",
"events_url": "https://api.github.com/users/ereday/events{/privacy}",
"received_events_url": "https://api.github.com/users/ereday/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should do `from pytorch_transformers.modeling_bert import BertPreTrainedModel`\r\n\r\nI'll add these to the main `__init__.py`",
"Thank you for the answer @thomwolf . It solved that error but now I'm getting another one (which wasn't there when I was using previous versions of the repository): `TypeError: unhashable type: 'BertConfig'` what could be wrong ? ",
"We need a full error log and more details."
] | 1,564 | 1,566 | 1,564 | NONE | null | Hi,
After I updated my environment today, I cannot run my old code anymore. I think I followed all the steps in the migration section of the README, but the following code still gives me the `NameError: name 'BertPreTrainedModel' is not defined` error. To migrate to the latest version, I cloned the repository and ran the `pip install --editable .` command within the directory.
Here is the code:
```python
import torch
from pytorch_transformers import *

# Note: in this release BertPreTrainedModel is not re-exported at the package
# top level, so the star import above does not define it (see the maintainer's
# comment); it lives in pytorch_transformers.modeling_bert.

class BertForMultiLabelSequenceClassification(BertPreTrainedModel):
    def __init__(self, config, num_labels=2):
        super(BertForMultiLabelSequenceClassification, self).__init__(config)
        self.num_labels = num_labels
        self.bert = BertModel("bert-base-multilingual-cased")
        self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)
        self.classifier = torch.nn.Linear(config.hidden_size, num_labels)
        self.apply(self.init_bert_weights)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None):
        _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False)
        # pooled_output = outputs[-1]  # stray line: `outputs` is never defined here
        pooled_output = self.dropout(pooled_output)
        logits = self.classifier(pooled_output)
        return logits

args = {
    "train_size": -1,
    "val_size": -1,
    "bert_model": "bert-base-multilingual-cased",
    "do_lower_case": False,
    "max_seq_length": 100,
    "do_train": True,
    "do_eval": True,
    "train_batch_size": 32,
    "eval_batch_size": 32,
    "learning_rate": 3e-5,
    "num_train_epochs": 20,
    "warmup_proportion": 0.1,
    "no_cuda": False,
    "local_rank": -1,
    "seed": 42,
}

num_labels = 2
model = BertForMultiLabelSequenceClassification.from_pretrained(args["bert_model"], num_labels)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/908/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/907/comments | https://api.github.com/repos/huggingface/transformers/issues/907/events | https://github.com/huggingface/transformers/pull/907 | 473,361,809 | MDExOlB1bGxSZXF1ZXN0MzAxNTQzNTcy | 907 | Fix convert to tf | {
"login": "dhpollack",
"id": 368699,
"node_id": "MDQ6VXNlcjM2ODY5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/368699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhpollack",
"html_url": "https://github.com/dhpollack",
"followers_url": "https://api.github.com/users/dhpollack/followers",
"following_url": "https://api.github.com/users/dhpollack/following{/other_user}",
"gists_url": "https://api.github.com/users/dhpollack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhpollack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhpollack/subscriptions",
"organizations_url": "https://api.github.com/users/dhpollack/orgs",
"repos_url": "https://api.github.com/users/dhpollack/repos",
"events_url": "https://api.github.com/users/dhpollack/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhpollack/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/907?src=pr&el=h1) Report\n> Merging [#907](https://codecov.io/gh/huggingface/pytorch-transformers/pull/907?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/46cc9dd2b51a152b2e262ec12e40dddd13235aba?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/907?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #907 +/- ##\n=======================================\n Coverage 79.03% 79.03% \n=======================================\n Files 34 34 \n Lines 6234 6234 \n=======================================\n Hits 4927 4927 \n Misses 1307 1307\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/907?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/907?src=pr&el=footer). Last update [46cc9dd...09ecf22](https://codecov.io/gh/huggingface/pytorch-transformers/pull/907?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Ok thanks David!"
] | 1,564 | 1,565 | 1,565 | CONTRIBUTOR | null | I struggled with this same problem for a long time. The naive `assign`-op way puts all of the weights into both the checkpoint file (`.ckpt.data-XXXXX-of-YYYYY`) and the meta file (`.ckpt.meta`). This is because `assign` adds an operation to the graph. So basically, you have two instructions in your meta file: one that initializes the variable with random values, and another that assigns the pytorch values to these tensors. But really, you want to initialize everything once with the meta file and then read the data file, which should have your pytorch weights in it. TensorFlow hides this functionality deep within its source code, and every answer on Stack Overflow tells one to use `assign`. The `tf.keras.backend.set_value` function, however, simply replaces the values of a variable. That said, this function makes some assumptions about your session and your graph, so I had to change your code a bit. Long story short, doing easy things in TensorFlow is hard.
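For illustration, a minimal sketch of the two approaches (the variable shape and names are made up; the commented `assign` line assumes a TF1-style session):
```python
import numpy as np
import tensorflow as tf

tf_var = tf.Variable(np.zeros((3, 3), dtype=np.float32), name="bert/kernel")
torch_weights = np.ones((3, 3), dtype=np.float32)  # stand-in for a PyTorch tensor

# Naive way: tf.assign adds an op holding the values to the graph itself,
# so the weights end up serialized in the .ckpt.meta file as well.
# sess.run(tf.assign(tf_var, torch_weights))

# This PR's way: write the values into the variable without growing the graph.
tf.keras.backend.set_value(tf_var, torch_weights)
```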
So what's the difference?
1. the meta file will be about 1 MB instead of 400+ MB
2. the script runs in about 10 seconds instead of 3 minutes
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/907/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/907/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/907",
"html_url": "https://github.com/huggingface/transformers/pull/907",
"diff_url": "https://github.com/huggingface/transformers/pull/907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/907.patch",
"merged_at": 1565002505000
} |
https://api.github.com/repos/huggingface/transformers/issues/906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/906/comments | https://api.github.com/repos/huggingface/transformers/issues/906/events | https://github.com/huggingface/transformers/issues/906 | 473,233,676 | MDU6SXNzdWU0NzMyMzM2NzY= | 906 | cuda out of memory | {
"login": "Ravikiran2611",
"id": 40524495,
"node_id": "MDQ6VXNlcjQwNTI0NDk1",
"avatar_url": "https://avatars.githubusercontent.com/u/40524495?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ravikiran2611",
"html_url": "https://github.com/Ravikiran2611",
"followers_url": "https://api.github.com/users/Ravikiran2611/followers",
"following_url": "https://api.github.com/users/Ravikiran2611/following{/other_user}",
"gists_url": "https://api.github.com/users/Ravikiran2611/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ravikiran2611/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ravikiran2611/subscriptions",
"organizations_url": "https://api.github.com/users/Ravikiran2611/orgs",
"repos_url": "https://api.github.com/users/Ravikiran2611/repos",
"events_url": "https://api.github.com/users/Ravikiran2611/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ravikiran2611/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Try to implement gradient accumulation during training, instead of updating parameters in each iteration. Please check this nice and easy-to-follow tutorial by @thomwolf [here](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255) . I used this technique with GPT-2 small, with a dataset of ~350k, with single GPU and it worked completely fine.",
"thanks @sajidrahman \r\ni will go through it ",
"Edit: There is a parameter now for `gradient_accumulation_steps`... this can be adjusted to achieve gradient accumulation? ",
"The problem is about batch size 20. Batch sizes more than 4 are something that doesn't fit most of (single) gpu's for many models. Check this: https://github.com/huggingface/transformers/issues/2016#issuecomment-561093186 . Some cases you cannot make fit even 1 batch to memory. As @sajidrahman mentioned, [this](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255) is a good point to start.\r\nThe issue can be closed if everything is clear?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Even in my case problem was my batch size of 8, worked after changing it to 2."
] | 1,564 | 1,598 | 1,581 | NONE | null | ```python
import torch
from pytorch_transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

import csv

data = []
label = []
with open('Training.csv', 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        data.append("[CLS] " + row[1] + " [SEP]")
        label.append(int(row[2]))

def tokenize_data(data):  # for numericalizing the text
    for sub in range(len(data)):
        data_tokenized = tokenizer.encode(data[sub])
        data[sub] = data_tokenized
    return data

def make_batches(data):  # for padding all the sentences to the same length
    max_len = len(data[-1])
    for i in range(len(data)):
        if len(data[i]) < max_len:
            iter = max_len - len(data[i])
            for j in range(iter):
                data[i].append(102)
    return data

optim = torch.optim.Adam(model.parameters(), lr=2e-05, betas=(0.9, 0.98), eps=1e-9)

import numpy as np

model = model.cuda()
model.train()
# model = torch.nn.DataParallel(model)
batch_size = 20
for i in range(0, len(data), batch_size):
    print(i)
    if True:
        batch = data[i:i+batch_size]
        batch = tokenize_data(batch)
        batch.sort(key=lambda x: len(x))
        batch = make_batches(batch)
        batch = torch.tensor(batch)
        target = torch.tensor(label[i:i+batch_size])
        inp = batch.cuda()
        target = target.cuda()
        output = model(inp)
        loss = torch.nn.functional.cross_entropy(output[0].view(-1, output[0].size()[-1]), target.contiguous().view(-1))
        print(loss)
        optim.zero_grad()
        model.zero_grad()
        loss.backward()
        optim.step()
print("success")
```
So the above is my code, and whenever I run it, it gives me an error saying:
```
Traceback (most recent call last):
  File "classification_using_bert.py", line 49, in <module>
    loss.backward()
  File "/home/zlabs-nlp/miniconda3/envs/ravienv/lib/python3.7/site-packages/torch/tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/zlabs-nlp/miniconda3/envs/ravienv/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 42.00 MiB (GPU 0; 10.92 GiB total capacity; 6.34 GiB already allocated; 28.50 MiB free; 392.76 MiB cached)
```
Can anyone tell me what the mistake is?
Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/906/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/905/comments | https://api.github.com/repos/huggingface/transformers/issues/905/events | https://github.com/huggingface/transformers/pull/905 | 473,064,372 | MDExOlB1bGxSZXF1ZXN0MzAxMzA1NjEw | 905 | Bugfix for encoding error during GPT2Tokenizer.from_pretrained('local… | {
"login": "DrStoop",
"id": 19177740,
"node_id": "MDQ6VXNlcjE5MTc3NzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/19177740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrStoop",
"html_url": "https://github.com/DrStoop",
"followers_url": "https://api.github.com/users/DrStoop/followers",
"following_url": "https://api.github.com/users/DrStoop/following{/other_user}",
"gists_url": "https://api.github.com/users/DrStoop/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrStoop/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrStoop/subscriptions",
"organizations_url": "https://api.github.com/users/DrStoop/orgs",
"repos_url": "https://api.github.com/users/DrStoop/repos",
"events_url": "https://api.github.com/users/DrStoop/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrStoop/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905?src=pr&el=h1) Report\n> Merging [#905](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/46cc9dd2b51a152b2e262ec12e40dddd13235aba?src=pr&el=desc) will **increase** coverage by `<.01%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #905 +/- ##\n==========================================\n+ Coverage 79.03% 79.03% +<.01% \n==========================================\n Files 34 34 \n Lines 6234 6235 +1 \n==========================================\n+ Hits 4927 4928 +1 \n Misses 1307 1307\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_gpt2.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2dwdDIucHk=) | `96.69% <100%> (+0.02%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905?src=pr&el=footer). Last update [46cc9dd...f8d9977](https://codecov.io/gh/huggingface/pytorch-transformers/pull/905?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,564 | 1,565 | 1,565 | NONE | null | …/path/to/mode')
BUG DESCRIPTION: Loading the GPT-2 tokenizer from a local path with
GPT2Tokenizer.from_pretrained(pretrained_model_name_or_path='local/path/to/model')
returns the following error, due to an encoding error in json.load():
```
Traceback (most recent call last):
  File "/opt/pycharm-2019.1.3/helpers/pydev/pydevd.py", line 1758, in <module>
    main()
  File "/opt/pycharm-2019.1.3/helpers/pydev/pydevd.py", line 1752, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/opt/pycharm-2019.1.3/helpers/pydev/pydevd.py", line 1147, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/opt/pycharm-2019.1.3/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/developer/AmI/transfer-learning-conv-ai/pytorch_transformer_evaluation.py", line 24, in <module>
    cache_dir=None)
  File "/home/developer/AmI/pytorch-transformers/pytorch_transformers/tokenization_utils.py", line 151, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "/home/developer/AmI/pytorch-transformers/pytorch_transformers/tokenization_utils.py", line 240, in _from_pretrained
    tokenizer = cls(*inputs, **kwargs)
  File "/home/developer/AmI/pytorch-transformers/pytorch_transformers/tokenization_gpt2.py", line 110, in __init__
    self.encoder = json.load(open(vocab_file))
  File "/conda/envs/rapids/lib/python3.6/json/__init__.py", line 296, in load
    return loads(fp.read(),
  File "/conda/envs/rapids/lib/python3.6/encodings/ascii.py", line 26, in decode
    return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 840: ordinal not in range(128)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/905/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/905",
"html_url": "https://github.com/huggingface/transformers/pull/905",
"diff_url": "https://github.com/huggingface/transformers/pull/905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/905.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/904/comments | https://api.github.com/repos/huggingface/transformers/issues/904/events | https://github.com/huggingface/transformers/issues/904 | 473,037,070 | MDU6SXNzdWU0NzMwMzcwNzA= | 904 | AssertionError while using DataParallelModel | {
"login": "sajidrahman",
"id": 4258481,
"node_id": "MDQ6VXNlcjQyNTg0ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4258481?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sajidrahman",
"html_url": "https://github.com/sajidrahman",
"followers_url": "https://api.github.com/users/sajidrahman/followers",
"following_url": "https://api.github.com/users/sajidrahman/following{/other_user}",
"gists_url": "https://api.github.com/users/sajidrahman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sajidrahman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajidrahman/subscriptions",
"organizations_url": "https://api.github.com/users/sajidrahman/orgs",
"repos_url": "https://api.github.com/users/sajidrahman/repos",
"events_url": "https://api.github.com/users/sajidrahman/events{/privacy}",
"received_events_url": "https://api.github.com/users/sajidrahman/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You don't need to use this method here because the models have built-in losses computation.\r\nJust feed the labels and you will get the loss back (see the doc/docstrings of the models).",
"Hi @thomwolf, thanks for the suggestion. After following your advice, I'm not getting the error anymore, but now I'm a bit confused about the `backward()` pass. Now the **losses** variable contains a list of tensors of loss calculations per gpu and I'm not sure how I can enforce each individual model sitting in each gpu to perform backprop. Following is a sample code of what I've done so far and the sample output:\r\n\r\n```\r\nlosses:[[tensor(98.5968, device='cuda:0', grad_fn=<NllLossBackward>), tensor(0.7206, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>)], [tensor(100.5673, device='cuda:1', grad_fn=<NllLossBackward>), tensor(0.6629, device='cuda:1', grad_fn=<BinaryCrossEntropyWithLogitsBackward>)]] \r\n lm_loss: (tensor(98.5968, device='cuda:0', grad_fn=<NllLossBackward>), tensor(100.5673, device='cuda:1', grad_fn=<NllLossBackward>))\r\nclf_loss:(tensor(0.7206, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>), tensor(0.6629, device='cuda:1', grad_fn=<BinaryCrossEntropyWithLogitsBackward>))\r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n<ipython-input-55-10ac591c2408> in <module>\r\n 26 lm_loss, clf_loss = zip(*losses)\r\n 27 print('losses:{} \\n lm_loss: {}\\nclf_loss:{}\\n'.format(losses, lm_loss, clf_loss))\r\n 28 \r\n---> 29 loss = (args.lm_coef * lm_loss.to(device) + clf_loss.to(device)).to(device)\r\n 30 \r\n 31 print(loss)\r\n\r\nAttributeError: 'tuple' object has no attribute 'to'\r\n```\r\nObviously I can deal with this 'tuple' error, but I'm confused what should I do next with this? Should I call `loss.backward()` per each cuda devices? How will I then gather gradient values in that case? Please excuse me for any naive assumptions I'm making here. Your input would be highly appreciated :)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi @thomwolf , I also experience this imbalanced GPU usage when using the trainer function. Can I ask the reason that the DataParallelModel that you discussed on the Medium post, is not applied as default in the trainer function? Thank you."
] | 1,564 | 1,617 | 1,570 | NONE | null | Hi,
I'm trying to use _load balancing_ in a multi-GPU environment, following the tutorial by @thomwolf published on [Medium](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255). I'm fine-tuning GPT-2 small for a classification task. Here are the steps I've followed so far:
1. Copy [parallel.py](https://gist.github.com/thomwolf/7e2407fbd5945f07821adae3d9fd1312?source=post_page---------------------------) into the local directory
2. Add `from torch.nn.parallel.distributed import DistributedDataParallel` to the parallel.py file (otherwise I get a 'DistributedDataParallel' not found error)
3. After loading the pretrained GPT-2 model, define the parallel model:
```
model = DataParallelModel(model, device_ids=[0, 1])
parallel_loss = DataParallelCriterion(model, device_ids=[0,1])
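# Note (based on the tutorial's gist): DataParallelCriterion is meant to wrap
# the loss/criterion module rather than the model, so passing `model` here may
# not be what the gist intends.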
```
4. Now, during training, I got the following error. The complete stack trace is as follows:
> AssertionError Traceback (most recent call last)
> <ipython-input-135-05384873e022> in <module>
> 19
> 20 # losses = model(input_ids, mc_token_ids, lm_labels=lm_labels, mc_labels=mc_labels)
> ---> 21 losses = parallel_loss(input_ids, mc_token_ids, lm_labels=lm_labels, mc_labels=mc_labels)
> 22
> 23 lm_loss, clf_loss = losses
>
> ~/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
>
> 491 result = self._slow_forward(*input, **kwargs)
> 492 else:
> --> 493 result = self.forward(*input, **kwargs)
> 494 for hook in self._forward_hooks.values():
> 495 hook_result = hook(self, input, result)
>
> ~/github_repos/pytorch-pretrained-BERT/examples/parallel.py in forward(self, inputs, *targets, **kwargs)
>
> 158 return self.module(inputs, *targets[0], **kwargs[0])
> 159 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
> --> 160 outputs = _criterion_parallel_apply(replicas, inputs, targets, kwargs)
> 161 #return Reduce.apply(*outputs) / len(outputs)
> 162 #return self.gather(outputs, self.output_device).mean()
>
> ~/github_repos/pytorch-pretrained-BERT/examples/parallel.py in _criterion_parallel_apply(modules, inputs, targets, kwargs_tup, devices)
>
> 165
> 166 def _criterion_parallel_apply(modules, inputs, targets, kwargs_tup=None, devices=None):
> --> 167 assert len(modules) == len(inputs)
> 168 assert len(targets) == len(inputs)
> 169 if kwargs_tup:
>
> AssertionError:
From the stack trace, I'm not sure why the module length needs to equal the inputs length. Am I missing something here? I'm using Python 3.6 with PyTorch version 1.1.0. Any help/pointers would be highly appreciated. Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/904/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/903/comments | https://api.github.com/repos/huggingface/transformers/issues/903/events | https://github.com/huggingface/transformers/issues/903 | 472,817,449 | MDU6SXNzdWU0NzI4MTc0NDk= | 903 | why the acc of chinese model(bert) is just 0.438 | {
"login": "zsk423200",
"id": 18025765,
"node_id": "MDQ6VXNlcjE4MDI1NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18025765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsk423200",
"html_url": "https://github.com/zsk423200",
"followers_url": "https://api.github.com/users/zsk423200/followers",
"following_url": "https://api.github.com/users/zsk423200/following{/other_user}",
"gists_url": "https://api.github.com/users/zsk423200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsk423200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsk423200/subscriptions",
"organizations_url": "https://api.github.com/users/zsk423200/orgs",
"repos_url": "https://api.github.com/users/zsk423200/repos",
"events_url": "https://api.github.com/users/zsk423200/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsk423200/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"i have the same issue as you. Do you have a good solution for it?",
"I met the same problem in multi-labels classification task, have no idea about the problem!",
"@zsk423200 Maybe you can try the \"bert-base-multilingual-cased-pytorch_model\" , it's performance seems better a lot than the pure Chinese ver in my task, just a temporal solu.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,564 | 1,571 | 1,571 | NONE | null | dataset: XNLI-1.0
I ran the XNLI-1.0 dataset, and the result is `acc = 0.43855421686746987`; when I run Google's BERT in TF, the result is `eval_accuracy = 0.7674699`. I use the same epochs and learning rate, and I really don't know why.
I added the XNLI data processor, the same as in the TF BERT version:
```
class XnliProcessor(DataProcessor):
    """Processor for the XNLI data set."""

    def __init__(self):
        self.language = "zh"

    def get_train_examples(self, data_dir):
        """See base class."""
        lines = self._read_tsv(
            os.path.join(data_dir, "multinli",
                         "multinli.train.%s.tsv" % self.language))
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = "train-%d" % (i)
            text_a = line[0]
            text_b = line[1]
            label = line[2]
            if label == "contradictory":
                label = "contradiction"
            examples.append(
                InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples

    def get_dev_examples(self, data_dir):
        """See base class."""
        lines = self._read_tsv(os.path.join(data_dir, "xnli.dev.tsv"))
        examples = []
        for (i, line) in enumerate(lines):
            if i == 0:
                continue
            guid = "dev-%d" % (i)
            language = line[0]
            if language != self.language:
                continue
            text_a = line[6]
            text_b = line[7]
            label = line[1]
            examples.append(
                InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
        return examples

    def get_labels(self):
        """See base class."""
        return ["contradiction", "entailment", "neutral"]
```
My PyTorch command is:
```
python run_glue.py --model_type bert --model_name_or_path bert-base-chinese --task_name XNLI --do_train --do_eval --do_lower_case --data_dir $XNLI_DIR --max_seq_length 128 --per_gpu_eval_batch_size=8 --per_gpu_train_batch_size=8 --learning_rate 5e-5 --num_train_epochs 2.0 --output_dir /tmp/MRPC4/ --overwrite_output_dir --save_steps=1000
```
My TF command is:
```
python run_classifier.py --task_name=XNLI --do_train=true --do_eval=true --data_dir=$XNLI_DIR --vocab_file=$BERT_BASE_DIR/vocab.txt --bert_config_file=$BERT_BASE_DIR/bert_config.json --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt --max_seq_length=128 --train_batch_size=32 --learning_rate=5e-5 --num_train_epochs=2.0 --output_dir=/tmp/xnli_output
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/903/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/902/comments | https://api.github.com/repos/huggingface/transformers/issues/902/events | https://github.com/huggingface/transformers/issues/902 | 472,804,028 | MDU6SXNzdWU0NzI4MDQwMjg= | 902 | Torchscript Trace slower with C++ runtime environment. | {
"login": "sukuya",
"id": 4861350,
"node_id": "MDQ6VXNlcjQ4NjEzNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4861350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sukuya",
"html_url": "https://github.com/sukuya",
"followers_url": "https://api.github.com/users/sukuya/followers",
"following_url": "https://api.github.com/users/sukuya/following{/other_user}",
"gists_url": "https://api.github.com/users/sukuya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sukuya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sukuya/subscriptions",
"organizations_url": "https://api.github.com/users/sukuya/orgs",
"repos_url": "https://api.github.com/users/sukuya/repos",
"events_url": "https://api.github.com/users/sukuya/events{/privacy}",
"received_events_url": "https://api.github.com/users/sukuya/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"2 possible reasons:\r\n1. the first time you run `forward` will do some preheating work, maybe you should exclude the first run.\r\n2. try exclude `toTuple`\r\n\r\nAccording to my experience, jit with python or c++ will cost almost the same time.",
"@Meteorix Forward is called once before the loop, are you talking about something else. \r\nExcluding `toTuple` doesn't help. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,564 | 1,571 | 1,571 | CONTRIBUTOR | null | I traced the BERT model from the PyTorch-Transformers library and am getting the following results for 10 iterations.
a) Using the Python runtime for running the forward: 979,292 µs
```
import time
import torch

model = torch.jit.load('models_backup/2_2.pt')
x = torch.randint(2000, (1, 14), dtype=torch.long, device='cpu')
start = time.time()
for i in range(10):
    model(x)
end = time.time()
print((end - start)*1000000, "µs")
```
b) Using the C++ runtime for running the forward: 3,333,758 µs, which is almost 3× the Python time
```
std::vector<torch::jit::IValue> input;
torch::Tensor x = torch::randint(index_max, {1, inputsize}, torch::dtype(torch::kInt64).device(torch::kCPU));
input.push_back(x);
// Execute the model and turn its output into a tensor.
// (This first call also serves as a warm-up; only the loop below is timed.)
auto outputs = module->forward(input).toTuple();
auto start = chrono::steady_clock::now();
for (int16_t i = 0; i < 10; ++i)
{
    outputs = module->forward(input).toTuple();
}
auto end = chrono::steady_clock::now();
cout << "Elapsed time in microseconds : "
     << chrono::duration_cast<chrono::microseconds>(end - start).count()
     << " µs" << endl;
```
@thomwolf any suggestions on what am I missing ? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/902/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/901/comments | https://api.github.com/repos/huggingface/transformers/issues/901/events | https://github.com/huggingface/transformers/issues/901 | 472,768,061 | MDU6SXNzdWU0NzI3NjgwNjE= | 901 | bug: it is broken to use tokenizer path | {
"login": "zsk423200",
"id": 18025765,
"node_id": "MDQ6VXNlcjE4MDI1NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18025765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsk423200",
"html_url": "https://github.com/zsk423200",
"followers_url": "https://api.github.com/users/zsk423200/followers",
"following_url": "https://api.github.com/users/zsk423200/following{/other_user}",
"gists_url": "https://api.github.com/users/zsk423200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsk423200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsk423200/subscriptions",
"organizations_url": "https://api.github.com/users/zsk423200/orgs",
"repos_url": "https://api.github.com/users/zsk423200/repos",
"events_url": "https://api.github.com/users/zsk423200/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsk423200/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Had the same issue when passing the exact path of the vocabulary file. Fixed it by just passing the name of the directory that contains the vocabulary file (in my case it was `vocab.txt`).",
"Good catch.\r\n\r\nFor non-BPE models with a single vocabulary file (Bert, XLNet, Transformer-XL) we can fix this workflow so you can provide a direct path.\r\n\r\nUpdating this."
] | 1,564 | 1,564 | 1,564 | NONE | null | Run run_glue.py with the tokenizer_name parameter:
`--tokenizer_name=/path/bert-base-chinese-vocab.txt`
but I get the following error:
```
Traceback (most recent call last):
File "run_glue.py", line 485, in <module>
main()
File "run_glue.py", line 418, in main
tokenizer = tokenizer_class.from_pretrained(args.tokenizer_name if args.tokenizer_name else args.model_name_or_path, do_lower_case=args.do_lower_case)
File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/tokenization_bert.py", line 200, in from_pretrained
return super(BertTokenizer, cls)._from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/tokenization_utils.py", line 234, in _from_pretrained
special_tokens_map = json.load(open(special_tokens_map_file, encoding="utf-8"))
File "/opt/conda/lib/python3.6/json/__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/opt/conda/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/opt/conda/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/conda/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1)
```
I debugged the variable resolved_vocab_files; every entry has the same value:
```
{'added_tokens_file': '/home/zhoushengkai/script/NLP/pytorch-transformers/pytorch_transformers/vocab_files/bert-base-chinese-vocab.txt', 'special_tokens_map_file': '/home/zhoushengkai/script/NLP/pytorch-transformers/pytorch_transformers/vocab_files/bert-base-chinese-vocab.txt', 'vocab_file': '/home/zhoushengkai/script/NLP/pytorch-transformers/pytorch_transformers/vocab_files/bert-base-chinese-vocab.txt'}
```
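For reference, the workaround from the first comment, as a minimal sketch (assuming the directory contains the vocabulary file saved as vocab.txt):
```python
from pytorch_transformers import BertTokenizer

# works: pass the directory that contains vocab.txt
tokenizer = BertTokenizer.from_pretrained("/path/to/vocab_dir")

# fails with the JSONDecodeError above: passing the vocabulary file directly
# tokenizer = BertTokenizer.from_pretrained("/path/bert-base-chinese-vocab.txt")
```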
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/901/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/900/comments | https://api.github.com/repos/huggingface/transformers/issues/900/events | https://github.com/huggingface/transformers/issues/900 | 472,748,401 | MDU6SXNzdWU0NzI3NDg0MDE= | 900 | SpanBERT support | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"are we going to get this? :) thanks :)",
"Fyi https://github.com/mandarjoshi90/coref#pretrained-coreference-models describes how to obtain the coreference models that should contain SpanBERT.\r\n",
"@ArneBinder Thanks for that hint!\r\n\r\nI downloaded the *SpanBERT* (base) model. Unfortunately, the TF checkpoint conversion throws the following error message:\r\n\r\n```bash\r\nINFO:pytorch_transformers.modeling_bert:Loading TF weight width_scores/output_weights/Adam_1 with shape [3000, 1]\r\nINFO:pytorch_transformers.modeling_bert:Skipping antecedent_distance_emb\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/pytorch_transformers\", line 11, in <module>\r\n load_entry_point('pytorch-transformers', 'console_scripts', 'pytorch_transformers')()\r\n File \"/mnt/pytorch-transformers/pytorch_transformers/__main__.py\", line 30, in main\r\n convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)\r\n File \"/mnt/pytorch-transformers/pytorch_transformers/convert_tf_checkpoint_to_pytorch.py\", line 36, in convert_tf_checkpoint_to_pytorch\r\n load_tf_weights_in_bert(model, config, tf_checkpoint_path)\r\n File \"/mnt/pytorch-transformers/pytorch_transformers/modeling_bert.py\", line 111, in load_tf_weights_in_bert\r\n assert pointer.shape == array.shape\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 591, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'BertForPreTraining' object has no attribute 'shape'\r\n```\r\n\r\nI think some variables must be skipped, so a debugging session is unavoidable 😅 ",
"Hi @stefan-it, the SpanBERT authors shared their (~`pytorch-transformers`-compatible) weights with us, so if you'd be interested we can send them your way so you can experiment/integrate them here.\r\n\r\nLet me know!",
"@julien-c this would be awesome 🤗 I would really like to do some experiments (mainly NER and PoS tagging) - would be great if you can share the weights (my mail is `[email protected]`) - thank you in advance :heart: ",
"Hi @julien-c, I would also like to receive the spanbert pytorch-compatible weights for semantic tasks like coref. could you send it to me too? my mail is [email protected]. many thanks.",
"You can have a look here, the official implementation has just been released: https://github.com/facebookresearch/SpanBERT",
"Well, two preliminary experiments (SpanBERT base) on CoNLL-2003 show a difference of ~7.8% compared to a BERT (base, cased) model 😱 So maybe this has something to do with the named entity masking 🤔 But I'll investigate that further this weekend...",
"Update on that: I tried SpanBERT for PoS tagging and the results are pretty close to DistilBERT. Here's one run over the Universal Dependencies v1.2:\r\n\r\n| Model | Dev | Test\r\n| ---------------------------------------------------------- | --------- | ---------\r\n| RoBERTa (large) | **97.80** | **97.75**\r\n| SpanBERT (large) | 96.48 | 96.61\r\n| BERT (large, cased) | 97.35 | 97.20\r\n| DistilBERT (uncased) | 96.64 | 96.70\r\n| [Plank et. al (2016)](https://arxiv.org/abs/1604.05529) | - | 95.52\r\n| [Yasunaga et. al (2017)](https://arxiv.org/abs/1711.04903) | - | 95.82",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,564 | 1,574 | 1,574 | COLLABORATOR | null | Hi,
I think the new *SpanBERT* model should also be supported in `pytorch-transformers` 😅
> We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text.
Paper can be found [here](https://arxiv.org/abs/1907.10529).
The model is not released yet; I'll update this issue here whenever it becomes available :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/900/reactions",
"total_count": 14,
"+1": 14,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/900/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/899/comments | https://api.github.com/repos/huggingface/transformers/issues/899/events | https://github.com/huggingface/transformers/pull/899 | 472,745,736 | MDExOlB1bGxSZXF1ZXN0MzAxMDQ1OTQ2 | 899 | Fixed import to use torchscript flag. | {
"login": "sukuya",
"id": 4861350,
"node_id": "MDQ6VXNlcjQ4NjEzNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4861350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sukuya",
"html_url": "https://github.com/sukuya",
"followers_url": "https://api.github.com/users/sukuya/followers",
"following_url": "https://api.github.com/users/sukuya/following{/other_user}",
"gists_url": "https://api.github.com/users/sukuya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sukuya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sukuya/subscriptions",
"organizations_url": "https://api.github.com/users/sukuya/orgs",
"repos_url": "https://api.github.com/users/sukuya/repos",
"events_url": "https://api.github.com/users/sukuya/events{/privacy}",
"received_events_url": "https://api.github.com/users/sukuya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/899?src=pr&el=h1) Report\n> Merging [#899](https://codecov.io/gh/huggingface/pytorch-transformers/pull/899?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/067923d3267325f525f4e46f357360c191ba562e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/899?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #899 +/- ##\n=======================================\n Coverage 79.03% 79.03% \n=======================================\n Files 34 34 \n Lines 6234 6234 \n=======================================\n Hits 4927 4927 \n Misses 1307 1307\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/899?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/899?src=pr&el=footer). Last update [067923d...e1e2ab3](https://codecov.io/gh/huggingface/pytorch-transformers/pull/899?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,564 | 1,564 | 1,564 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/899/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/899",
"html_url": "https://github.com/huggingface/transformers/pull/899",
"diff_url": "https://github.com/huggingface/transformers/pull/899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/899.patch",
"merged_at": 1564059802000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/898/comments | https://api.github.com/repos/huggingface/transformers/issues/898/events | https://github.com/huggingface/transformers/issues/898 | 472,699,140 | MDU6SXNzdWU0NzI2OTkxNDA= | 898 | fp16 still has the problem | {
"login": "zsk423200",
"id": 18025765,
"node_id": "MDQ6VXNlcjE4MDI1NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18025765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsk423200",
"html_url": "https://github.com/zsk423200",
"followers_url": "https://api.github.com/users/zsk423200/followers",
"following_url": "https://api.github.com/users/zsk423200/following{/other_user}",
"gists_url": "https://api.github.com/users/zsk423200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsk423200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsk423200/subscriptions",
"organizations_url": "https://api.github.com/users/zsk423200/orgs",
"repos_url": "https://api.github.com/users/zsk423200/repos",
"events_url": "https://api.github.com/users/zsk423200/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsk423200/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"yes, I fixed it in #896 and waiting for author to merge..\r\nbut I still can't figure out why fp16 didn't save memory and didn't speed up....",
"Merged",
"close"
] | 1,564 | 1,564 | 1,564 | NONE | null | Hello, as mentioned in #868 and #871, fp16 is broken; you fixed it in master once, but I'm afraid there is still a problem: DataParallel also needs to be applied after amp.initialize(). I reviewed the apex code and found that amp does not accept models that are already wrapped in a parallel type:
```
def check_models(models):
    for model in models:
        parallel_type = None
        if isinstance(model, torch.nn.parallel.DistributedDataParallel):
            parallel_type = "torch.nn.parallel.DistributedDataParallel"
        if isinstance(model, apex_DDP):
            parallel_type = "apex.parallel.DistributedDataParallel"
        if isinstance(model, torch.nn.parallel.DataParallel):
            parallel_type = "torch.nn.parallel.DataParallel"
        if parallel_type is not None:
            raise RuntimeError("Incoming model is an instance of {}. ".format(parallel_type) +
                               "Parallel wrappers should only be applied to the model(s) AFTER \n"
                               "the model(s) have been returned from amp.initialize.")
```
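In other words, the ordering has to be as follows (a minimal sketch; the stand-in model, optimizer, and `opt_level` are illustrative):
```python
import torch
from apex import amp

model = torch.nn.Linear(10, 10).cuda()  # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# amp.initialize must see the bare model first...
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
# ...and only afterwards may the DataParallel wrapper be applied:
model = torch.nn.DataParallel(model)
```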
The other question: after I fixed the problem and could run with fp16, I found that it takes the same time and GPU memory as before. Why?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/898/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/897/comments | https://api.github.com/repos/huggingface/transformers/issues/897/events | https://github.com/huggingface/transformers/pull/897 | 472,682,991 | MDExOlB1bGxSZXF1ZXN0MzAwOTk2NDM3 | 897 | Fix FileNotFoundError when running on SQuAD-v1.1 | {
"login": "bzantium",
"id": 19511788,
"node_id": "MDQ6VXNlcjE5NTExNzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19511788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bzantium",
"html_url": "https://github.com/bzantium",
"followers_url": "https://api.github.com/users/bzantium/followers",
"following_url": "https://api.github.com/users/bzantium/following{/other_user}",
"gists_url": "https://api.github.com/users/bzantium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bzantium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bzantium/subscriptions",
"organizations_url": "https://api.github.com/users/bzantium/orgs",
"repos_url": "https://api.github.com/users/bzantium/repos",
"events_url": "https://api.github.com/users/bzantium/events{/privacy}",
"received_events_url": "https://api.github.com/users/bzantium/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate to #882 "
] | 1,564 | 1,564 | 1,564 | CONTRIBUTOR | null | At `utils_squad_evaluate.py` line 291, regardless of whether `version_2_with_negative` is True or False, it tries to load `output_null_log_odds_file`, which is not saved when `version_2_with_negative` is False. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/897/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/897",
"html_url": "https://github.com/huggingface/transformers/pull/897",
"diff_url": "https://github.com/huggingface/transformers/pull/897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/897.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/896/comments | https://api.github.com/repos/huggingface/transformers/issues/896/events | https://github.com/huggingface/transformers/pull/896 | 472,668,372 | MDExOlB1bGxSZXF1ZXN0MzAwOTg0Nzk0 | 896 | fix multi-gpu training bug when using fp16 | {
"login": "zijunsun",
"id": 20966464,
"node_id": "MDQ6VXNlcjIwOTY2NDY0",
"avatar_url": "https://avatars.githubusercontent.com/u/20966464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijunsun",
"html_url": "https://github.com/zijunsun",
"followers_url": "https://api.github.com/users/zijunsun/followers",
"following_url": "https://api.github.com/users/zijunsun/following{/other_user}",
"gists_url": "https://api.github.com/users/zijunsun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijunsun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijunsun/subscriptions",
"organizations_url": "https://api.github.com/users/zijunsun/orgs",
"repos_url": "https://api.github.com/users/zijunsun/repos",
"events_url": "https://api.github.com/users/zijunsun/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijunsun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks, can you update `run_squad` similarly?",
"> Thanks, can you update `run_squad` similarly?\r\n\r\nupdated already.",
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/896?src=pr&el=h1) Report\n> Merging [#896](https://codecov.io/gh/huggingface/pytorch-transformers/pull/896?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/067923d3267325f525f4e46f357360c191ba562e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/896?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #896 +/- ##\n=======================================\n Coverage 79.03% 79.03% \n=======================================\n Files 34 34 \n Lines 6234 6234 \n=======================================\n Hits 4927 4927 \n Misses 1307 1307\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/896?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/896?src=pr&el=footer). Last update [067923d...f0aeb7a](https://codecov.io/gh/huggingface/pytorch-transformers/pull/896?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks a lot!"
] | 1,564 | 1,564 | 1,564 | CONTRIBUTOR | null | Multi-GPU training (`torch.nn.DataParallel`) wrapping should also come after the apex fp16 initialization. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/896/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/896",
"html_url": "https://github.com/huggingface/transformers/pull/896",
"diff_url": "https://github.com/huggingface/transformers/pull/896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/896.patch",
"merged_at": 1564162262000
} |
https://api.github.com/repos/huggingface/transformers/issues/895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/895/comments | https://api.github.com/repos/huggingface/transformers/issues/895/events | https://github.com/huggingface/transformers/pull/895 | 472,662,788 | MDExOlB1bGxSZXF1ZXN0MzAwOTgwMzg2 | 895 | fix a bug of saving added tokens | {
"login": "askerlee",
"id": 1575461,
"node_id": "MDQ6VXNlcjE1NzU0NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1575461?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/askerlee",
"html_url": "https://github.com/askerlee",
"followers_url": "https://api.github.com/users/askerlee/followers",
"following_url": "https://api.github.com/users/askerlee/following{/other_user}",
"gists_url": "https://api.github.com/users/askerlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/askerlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/askerlee/subscriptions",
"organizations_url": "https://api.github.com/users/askerlee/orgs",
"repos_url": "https://api.github.com/users/askerlee/repos",
"events_url": "https://api.github.com/users/askerlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/askerlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895?src=pr&el=h1) Report\n> Merging [#895](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/067923d3267325f525f4e46f357360c191ba562e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #895 +/- ##\n=======================================\n Coverage 79.03% 79.03% \n=======================================\n Files 34 34 \n Lines 6234 6234 \n=======================================\n Hits 4927 4927 \n Misses 1307 1307\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `86.53% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895?src=pr&el=footer). Last update [067923d...c9a7b29](https://codecov.io/gh/huggingface/pytorch-transformers/pull/895?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks, that was fixed in #893 "
] | 1,564 | 1,564 | 1,564 | NONE | null | Refer to the code that loads `added_tokens.json`:
`added_tok_encoder = json.load(open(added_tokens_file, encoding="utf-8"))`
We can see that `added_tokens_encoder` should be saved in `added_tokens.json`. But the original code saved `added_tokens_decoder`.
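A minimal sketch of the corrected save, with `save_added_tokens` as a hypothetical standalone helper (the actual change belongs inside `save_pretrained`):
```python
import json

def save_added_tokens(tokenizer, added_tokens_file):
    # serialize the token -> id map (the encoder), which is exactly the
    # mapping that the loading code above reads back via json.load
    with open(added_tokens_file, 'w', encoding='utf-8') as f:
        f.write(json.dumps(tokenizer.added_tokens_encoder, ensure_ascii=False))
```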
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/895/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/895",
"html_url": "https://github.com/huggingface/transformers/pull/895",
"diff_url": "https://github.com/huggingface/transformers/pull/895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/895.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/894/comments | https://api.github.com/repos/huggingface/transformers/issues/894/events | https://github.com/huggingface/transformers/issues/894 | 472,642,820 | MDU6SXNzdWU0NzI2NDI4MjA= | 894 | Sequence length more than 512 | {
"login": "nayakt",
"id": 15123057,
"node_id": "MDQ6VXNlcjE1MTIzMDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/15123057?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nayakt",
"html_url": "https://github.com/nayakt",
"followers_url": "https://api.github.com/users/nayakt/followers",
"following_url": "https://api.github.com/users/nayakt/following{/other_user}",
"gists_url": "https://api.github.com/users/nayakt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nayakt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nayakt/subscriptions",
"organizations_url": "https://api.github.com/users/nayakt/orgs",
"repos_url": "https://api.github.com/users/nayakt/repos",
"events_url": "https://api.github.com/users/nayakt/events{/privacy}",
"received_events_url": "https://api.github.com/users/nayakt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"https://github.com/google-research/bert/issues/27#issuecomment-435265194",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,564 | 1,569 | 1,569 | NONE | null | Hi,
My dataset has sequences with more than 512 words, and with WordPiece tokenization the sequence length goes beyond 512 tokens. How can I handle this issue with BERT? (A sketch of the sliding-window workaround I have seen suggested is below.)
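The sliding-window sketch (pure Python; BERT is then run per window and the per-window predictions pooled):
```python
def chunk_ids(token_ids, max_len=512, stride=128):
    # overlapping windows over the WordPiece ids; [CLS]/[SEP] handling is
    # omitted for brevity, and the window/stride sizes are illustrative
    windows, step = [], max_len - stride
    for start in range(0, len(token_ids), step):
        windows.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
    return windows
```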
Regards
Tapas | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/894/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/893/comments | https://api.github.com/repos/huggingface/transformers/issues/893/events | https://github.com/huggingface/transformers/pull/893 | 472,602,474 | MDExOlB1bGxSZXF1ZXN0MzAwOTM0NDgy | 893 | make save_pretrained do the right thing with added tokens | {
"login": "joelgrus",
"id": 1308313,
"node_id": "MDQ6VXNlcjEzMDgzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1308313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joelgrus",
"html_url": "https://github.com/joelgrus",
"followers_url": "https://api.github.com/users/joelgrus/followers",
"following_url": "https://api.github.com/users/joelgrus/following{/other_user}",
"gists_url": "https://api.github.com/users/joelgrus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joelgrus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joelgrus/subscriptions",
"organizations_url": "https://api.github.com/users/joelgrus/orgs",
"repos_url": "https://api.github.com/users/joelgrus/repos",
"events_url": "https://api.github.com/users/joelgrus/events{/privacy}",
"received_events_url": "https://api.github.com/users/joelgrus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893?src=pr&el=h1) Report\n> Merging [#893](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/067923d3267325f525f4e46f357360c191ba562e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #893 +/- ##\n=======================================\n Coverage 79.03% 79.03% \n=======================================\n Files 34 34 \n Lines 6234 6234 \n=======================================\n Hits 4927 4927 \n Misses 1307 1307\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `86.53% <0%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893?src=pr&el=footer). Last update [067923d...ae152ce](https://codecov.io/gh/huggingface/pytorch-transformers/pull/893?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed, thanks Joel!"
] | 1,564 | 1,564 | 1,564 | CONTRIBUTOR | null | Right now it's dumping the *decoder* when it should be dumping the *encoder*, and then (for obvious reasons) you get an error when you try to load with `from_pretrained` using that dump. A quick round-trip that shows it (a sketch; the path is illustrative):
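```python
from pytorch_transformers import BertTokenizer

tok = BertTokenizer.from_pretrained('bert-base-uncased')
tok.add_tokens(['[NEW1]'])
tok.save_pretrained('/tmp/tok')   # writes added_tokens.json

# before this fix the file held the id -> token map (the decoder), so
# reloading fails or mis-maps; after the fix this round-trips cleanly
tok2 = BertTokenizer.from_pretrained('/tmp/tok')
assert tok2.convert_tokens_to_ids('[NEW1]') == tok.convert_tokens_to_ids('[NEW1]')
```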
this PR fixes that. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/893/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/893",
"html_url": "https://github.com/huggingface/transformers/pull/893",
"diff_url": "https://github.com/huggingface/transformers/pull/893.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/893.patch",
"merged_at": 1564059529000
} |
https://api.github.com/repos/huggingface/transformers/issues/892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/892/comments | https://api.github.com/repos/huggingface/transformers/issues/892/events | https://github.com/huggingface/transformers/issues/892 | 472,594,468 | MDU6SXNzdWU0NzI1OTQ0Njg= | 892 | How to add new special token | {
"login": "dchang56",
"id": 24575558,
"node_id": "MDQ6VXNlcjI0NTc1NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/24575558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dchang56",
"html_url": "https://github.com/dchang56",
"followers_url": "https://api.github.com/users/dchang56/followers",
"following_url": "https://api.github.com/users/dchang56/following{/other_user}",
"gists_url": "https://api.github.com/users/dchang56/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dchang56/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dchang56/subscriptions",
"organizations_url": "https://api.github.com/users/dchang56/orgs",
"repos_url": "https://api.github.com/users/dchang56/repos",
"events_url": "https://api.github.com/users/dchang56/events{/privacy}",
"received_events_url": "https://api.github.com/users/dchang56/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"How did you add a new vocab.txt file? ",
"I actually figured it out. I manually replaced one of the unused tokens in the vocab file with [NEW] and added \"additiona_special_tokens\": \"[NEW]\" to the special_tokens.json file in the same directory as the vocab.txt file. It works, but I realized that adding new tokens without the ability to do further pretraining isn't all that useful, especially given small dataset size. I decided not to do it."
] | 1,564 | 1,564 | 1,564 | NONE | null | I noticed the never_split functionality is no longer used to keep track of special tokens to never split on. If I wanted to add a new special token like '[NEW]' so the tokenizer never splits it, how should I go about doing that? (I've already manually added it to vocab.txt by replacing an unused token with [NEW]; now I just need it to never be split. See the sketch just below this issue.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/892/timeline | completed | null | null |
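The sketch referenced above: instead of hand-editing vocab.txt, pytorch-transformers exposes `add_special_tokens`, which registers the token so the tokenizer never splits it. A minimal sketch (the model choice is illustrative):
```python
from pytorch_transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

num_added = tokenizer.add_special_tokens({'additional_special_tokens': ['[NEW]']})
model.resize_token_embeddings(len(tokenizer))  # make room for the new id

print(tokenizer.tokenize('hello [NEW] world'))  # '[NEW]' survives as one token
```
Note that, unlike hand-editing vocab.txt, this appends a new id, so the embedding matrix has to be resized as shown.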
https://api.github.com/repos/huggingface/transformers/issues/891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/891/comments | https://api.github.com/repos/huggingface/transformers/issues/891/events | https://github.com/huggingface/transformers/issues/891 | 472,594,256 | MDU6SXNzdWU0NzI1OTQyNTY= | 891 | BERT: run_squad.py falling over after eval | {
"login": "jforbes14",
"id": 29598836,
"node_id": "MDQ6VXNlcjI5NTk4ODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/29598836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jforbes14",
"html_url": "https://github.com/jforbes14",
"followers_url": "https://api.github.com/users/jforbes14/followers",
"following_url": "https://api.github.com/users/jforbes14/following{/other_user}",
"gists_url": "https://api.github.com/users/jforbes14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jforbes14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jforbes14/subscriptions",
"organizations_url": "https://api.github.com/users/jforbes14/orgs",
"repos_url": "https://api.github.com/users/jforbes14/repos",
"events_url": "https://api.github.com/users/jforbes14/events{/privacy}",
"received_events_url": "https://api.github.com/users/jforbes14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fixed in #882"
] | 1,564 | 1,564 | 1,564 | NONE | null | I'm having an issue fine-tuning BERT with run_squad.py, as it falls over at the end of the evaluation stage. I'm fine-tuning on SQuAD v1.1. Has anyone else encountered the same issue, or is anyone able to point out where I'm going wrong?
```
python run_squad.py \
  --model_type bert \
  --model_name_or_path bert-base-uncased \
  --do_train \
  --do_eval \
  --do_lower_case \
  --overwrite_output_dir \
  --train_file $TRAIN_FILE \
  --predict_file $PREDICT_FILE \
  --learning_rate 2e-5 \
  --num_train_epochs 1.0 \
  --max_seq_length 384 \
  --per_gpu_eval_batch_size=12 \
  --per_gpu_train_batch_size=12 \
  --output_dir /content/SQuAD_for_bert/models/bert_base_uncased_finetuned_script/
```
This completes fine-tuning and runs through evaluation, but then returns:
```
Evaluating: 100% 257/257 [01:29<00:00, 2.86it/s]
07/22/2019 02:17:18 - INFO - utils_squad - Writing predictions to: /content/SQuAD_for_bert/models/bert_base_uncased_finetuned_script/predictions_.json
07/22/2019 02:17:18 - INFO - utils_squad - Writing nbest to: /content/SQuAD_for_bert/models/bert_base_uncased_finetuned_script/nbest_predictions_.json
Traceback (most recent call last):
  File "run_squad.py", line 521, in <module>
    main()
  File "run_squad.py", line 510, in main
    result = evaluate(args, model, tokenizer, prefix=global_step)
  File "run_squad.py", line 257, in evaluate
    results = evaluate_on_squad(evaluate_options)
  File "/content/SQuAD_for_bert/utils_squad_evaluate.py", line 291, in main
    with open(OPTS.na_prob_file) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/content/SQuAD_for_bert/models/bert_base_uncased_finetuned_script/null_odds_.json'
```
(Running this in Google Colab - in case that is of any relevance).
Any help would be greatly appreciated - thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/891/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/890/comments | https://api.github.com/repos/huggingface/transformers/issues/890/events | https://github.com/huggingface/transformers/issues/890 | 472,522,763 | MDU6SXNzdWU0NzI1MjI3NjM= | 890 | PreTrainedTokenizer.from_pretrained should be more general | {
"login": "matt-gardner",
"id": 3291951,
"node_id": "MDQ6VXNlcjMyOTE5NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3291951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matt-gardner",
"html_url": "https://github.com/matt-gardner",
"followers_url": "https://api.github.com/users/matt-gardner/followers",
"following_url": "https://api.github.com/users/matt-gardner/following{/other_user}",
"gists_url": "https://api.github.com/users/matt-gardner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matt-gardner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt-gardner/subscriptions",
"organizations_url": "https://api.github.com/users/matt-gardner/orgs",
"repos_url": "https://api.github.com/users/matt-gardner/repos",
"events_url": "https://api.github.com/users/matt-gardner/events{/privacy}",
"received_events_url": "https://api.github.com/users/matt-gardner/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, this is a nice idea, I was thinking about implementing something like this for another reason (simplifying the task of maintaining `torch.hub` configuration files).\r\n\r\nRegarding the library architecture, I think it's better to make a new (very simple) class, something like `AutoTokenizer` in a new file `tokenizer_auto.py` deriving from `PreTrainedTokenizer` (no need to avoid circular dependencies in this case).\r\n\r\nThen the idea would be to make a new file `modeling_auto.py` as well with something like `AutoModel`, pretty much like `AutoTokenizer`, and `AutoModelForSequenceClassification`, `AutoModelForQuestionAnswering` that would encapsulate standard architectures on top of each model.\r\n\r\nMaybe the `AutoXXX` is not the best name, also thought about `GenericXXX` or `UniversalXXX` but they convey meanings that could be misleading.",
"`{Generic,Universal,Auto}` Tokenizer and Model interfaces would be awesome (I'm also highly interested in that, as I'm currently working on Flair to add support for all six architectures) :heart: ",
"Ok, I've hacked something together for an internal hackathon this week. I'll see if I can pick this up the way you suggest next week, if no one else gets to it first. I also don't know much about your PR requirements, so if someone who's more familiar with this repo wants to pick it up, I wouldn't complain =).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,564 | 1,569 | 1,569 | NONE | null | I'm trying to implement a general interface to any of these transformer models in AllenNLP. I would love to be able to do something like `PreTrainedTokenizer.from_pretrained(model_name)`, and have this work for any model name across any of your implemented models. It looks like what needs to happen for this is to detect which underlying model is being requested, and pass off to that class's `_from_pretrained` method. Does this make sense? In particular, I think the thing that needs to change is here: https://github.com/huggingface/pytorch-transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/tokenization_utils.py#L149-L151
It could be changed to something like:
```python
@classmethod
def from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs):
    if 'bert' in pretrained_model_name_or_path:
        # import BertTokenizer here to avoid circular dependencies
        return BertTokenizer._from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
    # other cases here
    # default to existing behavior if we can't detect the model class
    return cls._from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
```
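With that in place, the caller side would just be (hypothetical until the dispatch exists):
```python
tokenizer = PreTrainedTokenizer.from_pretrained('bert-base-uncased')  # dispatches to BertTokenizer
```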
If this looks right to you, I can put together an initial PR for this.
It would be super helpful if there were similar functionality for models, too, but I haven't gotten far enough yet to worry about how exactly that would work =). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/890/reactions",
"total_count": 5,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/890/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/889/comments | https://api.github.com/repos/huggingface/transformers/issues/889/events | https://github.com/huggingface/transformers/issues/889 | 472,485,595 | MDU6SXNzdWU0NzI0ODU1OTU= | 889 | Increased number of hidden states returned from transformers in latest release | {
"login": "anlsh",
"id": 2720400,
"node_id": "MDQ6VXNlcjI3MjA0MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2720400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anlsh",
"html_url": "https://github.com/anlsh",
"followers_url": "https://api.github.com/users/anlsh/followers",
"following_url": "https://api.github.com/users/anlsh/following{/other_user}",
"gists_url": "https://api.github.com/users/anlsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anlsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anlsh/subscriptions",
"organizations_url": "https://api.github.com/users/anlsh/orgs",
"repos_url": "https://api.github.com/users/anlsh/repos",
"events_url": "https://api.github.com/users/anlsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/anlsh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this is the initial embedding layer (i.e. this is layers \"0\" through 12 or 24). There are a number of small changes that haven't been migrated into the documentation yet. "
] | 1,563 | 1,564 | 1,564 | NONE | null | I noticed an (undocumented?) change in the latest release: namely that transformers now include the pre-encoder input vector in the list returned when `output_hidden_states` is True.
For example, the `hidden_states` output from `BertEncoder` now returns a length-13 list of tensors, whereas it used to return a length-12 list, one entry per encoder layer of BERT-base. A quick way to see it (sketch):
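```python
import torch
from pytorch_transformers import BertModel

model = BertModel.from_pretrained('bert-base-uncased', output_hidden_states=True)
input_ids = torch.tensor([[101, 7592, 102]])  # [CLS] hello [SEP]
outputs = model(input_ids)

hidden_states = outputs[-1]  # (embedding output, layer 1, ..., layer 12)
assert len(hidden_states) == 1 + model.config.num_hidden_layers  # 13
```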
The change does seem to be intentional, as we have the following line in the tests
https://github.com/huggingface/pytorch-transformers/blob/067923d3267325f525f4e46f357360c191ba562e/pytorch_transformers/tests/modeling_common_test.py#L244
Personally I'm not affected, I just wanted to double-check that this was intended behavior | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/889/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/888/comments | https://api.github.com/repos/huggingface/transformers/issues/888/events | https://github.com/huggingface/transformers/pull/888 | 472,431,708 | MDExOlB1bGxSZXF1ZXN0MzAwODQzMDk0 | 888 | Update docs for parameter rename | {
"login": "rococode",
"id": 32279130,
"node_id": "MDQ6VXNlcjMyMjc5MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/32279130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rococode",
"html_url": "https://github.com/rococode",
"followers_url": "https://api.github.com/users/rococode/followers",
"following_url": "https://api.github.com/users/rococode/following{/other_user}",
"gists_url": "https://api.github.com/users/rococode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rococode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rococode/subscriptions",
"organizations_url": "https://api.github.com/users/rococode/orgs",
"repos_url": "https://api.github.com/users/rococode/repos",
"events_url": "https://api.github.com/users/rococode/events{/privacy}",
"received_events_url": "https://api.github.com/users/rococode/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888?src=pr&el=h1) Report\n> Merging [#888](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/067923d3267325f525f4e46f357360c191ba562e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #888 +/- ##\n=======================================\n Coverage 79.03% 79.03% \n=======================================\n Files 34 34 \n Lines 6234 6234 \n=======================================\n Hits 4927 4927 \n Misses 1307 1307\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/modeling\\_openai.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvbW9kZWxpbmdfb3BlbmFpLnB5) | `74.76% <ø> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888?src=pr&el=footer). Last update [067923d...66b15f7](https://codecov.io/gh/huggingface/pytorch-transformers/pull/888?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Yes, thanks @rococode!"
] | 1,563 | 1,564 | 1,564 | CONTRIBUTOR | null | small fix: OpenAIGPTLMHeadModel now accepts `labels` instead of `lm_labels`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/888/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/888",
"html_url": "https://github.com/huggingface/transformers/pull/888",
"diff_url": "https://github.com/huggingface/transformers/pull/888.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/888.patch",
"merged_at": 1564059683000
} |
https://api.github.com/repos/huggingface/transformers/issues/887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/887/comments | https://api.github.com/repos/huggingface/transformers/issues/887/events | https://github.com/huggingface/transformers/issues/887 | 472,368,073 | MDU6SXNzdWU0NzIzNjgwNzM= | 887 | No gradient clipping in AdamW | {
"login": "OlegPlatonov",
"id": 32016523,
"node_id": "MDQ6VXNlcjMyMDE2NTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/32016523?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OlegPlatonov",
"html_url": "https://github.com/OlegPlatonov",
"followers_url": "https://api.github.com/users/OlegPlatonov/followers",
"following_url": "https://api.github.com/users/OlegPlatonov/following{/other_user}",
"gists_url": "https://api.github.com/users/OlegPlatonov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OlegPlatonov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OlegPlatonov/subscriptions",
"organizations_url": "https://api.github.com/users/OlegPlatonov/orgs",
"repos_url": "https://api.github.com/users/OlegPlatonov/repos",
"events_url": "https://api.github.com/users/OlegPlatonov/events{/privacy}",
"received_events_url": "https://api.github.com/users/OlegPlatonov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, the LM fine-tuning example will be refactored.\r\n\r\nAdding the removal of gradient clipping to the list of breaking changes, thanks.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,570 | 1,570 | NONE | null | Hi!
After moving from pytorch-pretrained-bert to pytorch-transformers, I've noticed that the new AdamW optimizer does not perform gradient clipping, even though both BertAdam and OpenAIAdam used to do it. So clipping now has to live in the training loop; roughly the pattern the examples use:
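```python
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # was built into BertAdam
optimizer.step()
optimizer.zero_grad()
```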
Also, in the finetune_on_pregenerated example, bias correction is turned off only for FusedAdam, but not for AdamW. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/887/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/886/comments | https://api.github.com/repos/huggingface/transformers/issues/886/events | https://github.com/huggingface/transformers/issues/886 | 472,326,469 | MDU6SXNzdWU0NzIzMjY0Njk= | 886 | BERT uncased model outputs a tuple instead of a normal pytorch tensor | {
"login": "rajaswa",
"id": 34607601,
"node_id": "MDQ6VXNlcjM0NjA3NjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/34607601?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajaswa",
"html_url": "https://github.com/rajaswa",
"followers_url": "https://api.github.com/users/rajaswa/followers",
"following_url": "https://api.github.com/users/rajaswa/following{/other_user}",
"gists_url": "https://api.github.com/users/rajaswa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajaswa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajaswa/subscriptions",
"organizations_url": "https://api.github.com/users/rajaswa/orgs",
"repos_url": "https://api.github.com/users/rajaswa/repos",
"events_url": "https://api.github.com/users/rajaswa/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajaswa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I was wondering how you managed to resolve this issue? I'm running into a similar problem. :) ",
"Hi, the model outputs are well documented, they're *always* tuples, even if there's a single return value. You can check the documentation [here](https://huggingface.co/transformers/main_classes/output.html).",
"@jacqueline-he did you resolve the issue?",
"@WeeHyongTok Yes I have! I only needed to access what's inside the returned tuple. @LysandreJik's recommendation was very helpful. "
] | 1,563 | 1,606 | 1,563 | NONE | null | While fine-tuning the uncased BERT model for sequence classification as follows:
```
config = BertConfig.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification(config)

for layer, child in model.named_children():
    if layer not in ['classifier']:
        for param in child.parameters():
            param.requires_grad = False

optimizer = optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for (data, target) in (train_loader):
    data, target = data.to(device), target.to(device)
    optimizer.zero_grad()
    output = model(data)
    target = target.float()
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
```
The following error comes up:
```
1348 dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
1349 if dtype is None:
-> 1350 ret = input.log_softmax(dim)
1351 else:
1352 ret = input.log_softmax(dim, dtype=dtype)
AttributeError: 'tuple' object has no attribute 'log_softmax'
```
Here are the model output and target tensors:
```
--> output
(tensor([[-0.2530, 0.0788],
[-0.1457, -0.0624],
[-0.3478, -0.2125],
[-0.1337, 0.2051],
[ 0.0963, 0.3762],
[-0.0910, -0.0527],
[-0.1743, 0.2566],
[-0.2223, 0.4083],
[-0.1602, -0.0012],
[-0.0059, 0.2334],
[-0.3407, -0.1703],
[-0.1359, 0.0776],
[-0.2117, 0.1641],
[-0.3365, -0.1266],
[-0.1682, 0.0504],
[-0.2346, 0.2380]], device='cuda:0', grad_fn=<AddmmBackward>),)
--> target
tensor([0., 0., 1., 1., 1., 0., 0., 0., 1., 1., 1., 0., 0., 0., 1., 0.],
device='cuda:0')
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/886/timeline | completed | null | null |
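On the tuple error just above: the model returns a tuple, so the logits need unpacking, and `CrossEntropyLoss` expects integer class indices rather than floats. A corrected sketch of the loop (model, loader, and device setup as in the original):
```python
for data, target in train_loader:
    data, target = data.to(device), target.to(device)
    optimizer.zero_grad()
    outputs = model(data)                     # a tuple: (logits,) when no labels are passed
    logits = outputs[0]
    loss = criterion(logits, target.long())   # class indices, not floats
    loss.backward()
    optimizer.step()
```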
https://api.github.com/repos/huggingface/transformers/issues/885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/885/comments | https://api.github.com/repos/huggingface/transformers/issues/885/events | https://github.com/huggingface/transformers/issues/885 | 472,307,117 | MDU6SXNzdWU0NzIzMDcxMTc= | 885 | Can lm_finetuning be used with non-english data ? | {
"login": "ereday",
"id": 13196191,
"node_id": "MDQ6VXNlcjEzMTk2MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/13196191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ereday",
"html_url": "https://github.com/ereday",
"followers_url": "https://api.github.com/users/ereday/followers",
"following_url": "https://api.github.com/users/ereday/following{/other_user}",
"gists_url": "https://api.github.com/users/ereday/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ereday/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ereday/subscriptions",
"organizations_url": "https://api.github.com/users/ereday/orgs",
"repos_url": "https://api.github.com/users/ereday/repos",
"events_url": "https://api.github.com/users/ereday/events{/privacy}",
"received_events_url": "https://api.github.com/users/ereday/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ereday Take the `simple_lm_finetuning.py` script for example. It has a `--bert_model` argument. When your target domain is German, then you should use the recently introduced [BERT model for german](https://github.com/huggingface/pytorch-transformers/pull/688) via passing `bert-base-german-cased`. \r\n\r\nThis should fine-tune the German BERT model :)",
"@stefan-it thanks alot. I wasn't aware of german specific bert model. Awesome!"
] | 1,563 | 1,564 | 1,564 | NONE | null | Hi,
My target domain is German. Can I still use the scripts and code under the `lm_finetuning` folder to fine-tune pre-trained BERT models, or are those only for English target domains? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/885/timeline | completed | null | null |
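Putting the answer above into a command line, a sketch using `simple_lm_finetuning.py` (the corpus and output paths are placeholders, and the flag names are from memory, so they are worth verifying against the script's `--help`):
```
python simple_lm_finetuning.py \
  --bert_model bert-base-german-cased \
  --train_corpus my_german_corpus.txt \
  --output_dir finetuned_german_bert/ \
  --do_train
```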
https://api.github.com/repos/huggingface/transformers/issues/884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/884/comments | https://api.github.com/repos/huggingface/transformers/issues/884/events | https://github.com/huggingface/transformers/issues/884 | 472,224,934 | MDU6SXNzdWU0NzIyMjQ5MzQ= | 884 | Customized BertForTokenClassification Model | {
"login": "lixin4ever",
"id": 18526640,
"node_id": "MDQ6VXNlcjE4NTI2NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/18526640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lixin4ever",
"html_url": "https://github.com/lixin4ever",
"followers_url": "https://api.github.com/users/lixin4ever/followers",
"following_url": "https://api.github.com/users/lixin4ever/following{/other_user}",
"gists_url": "https://api.github.com/users/lixin4ever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lixin4ever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lixin4ever/subscriptions",
"organizations_url": "https://api.github.com/users/lixin4ever/orgs",
"repos_url": "https://api.github.com/users/lixin4ever/repos",
"events_url": "https://api.github.com/users/lixin4ever/events{/privacy}",
"received_events_url": "https://api.github.com/users/lixin4ever/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@lixin4ever I have the same question. Do you solve it?",
"I have solve it. Thank you!",
"@searchlink How do you solve this problem?",
"For reference, the updated resource link mentioned in the original post can be now found [here](https://huggingface.co/transformers/_modules/transformers/modeling_bert.html#BertForTokenClassification); It was affected by the renaming into `transformers`, too.",
"But I can't see any difference between \"pytorch-transformers\" and \"transformers\" except the line initializing BERT parameters. Have you met the same problem? @dennlinger ",
"I was just pointing to the up-to-date reference. I'm currently looking into token classification using BERT (or in my case, I would prefer RoBERTa or other iterations of BERT, but unfortunately they seem not available yet).",
"So Token classification using BERT does not work?\r\n",
"As you can see below, `BertForTokenClassification` works as expected with **PyTorch 1.3.1** and **Transformers 2.2.2** installed with `pip install transformers`.\r\n```\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import transformers\r\n>>> from transformers import BertForTokenClassification\r\n>>> from transformers import BertTokenizer\r\n>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n>>> model = BertForTokenClassification.from_pretrained('bert-base-uncased')\r\n>>> text='Hello, my dog is cute'\r\n>>> import torch\r\n>>> input_ids = torch.tensor(tokenizer.encode(text, add_special_tokens=True)).unsqueeze(0) # Batch size 1\r\n>>> input_ids\r\ntensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]])\r\n>>> labels = torch.tensor([1] * input_ids.size(1)).unsqueeze(0) # Batch size 1\r\n>>> labels\r\ntensor([[1, 1, 1, 1, 1, 1, 1, 1]])\r\n>>> outputs = model(input_ids, labels=labels)\r\n>>> outputs\r\n(tensor(0.7529, grad_fn=<NllLossBackward>), tensor([[[ 0.5078, 0.1628],\r\n [-0.0593, 0.0163],\r\n [ 0.0308, -0.2312],\r\n [ 0.0863, -0.1000],\r\n [-0.2833, -0.2656],\r\n [-0.2014, -0.5225],\r\n [-0.2912, -0.1220],\r\n [-0.2781, -0.2919]]], grad_fn=<AddBackward0>))\r\n>>> len(outputs)\r\n2\r\n>>> loss=outputs[0]\r\n>>> scores=outputs[1]\r\n>>> loss\r\ntensor(0.7529, grad_fn=<NllLossBackward>)\r\n>>> scores\r\ntensor([[[ 0.5078, 0.1628],\r\n [-0.0593, 0.0163],\r\n [ 0.0308, -0.2312],\r\n [ 0.0863, -0.1000],\r\n [-0.2833, -0.2656],\r\n [-0.2014, -0.5225],\r\n [-0.2912, -0.1220],\r\n [-0.2781, -0.2919]]], grad_fn=<AddBackward0>)\r\n>>>\r\n```\r\n\r\nIt's working with **TensorFlow 2.0.0** and **Transformers 2.2.2** too!\r\n```\r\nPython 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) \r\n[GCC 7.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import tensorflow as tf\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type 
is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\r\n/home/vidiemme/anaconda3/envs/huggingface/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\r\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\r\n>>> import transformers\r\n>>> from transformers import BertTokenizer\r\n>>> from transformers import TFBertForTokenClassification\r\n>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n>>> model = TFBertForTokenClassification.from_pretrained('bert-base-uncased')\r\n2019-12-17 12:54:37.120123: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA\r\n2019-12-17 12:54:37.320081: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3408000000 Hz\r\n2019-12-17 12:54:37.320815: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55a42296edd0 executing computations on platform Host. Devices:\r\n2019-12-17 12:54:37.320841: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version\r\n>>> text='Hello, my dog is cute'\r\n>>> input_ids = tf.constant(tokenizer.encode(text))[None, :] # Batch size 1\r\n>>> input_ids\r\n<tf.Tensor: id=6056, shape=(1, 8), dtype=int32, numpy=\r\narray([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]],\r\n dtype=int32)>\r\n>>> outputs = model(input_ids)\r\n>>> len(outputs)\r\n1\r\n>>> outputs\r\n(<tf.Tensor: id=7961, shape=(1, 8, 2), dtype=float32, numpy=\r\narray([[[ 0.09657212, -0.51087016],\r\n [ 0.28020248, -0.25160134],\r\n [-0.09995201, 0.0843759 ],\r\n [-0.12110823, 0.20886022],\r\n [-0.03617962, 0.00401567],\r\n [-0.03330922, 0.01042 ],\r\n [-0.21674895, -0.1601235 ],\r\n [ 0.1076538 , 0.19144017]]], dtype=float32)>,)\r\n>>> scores = outputs[0]\r\n>>> scores\r\n<tf.Tensor: id=7961, shape=(1, 8, 2), dtype=float32, numpy=\r\narray([[[ 0.09657212, -0.51087016],\r\n [ 0.28020248, -0.25160134],\r\n [-0.09995201, 0.0843759 ],\r\n [-0.12110823, 0.20886022],\r\n [-0.03617962, 0.00401567],\r\n [-0.03330922, 0.01042 ],\r\n [-0.21674895, -0.1601235 ],\r\n [ 0.1076538 , 0.19144017]]], dtype=float32)>\r\n```\r\n\r\n> So Token classification using BERT does not work?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,587 | 1,587 | NONE | null | I tried to customize the BertForTokenClassification model myself to perform sequence tagging, strictly following the [original implementation](https://huggingface.co/pytorch-transformers/_modules/pytorch_transformers/modeling_bert.html#BertForTokenClassification). However, I cannot obtain the same results (I get lower scores) as BertForTokenClassification, even when I simply set the top-most tagging component to a Linear layer (i.e., the model should be identical to BertForTokenClassification). My code is below:
```python
class BertTagger(BertPreTrainedModel):
    def __init__(self, bert_config):
        super(BertTagger, self).__init__(bert_config)
        self.num_labels = bert_config.num_labels
        #self.tagger_config = TaggerConfig()
        self.bert = BertModel(bert_config)
        self.bert_dropout = nn.Dropout(bert_config.hidden_dropout_prob)
        self.classifier = nn.Linear(bert_config.hidden_size, bert_config.num_labels)
        self.apply(self.init_weights)

    def forward(self, input_ids, token_type_ids=None, attention_mask=None, labels=None,
                position_ids=None, head_mask=None):
        outputs = self.bert(input_ids, position_ids=position_ids, token_type_ids=token_type_ids,
                            attention_mask=attention_mask, head_mask=head_mask)
        # the hidden states of the last Bert Layer, shape: (bsz, seq_len, hsz)
        tagger_input = outputs[0]
        tagger_input = self.bert_dropout(tagger_input)
        logits = self.classifier(tagger_input)
        outputs = (logits,) + outputs[2:]
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            # Only keep active parts of the loss
            if attention_mask is not None:
                active_loss = attention_mask.view(-1) == 1
                active_logits = logits.view(-1, self.num_labels)[active_loss]
                active_labels = labels.view(-1)[active_loss]
                loss = loss_fct(active_logits, active_labels)
            else:
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss,) + outputs
        return outputs
```
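One frequent cause of lower scores with custom subclasses, worth double-checking here: build the model via `from_pretrained` so the encoder actually loads pretrained weights (the label count below is illustrative):
```python
model = BertTagger.from_pretrained('bert-base-uncased', num_labels=9)
```
Instantiating `BertTagger(bert_config)` directly leaves the BERT encoder randomly initialized.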
Has anyone encountered the same issue? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/884/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/883/comments | https://api.github.com/repos/huggingface/transformers/issues/883/events | https://github.com/huggingface/transformers/issues/883 | 472,197,750 | MDU6SXNzdWU0NzIxOTc3NTA= | 883 | Upgrade to new FP16 | {
"login": "bhavsarpratik",
"id": 23080576,
"node_id": "MDQ6VXNlcjIzMDgwNTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23080576?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavsarpratik",
"html_url": "https://github.com/bhavsarpratik",
"followers_url": "https://api.github.com/users/bhavsarpratik/followers",
"following_url": "https://api.github.com/users/bhavsarpratik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavsarpratik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavsarpratik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavsarpratik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavsarpratik/orgs",
"repos_url": "https://api.github.com/users/bhavsarpratik/repos",
"events_url": "https://api.github.com/users/bhavsarpratik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavsarpratik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just saw run_glue has the new one."
] | 1,563 | 1,564 | 1,564 | NONE | null | The original FP16_Optimizer and the old “Amp” API are deprecated and subject to removal at any time. Should we consider moving to the new one?
https://nvidia.github.io/apex/amp.html#for-users-of-the-old-fp16-optimizer | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/883/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/882/comments | https://api.github.com/repos/huggingface/transformers/issues/882/events | https://github.com/huggingface/transformers/pull/882 | 472,140,936 | MDExOlB1bGxSZXF1ZXN0MzAwNjA5MDg5 | 882 | fix squad v1 error (na_prob_file should be None) | {
"login": "Liangtaiwan",
"id": 20909894,
"node_id": "MDQ6VXNlcjIwOTA5ODk0",
"avatar_url": "https://avatars.githubusercontent.com/u/20909894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Liangtaiwan",
"html_url": "https://github.com/Liangtaiwan",
"followers_url": "https://api.github.com/users/Liangtaiwan/followers",
"following_url": "https://api.github.com/users/Liangtaiwan/following{/other_user}",
"gists_url": "https://api.github.com/users/Liangtaiwan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Liangtaiwan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Liangtaiwan/subscriptions",
"organizations_url": "https://api.github.com/users/Liangtaiwan/orgs",
"repos_url": "https://api.github.com/users/Liangtaiwan/repos",
"events_url": "https://api.github.com/users/Liangtaiwan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Liangtaiwan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/882?src=pr&el=h1) Report\n> Merging [#882](https://codecov.io/gh/huggingface/pytorch-transformers/pull/882?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/067923d3267325f525f4e46f357360c191ba562e?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/882?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #882 +/- ##\n=======================================\n Coverage 79.03% 79.03% \n=======================================\n Files 34 34 \n Lines 6234 6234 \n=======================================\n Hits 4927 4927 \n Misses 1307 1307\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/882?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/882?src=pr&el=footer). Last update [067923d...a7fce6d](https://codecov.io/gh/huggingface/pytorch-transformers/pull/882?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks indeed, this should fix #891"
] | 1,563 | 1,564 | 1,564 | CONTRIBUTOR | null | When running squad v1, na_prob_file should be None.
Otherwise, there will be an error when evaluating on the test data. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/882/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/882",
"html_url": "https://github.com/huggingface/transformers/pull/882",
"diff_url": "https://github.com/huggingface/transformers/pull/882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/882.patch",
"merged_at": 1564059600000
} |
https://api.github.com/repos/huggingface/transformers/issues/881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/881/comments | https://api.github.com/repos/huggingface/transformers/issues/881/events | https://github.com/huggingface/transformers/issues/881 | 472,126,020 | MDU6SXNzdWU0NzIxMjYwMjA= | 881 | can not convert_tf_checkpoint_to_pytorch | {
"login": "Zhangxuri",
"id": 19551110,
"node_id": "MDQ6VXNlcjE5NTUxMTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/19551110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhangxuri",
"html_url": "https://github.com/Zhangxuri",
"followers_url": "https://api.github.com/users/Zhangxuri/followers",
"following_url": "https://api.github.com/users/Zhangxuri/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhangxuri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhangxuri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhangxuri/subscriptions",
"organizations_url": "https://api.github.com/users/Zhangxuri/orgs",
"repos_url": "https://api.github.com/users/Zhangxuri/repos",
"events_url": "https://api.github.com/users/Zhangxuri/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhangxuri/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"```\r\npython convert.py --tf_checkpoint_path=./uncased_L-12_H-768_A-12/bert_model.ckpt --bert_config_file=./uncased_L-12_H-768_A-12/bert_config.json --pytorch_dump_path=./uncased_L-12_H-768_A-12/bert_model.bin\r\n```"
] | 1,563 | 1,563 | 1,563 | NONE | null | ```
.
├── convert_tf_checkpoint_to_pytorch.py
├── uncased_L-12_H-768_A-12
│ ├── bert_config.json
│ ├── bert_model.ckpt.data-00000-of-00001
│ ├── bert_model.ckpt.index
│ ├── bert_model.ckpt.meta
│ └── vocab.txt
├── uncased_L-12_H-768_A-12.zip
└── Untitled.ipynb
```
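The traceback below stems from passing the checkpoint *directory*; `tf.train.list_variables` expects the checkpoint prefix instead. A sketch of a working invocation given the layout above (the dump filename is illustrative; the fix matches the comment earlier in this row):

```
python convert.py \
  --tf_checkpoint_path=./uncased_L-12_H-768_A-12/bert_model.ckpt \
  --bert_config_file=./uncased_L-12_H-768_A-12/bert_config.json \
  --pytorch_dump_path=./uncased_L-12_H-768_A-12/pytorch_model.bin
```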
```
(base) ➜ ckpt_to_bin git:(master) ✗ python convert.py --tf_checkpoint_path=./uncased_L-12_H-768_A-12 --bert_config_file=./uncased_L-12_H-768_A-12/bert_config.json --pytorch_dump_path=./uncased_L-12_H-768_A-12
Building PyTorch model from configuration: {
"attention_probs_dropout_prob": 0.1,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"num_labels": 2,
"output_attentions": false,
"output_hidden_states": false,
"torchscript": false,
"type_vocab_size": 2,
"vocab_size": 30522
}
INFO:pytorch_transformers.modeling_bert:Converting TensorFlow checkpoint from /home/zxr/summary/bertsum/src/ckpt_to_bin/uncased_L-12_H-768_A-12
Traceback (most recent call last):
File "convert.py", line 65, in <module>
args.pytorch_dump_path)
File "convert.py", line 36, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/home/zxr/anaconda3/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 83, in load_tf_weights_in_bert
init_vars = tf.train.list_variables(tf_path)
File "/home/zxr/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/checkpoint_utils.py", line 95, in list_variables
reader = load_checkpoint(ckpt_dir_or_file)
File "/home/zxr/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/checkpoint_utils.py", line 63, in load_checkpoint
"given directory %s" % ckpt_dir_or_file)
ValueError: Couldn't find 'checkpoint' file or checkpoints in given directory /home/zxr/summary/bertsum/src/ckpt_to_bin/uncased_L-12_H-768_A-12
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/881/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/880/comments | https://api.github.com/repos/huggingface/transformers/issues/880/events | https://github.com/huggingface/transformers/issues/880 | 472,115,663 | MDU6SXNzdWU0NzIxMTU2NjM= | 880 | Printing Iteration every example problem | {
"login": "AhmedBahaaElDinMohammed",
"id": 51789113,
"node_id": "MDQ6VXNlcjUxNzg5MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/51789113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AhmedBahaaElDinMohammed",
"html_url": "https://github.com/AhmedBahaaElDinMohammed",
"followers_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/followers",
"following_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/following{/other_user}",
"gists_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/subscriptions",
"organizations_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/orgs",
"repos_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/repos",
"events_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/events{/privacy}",
"received_events_url": "https://api.github.com/users/AhmedBahaaElDinMohammed/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Are you running in jupyter? This might be an artifact of how `tqdm` is interacting with whatever shell you're running it in. If you don't want to print anything, you could simply drop the `tdqm` wrapper and just iterate over `train_dataloader`. ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,570 | 1,570 | NONE | null | ```
Iteration: 0%| | 1/250 [00:00<03:28, 1.19it/s]
Iteration: 1%| | 2/250 [00:01<03:21, 1.23it/s]
Iteration: 1%| | 3/250 [00:02<03:17, 1.25it/s]
Iteration: 2%|▏ | 4/250 [00:03<03:14, 1.27it/s]
Iteration: 2%|▏ | 5/250 [00:03<03:11, 1.28it/s]
Iteration: 2%|▏ | 6/250 [00:04<03:09, 1.29it/s]
Iteration: 3%|▎ | 7/250 [00:05<03:05, 1.31it/s]
```
I don't know what is causing this output; can somebody help?
Here is the code:
```
train_iterator = trange(int(num_train_epochs), desc="Epoch", disable=local_rank not in [-1, 0])
set_seed(42)
Epochs = 0
for _ in train_iterator:
    epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=local_rank not in [-1, 0])
    # print('Check')
    Epochs = Epochs + 1
    for step, batch in enumerate(epoch_iterator):
        model.train()
```
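One likely cause is that each `tqdm` refresh is printed on a new line when output is redirected (for example in a notebook or a log file). A possible mitigation, as a sketch: `tqdm.auto` picks a notebook-friendly progress bar, and `mininterval` throttles how often the bar refreshes.

```python
from tqdm.auto import tqdm, trange

train_iterator = trange(int(num_train_epochs), desc="Epoch")
for _ in train_iterator:
    # Refresh the inner bar at most every 10 seconds
    epoch_iterator = tqdm(train_dataloader, desc="Iteration", mininterval=10.0)
    for step, batch in enumerate(epoch_iterator):
        model.train()
```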
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/880/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/879/comments | https://api.github.com/repos/huggingface/transformers/issues/879/events | https://github.com/huggingface/transformers/pull/879 | 472,114,745 | MDExOlB1bGxSZXF1ZXN0MzAwNTg4MjM0 | 879 | fix #878 | {
"login": "xijiz",
"id": 12234085,
"node_id": "MDQ6VXNlcjEyMjM0MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/12234085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xijiz",
"html_url": "https://github.com/xijiz",
"followers_url": "https://api.github.com/users/xijiz/followers",
"following_url": "https://api.github.com/users/xijiz/following{/other_user}",
"gists_url": "https://api.github.com/users/xijiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xijiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xijiz/subscriptions",
"organizations_url": "https://api.github.com/users/xijiz/orgs",
"repos_url": "https://api.github.com/users/xijiz/repos",
"events_url": "https://api.github.com/users/xijiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xijiz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879?src=pr&el=h1) Report\n> Merging [#879](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/067923d3267325f525f4e46f357360c191ba562e?src=pr&el=desc) will **decrease** coverage by `0.01%`.\n> The diff coverage is `0%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #879 +/- ##\n==========================================\n- Coverage 79.03% 79.02% -0.02% \n==========================================\n Files 34 34 \n Lines 6234 6235 +1 \n==========================================\n Hits 4927 4927 \n- Misses 1307 1308 +1\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_utils.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX3V0aWxzLnB5) | `86.17% <0%> (-0.36%)` | :arrow_down: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879?src=pr&el=footer). Last update [067923d...31bc1dd](https://codecov.io/gh/huggingface/pytorch-transformers/pull/879?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Closing this for now. Feel free to re-open if you want to continue with this PR."
] | 1,563 | 1,566 | 1,566 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/879/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/879",
"html_url": "https://github.com/huggingface/transformers/pull/879",
"diff_url": "https://github.com/huggingface/transformers/pull/879.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/879.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/878/comments | https://api.github.com/repos/huggingface/transformers/issues/878/events | https://github.com/huggingface/transformers/issues/878 | 472,113,887 | MDU6SXNzdWU0NzIxMTM4ODc= | 878 | Fail to load pre-trained tokens. | {
"login": "xijiz",
"id": 12234085,
"node_id": "MDQ6VXNlcjEyMjM0MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/12234085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xijiz",
"html_url": "https://github.com/xijiz",
"followers_url": "https://api.github.com/users/xijiz/followers",
"following_url": "https://api.github.com/users/xijiz/following{/other_user}",
"gists_url": "https://api.github.com/users/xijiz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xijiz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xijiz/subscriptions",
"organizations_url": "https://api.github.com/users/xijiz/orgs",
"repos_url": "https://api.github.com/users/xijiz/repos",
"events_url": "https://api.github.com/users/xijiz/events{/privacy}",
"received_events_url": "https://api.github.com/users/xijiz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, how do you solve this problem? If we set `pretrained_model_name_or_path` as a path to vocab.txt, it still need the two files: added_tokens.json, special_tokens_map.json. Where can we get these files? ",
"> Hi, how do you solve this problem? If we set `pretrained_model_name_or_path` as a path to vocab.txt, it still need the two files: added_tokens.json, special_tokens_map.json. Where can we get these files?\r\n\r\nYou can ignore these files: added_tokens.json, special_tokens_map.json. All you need to do is to modify some code lines in the file: tokenization_utils.py. I have modified it in my forked repository as you can see [here](https://github.com/xijiz/pytorch-transformers/commit/31bc1ddf4f68ad790da9874a3623cf22d62dc186). ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,571 | 1,571 | NONE | null | The PreTrainedTokenizer fails to load tokenizer files when they are loaded from a local path.
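As a workaround, pointing `from_pretrained` at the directory that contains the vocabulary avoids the path collision (a minimal sketch; `./my_model_dir/` is an illustrative local directory holding vocab.txt):

```python
from pytorch_transformers import BertTokenizer

# './my_model_dir/' is assumed to contain vocab.txt
tokenizer = BertTokenizer.from_pretrained('./my_model_dir/')
```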
The error is caused by lines 174-182 in [tokenization_utils.py](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/tokenization_utils.py). The code assumes that there are three tokenizer files: added_tokens.json, special_tokens_map.json, and vocab.txt. However, the paths of these tokenizer files all resolve to the same path if the given parameter "pretrained_model_name_or_path" is the full path of "vocab.txt". | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/878/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/877/comments | https://api.github.com/repos/huggingface/transformers/issues/877/events | https://github.com/huggingface/transformers/issues/877 | 471,990,319 | MDU6SXNzdWU0NzE5OTAzMTk= | 877 | error when tried to migrate from pretrained-bert to transformers. | {
"login": "y8miao",
"id": 47309353,
"node_id": "MDQ6VXNlcjQ3MzA5MzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47309353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y8miao",
"html_url": "https://github.com/y8miao",
"followers_url": "https://api.github.com/users/y8miao/followers",
"following_url": "https://api.github.com/users/y8miao/following{/other_user}",
"gists_url": "https://api.github.com/users/y8miao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/y8miao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/y8miao/subscriptions",
"organizations_url": "https://api.github.com/users/y8miao/orgs",
"repos_url": "https://api.github.com/users/y8miao/repos",
"events_url": "https://api.github.com/users/y8miao/events{/privacy}",
"received_events_url": "https://api.github.com/users/y8miao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The `labels` input for the model is not the number of labels but the tensor of labels (see the docstrings and doc).",
"> The `labels` input for the model is not the number of labels but the tensor of labels (see the docstrings and doc).\r\n\r\nThank you for the answer. I'm trying to train the model to do polarity classification for google reviews, this is how the code that computes \"logits\" looked like before in pytorch-pretrained-bert:\r\n`logits = model(input_ids, segment_ids, input_mask, labels=None)`\r\nDo I do after the migration now:\r\n\r\n`output = model(input_ids,labels=None)`\r\n`loss, logits = output[:2]`\r\n\r\nin order to match the similar behaviors? Thanks.\r\np.s. I'm new to the field, picked up the code from the street and trying to figure out how to make it work, I'm sorry if the question is dumb.",
"How was this solved? I have the same problem and for me : \r\n\r\noutput = model(input_ids,labels=None)\r\nloss, logits = output[:2]\r\n\r\ndoes not solve it"
] | 1,563 | 1,568 | 1,563 | NONE | null | The code used to be:
```
logits = model(input_ids, segment_ids, input_mask, labels=None)
if OUTPUT_MODE == "classification":
    loss_fct = CrossEntropyLoss()
    loss = loss_fct(logits.view(-1, num_labels), label_ids.view(-1))
elif OUTPUT_MODE == "regression":
    loss_fct = MSELoss()
    loss = loss_fct(logits.view(-1), label_ids.view(-1))
if GRADIENT_ACCUMULATION_STEPS > 1:
    loss = loss / GRADIENT_ACCUMULATION_STEPS
loss.backward()
print("\r%f" % loss, end='')
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
if (step + 1) % GRADIENT_ACCUMULATION_STEPS == 0:
    optimizer.step()
    optimizer.zero_grad()
    global_step += 1
```
According to the README, I changed it to:
```
output = model(input_ids,labels=num_labels)
loss, logits = output[:2]
if GRADIENT_ACCUMULATION_STEPS > 1:
    loss = loss / GRADIENT_ACCUMULATION_STEPS
loss.backward()
print("\r%f" % loss, end='')
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
if (step + 1) % GRADIENT_ACCUMULATION_STEPS == 0:
    optimizer.step()
    optimizer.zero_grad()
    global_step += 1
```
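For reference, a minimal sketch of the intended v1.0 call: `labels` must be a tensor of gold label ids (a `LongTensor` of shape `[batch_size]`), not the number of labels; `label_ids` below is assumed to be such a tensor:

```python
outputs = model(input_ids,
                token_type_ids=segment_ids,
                attention_mask=input_mask,
                labels=label_ids)  # label_ids: LongTensor of shape [batch_size]
loss, logits = outputs[:2]
```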
Before the change, the issue I had was: 'tuple' object has no attribute 'view'.
After the change, I'm having a similar issue that says:
```
Traceback (most recent call last):
  File "C:/Users/Youchen Miao/PycharmProjects/BERT_sent3/to_feature.py", line 150, in <module>
    output = model(input_ids,labels=num_labels)
  File "C:\Users\Youchen Miao\PycharmProjects\BERT_sent2\BERT_sent3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\Youchen Miao\PycharmProjects\BERT_sent2\BERT_sent3\lib\site-packages\pytorch_transformers\modeling_bert.py", line 985, in forward
    loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
AttributeError: 'int' object has no attribute 'view'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/877/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/876/comments | https://api.github.com/repos/huggingface/transformers/issues/876/events | https://github.com/huggingface/transformers/issues/876 | 471,988,735 | MDU6SXNzdWU0NzE5ODg3MzU= | 876 | How to use BERT for finding similar sentences or similar news? | {
"login": "Raghavendra15",
"id": 7957331,
"node_id": "MDQ6VXNlcjc5NTczMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7957331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raghavendra15",
"html_url": "https://github.com/Raghavendra15",
"followers_url": "https://api.github.com/users/Raghavendra15/followers",
"following_url": "https://api.github.com/users/Raghavendra15/following{/other_user}",
"gists_url": "https://api.github.com/users/Raghavendra15/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raghavendra15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raghavendra15/subscriptions",
"organizations_url": "https://api.github.com/users/Raghavendra15/orgs",
"repos_url": "https://api.github.com/users/Raghavendra15/repos",
"events_url": "https://api.github.com/users/Raghavendra15/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raghavendra15/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\nBERT out-of-the-box is not the best option for this task, as the run-time in your setup scales with the number of sentences in your corpus. I.e., if you have 10,000 sentences/articles in your corpus, you need to classify 10k pairs with BERT, which is rather slow.\r\n\r\nA better option is to generate sentence embeddings: Every sentence / article is mapped to a fixed sized vector. You need to map your 3k articles only once to a vector.\r\n\r\nA new query is then also mapped to a vector. In this setup, you only need to run BERT for one sentence (at inference), independent how large your corpus is.\r\n\r\nThen, you can use cosine-similiarity, or manhatten / euclidean distance to find sentence embeddings that are closest = that are the most similar.\r\n\r\nI released today a framework which uses pytorch-transformers for exactly that purpose:\r\nhttps://github.com/UKPLab/sentence-transformers\r\n\r\nI also uploaded an example for semantic search, where each sentence in a corpus is mapped to a vector and than cosine-similarity is used to find the most similar sentences / vectors:\r\nhttps://github.com/UKPLab/sentence-transformers/blob/master/examples/application_semantic_search.py\r\n\r\nLet me know if you have further questions.",
"I think you can use [faiss](https://github.com/facebookresearch/faiss) for storing and finding similar embeddings. ",
"@nreimers Amazing!! Thank you so much. What you created is a real-life savior! Can this be used for finding similar news(given title and abstract)? I ran the code and I have the following doubts. \r\nWhich model should I use? \r\n bert-large-nli-stsb-mean-tokens vs bert-base-nli-mean-tokens vs bert-large-nli-mean-tokens (what are the datasets on which all these models are trained on?) \r\n\r\nCan I use [faiss](https://github.com/facebookresearch/faiss) to compute the search/distance of the vectors instead of L2/Manhattan/Cosine distances? \r\n\r\nMany thanks to @stefan-it for introducing me to [faiss](https://github.com/facebookresearch/faiss).\r\n\r\n\r\n",
"@nreimers I don't think scipy.spatial.distance.cdist is good enough, it takes a lot of time to compute the results, almost 10 minutes on a corpus of 3.9k news articles. I think I should try using [faiss](https://github.com/facebookresearch/faiss). I don't know anything about [faiss](https://github.com/facebookresearch/faiss) but I will try.",
"Hi @Raghavendra15,\r\nregarding the model I sadly cannot be helpful, you would need to test them. In general, sentence embeddings methods (like Inference, Universal Sentence Encoder or my git) work well for short text, i.e., for sentences. For longer text with multiple sentences their performance often decrease and average word embeddings or tf-idf is in many case a much better choice. For longer texts, all these sentence embeddings methods are not really needed.\r\n\r\nIt would be great if you have some training data. Then, it would be quite easy to fine-tune a model specifically for your task. It should achieve much better performances than the pre-trained models.\r\n\r\nI think the issue is not scipy.spatial.distance.cdist. On a corpus with 100k embeddings and 1024 embedding size, it requires about 0.2 seconds per query (if you can batch queries, even less time is needed).\r\n\r\nI think the issue might be the generation of the 4k sentence embeddings? Transformer networks like BERT are extremely slow on CPUs. However, on a GPU, the implementation can process about 2000 sentences per seconds. On a GPU, only about 40 sentences.\r\n\r\nBut the corpus must only be processed once and can then be stored & loaded from disk. At inference, you just need to generate one embedding for the respective query.\r\n\r\nYou can of course combine this with faiss. Faiss generates index structures that allow a quick search in vector space and is especially suitable if you have a high number (millions) of vectors. For 4k vectors, scipy takes about 0.008 seconds per queries to find the most similar vectors. \r\n\r\nSo either something is really strange with scipy on your computer, or the long run-time comes from the generation of the embeddings.",
"@nreimers Thank you very much for your response. You're absolutely right, most of the time taken is for generating the embedding for 4k sentences. I'm now confused between choosing this model over [XLNet](https://github.com/zihangdai/xlnet), XLNet has achieved the state of the art results.\r\n\r\nBy your comments on faiss, As long as I have a smaller dataset, results from faiss and scipy won't make any difference? However, If I had millions or billions of news articles then using faiss makes sense right? For smaller datasets, there is no difference in terms of quality of matches between faiss and scipy(the results are the same for computing the distances)? \r\n\r\nI have one important question, If I want to train the model as you suggested which would yield better results, In that case, I should have labeled dataset right? However, for news, I only have titles and abstract about that news. Is there a way to train them without the labels? ",
"Hi,\r\nXLNet achieved state-of-the-art performance for supervised tasks like classification. But it is unclear if it generates also good embeddings for unsupervised tasks.\r\n\r\nIn the framework you can choose XLNet, but I was only able to produce results that are slightly below those of BERT.\r\n\r\nOthers also have problems getting a good performance with Xlnet for supervised tasks, as it appears that it is extremely sensitive to the hyper parameters.\r\n\r\nIf you have millions of docs, faiss makes sense. With scipy, you get exact scores. With faiss, the scores are fuzzy and the returned most similar vectors must not necessarily be the actual most similar vectors. There can be small variations. But I think the difference will be small.\r\n\r\nOften you have in your data some structure, like categories or links between news articles. This structure can be used to fine-tune a model. Let's say you have links linking to similar events. Than you train the network with triplet loss with the two linked articles and one random other article as negative example.\r\n\r\nThis will give you a vector space where (possibly) linked articles are close. ",
"@nreimers Thank you very much for your quick response. \r\nAre the existing model \"bert-large-nli-stsb-mean-tokens\" better than the google news word2vec [google_news_300](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing), they claim that-\" We are publishing pre-trained vectors trained on part of Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases.\"\r\nIs the pretrained \"bert-large-nli-stsb-mean-tokens\" better than google's pre-trained news vectors?\r\n\r\nFor training the existing model to improve results for news similarity, the problem I have is I can't create a dataset to compute triplet loss. For triplet loss to work in the case of news similarity for query news ['**a**'], I need to find a news article ['**b**'] which is similar as a positive example and a dissimilar news article ['**c**'] as a negative example. Like <a,b> positive example and <a,c> negative example.\r\n\r\nHowever, If I run the news every day then, new entities/topics are going to pop up every single day? I need to update my embeddings right? I don't know how to handle this situation. \r\n",
"Google News vectors are just word vectors, you still need a strategy to derive sentence embeddings from these. But as mentioned earlier, average word embeddings is a promising idea for your task. Note, average word embeddings models will be added soon to the repository.\r\n\r\nConstantly updating of the model is not needed. News are changing, but the used words remain the same. So training once should give you a model that can be used for a long time. ",
"@nreimers Thank you very much! Any tentative date by when the average word embeddings will be added to the repository? \r\n\r\nI want to know how to evaluate the results of similar sentences numerically, for example when I use your model to evaluate for a given news, finding similar news in the corpus. \r\n\r\nIs there a way to measure numerically how good the similar sentences are in the below example? I used BLEU score, but the problem is, it's not an accurate measure of similarity. BLEU score doesn't consider the context of the sentence, it just blindly counts whether a word in the query sentence is present in the similar sentence regardless of where the word is placed.\r\n\r\nFor an item, I get related items.\r\nIn the below example, the first title in relatedItems is similar, however, the second item in \"relatedItems\" is not at all similar which talks about Stephen Colbert and Joe Biden. \r\nSuppose I use word2vec model for the above task it might give me two totally different sentences as relatedItems, In that case, how can I evaluate both the models and claim numerically which one is better?\r\n\r\nExample:\r\n\r\n{\"title\": \"Google Is Rolling Out A New Version Of Android Auto - Here's What You Can Expect\",\r\n\"abstract\": \"The new Android Auto. Google If you use Android Auto, you're about to receive to a nice upgrade.\",\r\n}\r\n\"relatedItems\": \r\n[{\r\n\r\n \"title\": \"New Android ransomware is spreading through text messages\",\r\n \"abstract\": \"There\\u2019s a new type of Android ransomware making the rounds that leverages SMS to spread, according to a new report from cyberappsecurity com\",\r\n},\r\n{\r\n \"title\": \"Stephen Colbert Brings Curtain Down On Democratic Debates With Joe Biden Tweaks\",\r\n \"abstract\": \"Stephen Colbert closed his second of two live Late Show monologues with a spree of zingers directed at Joe Biden, mixing in plenty for the o\",\r\n\r\n}\r\n]}\r\n\r\n\r\n\r\n",
"Bleu wouldn't be a good measure, because the best similarity metric to find similar news would be: Bleu (of course). \r\n\r\nWhat you would need is an annotated Corpus. For a given article, get for example the 20 articles with the highest tf idf similarity. Then annotate every pair as similar or not.\r\n\r\nWith this data you can compare different methods with Ndcg about how well they rank the 20 candidate articles. \r\n\r\nAvg. Word embeddings should be included within the next two weeks to the repo. ",
"@nreimers When you say -\"Bleu wouldn't be a good measure, because the best similarity metric to find similar news would be: Bleu (of course).\" \r\nDo you mean when I get similar news like in the above example, BLEU is the best metric to measure how similar the two news articles are? Please correct me if I understood this wrong.\r\n\r\nIn the STS benchmark, I saw a pair in the training dataset with gold-standard human evaluated scores. The following paid had a score of 5, however, when I use BLEU scores for 1gram they don't get a score of 1. Instead, they get the following scores. BLEU looks for the exact word to be present in the reference sentence that's the problem I feel. There's no notion of similarity.\r\n\r\ns=word_tokenize(\"The polar bear is sliding on the snow\")\r\nreference = [s]\r\ncandidate =word_tokenize(\"The polar bear is sliding across the snow\") \r\nprint('Individual 1-gram: %f' % sentence_bleu(reference, candidate, weights=(1, 0, 0, 0)))\r\nprint('Individual 2-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 1, 0, 0)))\r\nprint('Individual 3-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 1, 0)))\r\nprint('Individual 4-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 0, 1)))\r\n\r\nIndividual 1-gram: 0.875000\r\nIndividual 2-gram: 0.714286\r\nIndividual 3-gram: 0.500000\r\nIndividual 4-gram: 0.400000\r\n\r\nreference sentence has 8 words out of which the candidate matches exactly 7 words, so 7/8 score for 1-gram matches.\r\n\r\nI'm not sure how the STS benchmarks are evaluated, I'm currently looking into them. If you have any leads or a document I would be more than happy to read them. \r\n\r\nThank you very much for your help :)\r\n\r\n",
"No, BLEU is a terrible idea for evaluation.\r\n\r\nSTS is usually evaluated using Pearson correlation between gold and predicted labels. But Pearson correlation is also a bad idea:\r\nhttps://aclweb.org/anthology/C16-1009\r\n\r\nI strongly recommend to use Spearman correlation for comparison. ",
"@nreimers Kudos on the COLING paper! It's very well written. In the paper, you have mentioned How Pearson correlation can be misleading or ill-suited for the semantic text-similarity task. However, you did not suggest to use Spearman correlation instead of Pearson correlation. But for me, you suggested me to use Spearman correlation why? (That's my current understanding of the paper)\r\n\r\nCan I use the Spearman rank correlation from scipy?\r\nBasically, I want to compare the BERT output sentences from your model and output from word2vec to see which one gives better output.\r\nSo there is a reference sentence and I get a bunch of similar sentences as I mentioned in the previous example [ please refer to the JSON output in the previous comments].\r\n\r\nWill the below code is the right way to do the comparison? \r\nIn your sentence transformer, you have used the same below package in SentenceEvaluator class. I couldn't figure out how to use that class for my comparison. \r\n\r\nWill you please give me some idea in this regard?\r\n\r\nExample code:\r\nfrom scipy.stats import spearmanr\r\nx = [1, 2, 3] ---> I will use BERT and word2vec embeddings here.\r\nx_corr = [2, 4, 6]\r\ncorr, p_value = spearmanr(x, x_corr)\r\nprint (corr)\r\n",
"Hi @Raghavendra15 \r\nThe issue with pearson correlation is, that it assumes a linear correlation between the system output and gold labels. Adding a montone function to the system output can change the scores (make them better or worse), which does not really make sense in applications.\r\n\r\nAssume you have a systems that predicts the perfect gold scores, however, the output is output=sqrt(gold_label).\r\n\r\nThis system would get a really low Pearson correlation. However, for every application, this system would be perfect, as it predicts the gold labels. With Spearman correlation, you don't have this issue. There, just the ranking of the scores are important.\r\n\r\nIn general I think the STS tasks (or the STS benchmark) are not really well suited to evaluated approaches. The STS tasks with Pearson/Spearman correlation weights every score similar, but in applications, we are often only interested in certain examples.\r\n\r\nFor example, if we search for pairs with the highest similarity, then we don't care how the scores are for low similarity pairs. A system that gives a perfect score for high similarity pairs and a random score for low similarity pairs would be great for this application. However, this system would get a low Pearson/Spearman correlation, as it fails to correctly order the somewhat-similar and unsimilar pairs.\r\n\r\nIf you want so estimate the similarity of two vectors, you should use cosine-similarity or Manhatten/Euclidean distance. \r\n\r\nSpearman correlation is only used for the comparison to gold scores.\r\n\r\nAssume you have the pairs:\r\nx_1, y_1\r\nx_2, y_2\r\n\r\n...\r\nfor every (x_i, y_i) you have a score s_i from 0 ... 1 indicating a gold label score for their similarity. \r\n\r\nYou can check how good the embeddings are by computing the cosine similarity between the embeddings for (x_i, y_i) and then you compute the Spearman correlation between these computes cosine similarity scores and the gold score s_i.\r\n\r\nNote: Currently I add methods to compute average word embeddings and similar methods to the repository. So a comparison will become easier.",
"@nreimers Last week you added the methods to compute average word embeddings should I use that method when I get a sentence embedding or will there be a pre-trained average word embedding weights? \r\nIn the below code I will get the embeddings once I pass the input strings. Should I use the compute avg word embedding method on top of this?\r\n\r\ncorpus = ['A man is eating a food.',\r\n 'A man is eating a piece of bread.' ]\r\ncorpus_embeddings = embedder.encode(corpus)\r\nor \r\nBy any chance, pre-trained avg-word embedding weights will be uploaded to the repository by any time this week. ",
"Hi @Raghavendra15 \r\nI just uploaded v0.2.0 to github and PyPi:\r\nhttps://github.com/UKPLab/sentence-transformers\r\n\r\nYou can update with pip install -U sentence-transformers\r\n\r\nI added an example for average word embeddings (+a DAN layer that is trainable):\r\nhttps://github.com/UKPLab/sentence-transformers/blob/master/examples/training_stsbenchmark_avg_word_embeddings.py\r\n\r\nYou can also use it without the DAN layer. There is also a tokenizer implemented that allows the usage of the word2vec Google News vectors. These vectors contain phrases like 'New_York'. These phrases are detected by the tokenizer and mapped to the correct embedding for New_York. But there is currently no example for this in the repo. If you need help, let me know.\r\n\r\nTo get avg. word embeddings only (without DAN), the code must look like this:\r\n```\r\n# Map tokens to traditional word embeddings like GloVe\r\nword_embedding_model = models.WordEmbeddings.from_text_file('glove.6B.300d.txt.gz')\r\n\r\n# Apply mean pooling to get one fixed sized sentence vector\r\npooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),\r\n pooling_mode_mean_tokens=True,\r\n pooling_mode_cls_token=False,\r\n pooling_mode_max_tokens=False)\r\nmodel = SentenceTransformer(modules=[word_embedding_model, pooling_model])\r\n\r\ncorpus_embeddings = model.encode(corpus)\r\n```\r\n\r\nNext release will update include support for RoBERTa and add other sentence embeddings methods (like USE, LASER), which will be trainable.",
"@nreimers Thank you very much! You spoke my mind with RoBERTa, I was about to ask you about it. But with the avg-embedding approach, I won't be using BERT at all right? \r\n\r\nIn addition to that, I won't be training the model. I don't think I fully understand this. Earlier I would pass a pretrained weight model into SentenceTransformer, however now I won't pass anything related to BERT, does that mean I won't be using BERT?",
"@Raghavendra15 \r\nThe framework offers you a lot of flexibility. You can choose between the following embedding approaches:\r\n- BERT or XLNet (RoBERTa and other will follow)\r\n- Traditional word embeddings like GloVe, word2vec etc.\r\n\r\nThen, you can choose between different pooling modes: Mean pooling, max pooling, usage of the CLS token for BERT / XLNet.\r\n\r\nFinally, if you like, you can add feed-forward networks to create a deep-averaging network.\r\n\r\nIf you have training data, I can recommend this combination:\r\nBERT + mean-pooling\r\n\r\nThis gave the best performance for many cases. \r\n\r\nIf you have training data, you need a low computation time and performance is not that important, choose this combination:\r\nGloVe embeddings (or something similar) + mean-pooling + 1 or 2 dense layers\r\n\r\nIf you don't have training data, choose:\r\nGloVe embeddings (or something similar) + mean-pooling \r\n\r\nAs you can see, there are various options you can choose from, depending if you have training data and how important is a high speed vs. a good performance.\r\n\r\nOnce I have RoBERTa integrated, how suitable it is for the generation of sentence embeddings. My experiences with XLNet was that the performance is slightly below the performance of BERT for sentence embeddings. Maybe RoBERTa is better for sentence embeddings, maybe not.\r\n\r\nAveraging BERT without fine-tuning on data gave really poor results. However, what you can of course try, is to use one of the existent pretrained BERT models like 'bert-base-nli-mean-tokens', which is BERT+mean-pooling, fine-tuned on NLI data to generate meaningful sentence embeddings.\r\n",
"@nreimers Thank you very much! Why didn't you choose (word2vec) Google news vectors? Is there any particular reason for choosing Glove embedding over word2vec? I'm curious to know how RoBERTa will perform! 😃",
"@Raghavendra15 \r\nThere are two reasons:\r\n1) Google news word2vec is quite large, it requires about 12 GB of RAM to read it in. Not that ideal for an example script. GloVe embeddings are about 10 times smaller.\r\n2) In most of my experiments, the Google news word2vec vectors did not yield good performances. GloVe embeddings were often a bit better. I especially like the embeddings by Levy et al (trained on dependencies) and by Komninos. I also conducted a larger comparison between word embeddings (https://arxiv.org/abs/1707.06799, Table 5).\r\n\r\nBut note, using the Google news word2vec vectors is quite easy. In training_stsbenchmark_avg_word_embeddings.py replace\r\n```\r\nword_embedding_model = models.WordEmbeddings.from_text_file('glove.6B.300d.txt.gz')\r\n```\r\nwith\r\n```\r\nword_embedding_model = models.WordEmbeddings.from_text_file('GoogleNews-vectors-negative300.txt.gz')\r\n```\r\n\r\nFirst experiments with RoBERTa are done: On STSbenchmark, it increases the Spearman correlation by about 1 - 2 percentage points. I will see how it will perform on other datasets.\r\n\r\nBest, Nils Reimers",
"This issue is very interesting, thanks for sharing your experiments and framework @nreimers!",
"@nreimers I read your paper on word embedding comparison, however, when I saw the GLEU scoreboard for STS benchmark Glove scored very less compared to word2vec, Isn't it contradictory to your paper? Also in your paper, the comparisons are on a certain set of tasks like Entity Recognition, NER but not on Semantic Textual Similarity. I don't know much about it, I'm trying to learn. Do my questions make sense?\r\n \r\nIs there any significant difference between using glove.840B.300d.zip (contains 840 billion words vectors trained on the common crawl ) vs glove.6B.300d.txt.gz (contains 6 billion words vectors wikipedia+Gigaword), Is it like more words the better? also, they're trained on different datasets, will that make a huge difference when applied to news similarity?",
"See the GloVe website / paper for the differences. 6B was trained on 6 billion words from Wikipedia, 840B was trained on 840 Billion words from common crawl.\r\n\r\nIt depends on the task and data which one is more suitable. If you have a lot of rare words, and those play an important role for your task, 840B is often better. If you have clean data / only common words are important for your task, 6B often works better. \r\n\r\nHowever, the differences are often only minor between the two versions. \r\n\r\nIn my paper I only compare embeddings for supervised task, only for sequence tagging. \r\n\r\nIn unsupervised tasks, you can get completely different results. Further, how word embeddings are averaged has a big impact. Some authors don't ignore stop words, instead they propose some complicated weighting scheme. If stop words are ignored, performances can be improved up tp 10 percentage points, sometimes outperforming complex weighting approaches. \r\n\r\nBest, \r\nNils Reimers ",
"Thank you for your work, Nils, it is brillant! \r\n\r\nI would like to design a sentence level semantic search engine using email data (Enron dataset). \r\n\r\nI am still a little bit confused about how I should be fine-tuning models on such dataset (maybe I am missing something obvious).\r\n\r\nThanks.\r\n\r\nGogan\r\n",
"@ggndtes In general BM25 will be really hard to beat on this type of task. See this paper where they compare sentence embeddings with BM25 on an end-to-end retrieval task (given: question, find similar / duplicate questions in a large corpus):\r\nhttps://arxiv.org/pdf/1811.08008.pdf\r\n\r\nA complex sentence embedding method only achieves 1 - 2 percentage points improvement against BM25 (Table 2, Dual Encoder Paralex vs. Okapi BM 25).\r\n\r\nEspecially if you have more than just a sentence, carefully constructed BM25 for example with Elasticsearch is really really hard to beat. If you are interested in a production system, I would highly recommend to first try Elasticsearch (or similar), beating it will be difficult.\r\n\r\nBack to your question how you can tune it:\r\nThe big question narrows down to: What are your queries, what are your documents. Are your documents complete emails? Or only email subjects? Or only sentences within emails?\r\n\r\nAre your queries inputs from the user, email subjects or complete emails?\r\n\r\nIn general you would need to construct same sort of similarity. Currently I can only think of imperfect method to create similarity labels. One option would be: Triplet loss with 2 emails from the same inbox vs. one random other subject. But this would I think create rather bad embeddings.\r\n\r\nCurrently I can't think of a good method to create similarity labels for that dataset. And as mention, even with perfect labels, it will be really hard to beat BM25. \r\n\r\nBest,\r\n-Nils Reimers\r\n \r\n\r\n\r\n",
"@nreimers The sentence encoder actually takes quite a lot of time to load the Glove embeddings, Is there a way where I can make it load from the disk or make it faster?",
"@Raghavendra15 When you run the code the first time, the embeddings are downloaded and stored in the path of the script. In follow-up executions, the embeddings file is loaded from disk.\r\n\r\nGloVe embeddings are quite large, so loading it can take some time.\r\n\r\nThere are two ways to speed it up:\r\n1) Limit the vocab size, i.e., don't load all the ~400k embeddings. Pass the parameter 'max_vocab_size' to the method 'from_text_file' when called.\r\n2) Save the WordEmbeddings model to disc. In follow-up executions, you can load the (binary) model directly from disc and you don't have to read in and parse in the text file.\r\n\r\nShould work something like this:\r\n```\r\nword_model = WordEmbeddings.from_text_file('my-glove-file.txt')\r\nword_model.save('my/output/folder/GloveWordModel')\r\n\r\n# In follow-up calls, should be faster\r\nword_model = WordEmbeddings.load('my/output/folder/GloveWordModel')\r\n```",
"@nreimers Wow!! It works blazingly fast! \r\nI was trying to play with the below code. Thank you very much for the help :) \r\nCode in In WordEmbeddings.py file:\r\n```\r\n with gzip.open(embeddings_file_path, \"rt\", encoding=\"utf8\") if embeddings_file_path.endswith('.gz') else open(embeddings_file_path, encoding=\"utf8\") as fIn:\r\n iterator = tqdm(fIn, desc=\"Load Word Embeddings\", unit=\"Embeddings\")\r\n for line in iterator:\r\n\r\n```\r\nAlso, can I load the model similar to that for BERT pre-trained weights? such as the below code?\r\n\r\n`embedder = SentenceTransformer('bert-large-nli-stsb-mean-tokens')`\r\nCan I load the above pre-trained weights somehow just like you have `load` method for glove weights?\r\n\r\nIs the avg embedding with Glove better than \"bert-large-nli-stsb-mean-tokens\" the BERT pre-trained model you have loaded in the repository? How's RoBERTa doing? Your work is amazing! Thank you so much again! \r\n",
"@Raghavendra15 Sure you can:\r\n```\r\nword_embedding_model = models.WordEmbeddings.from_text_file('glove.6B.300d.txt.gz')\r\n\r\n# Apply mean pooling to get one fixed sized sentence vector\r\npooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),\r\n pooling_mode_mean_tokens=True,\r\n pooling_mode_cls_token=False,\r\n pooling_mode_max_tokens=False)\r\n\r\n\r\nmodel = SentenceTransformer(modules=[word_embedding_model, pooling_model])\r\nmodel.save('my/output/folder/avg-glove-embeddings')\r\n\r\n# Load the Model:\r\nmodel = SentenceTransformer('my/output/folder/avg-glove-embeddings')\r\n```\r\n\r\nWhich model is better depends extremely on your data and on your task. The BERT models work good if you have clean data, which is not too domain specific and rather descriptive. This is due to the nature on which data it was fine-tuned (on NLI dataset).\r\n\r\nAverage GloVe embeddings works I think better if you have noisy data, really domain specific data or very short sentences or very large paragraphs. \r\n\r\nExperiments with RoBERTa are finished. Paper will be uploaded next week to arxiv. In my experiments, I could not observe a major difference between BERT and RoBERTa for sentence embeddings: Sometimes BERT is a little bit better, sometimes RoBERTa. But nothing that is significant. XLNet was so far in general worse than BERT.\r\n\r\nBest\r\n-Nils Reimers"
] | 1,563 | 1,698 | 1,619 | NONE | null | I have used BERT's NextSentencePredictor to find similar sentences or similar news; however, it's super slow, even on a Tesla V100, currently the fastest GPU available. It takes around 10 seconds for one query title against a corpus of about 3,000 articles. Is there a better way to use BERT for finding similar sentences or similar news, given a corpus of news articles? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/876/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/875/comments | https://api.github.com/repos/huggingface/transformers/issues/875/events | https://github.com/huggingface/transformers/issues/875 | 471,850,052 | MDU6SXNzdWU0NzE4NTAwNTI= | 875 | XLNet bidirectional input pipeline requires batch size at least 2 | {
"login": "langfield",
"id": 35980963,
"node_id": "MDQ6VXNlcjM1OTgwOTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35980963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/langfield",
"html_url": "https://github.com/langfield",
"followers_url": "https://api.github.com/users/langfield/followers",
"following_url": "https://api.github.com/users/langfield/following{/other_user}",
"gists_url": "https://api.github.com/users/langfield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/langfield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langfield/subscriptions",
"organizations_url": "https://api.github.com/users/langfield/orgs",
"repos_url": "https://api.github.com/users/langfield/repos",
"events_url": "https://api.github.com/users/langfield/events{/privacy}",
"received_events_url": "https://api.github.com/users/langfield/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | This may not be a true bug since it's mentioned in the paper that
> each of the forward and backward directions takes half of the batch size
but when using the bidirectional input pipeline, any call to `XLNetModel.forward()` will raise an error of the form
```
RuntimeError: shape '[x, y, z]' is invalid for input of size 0
```
if the **batch size of the `input_ids` passed is less than 2**. This is because it halves (integer div) `bsz` in accordance with the above quote in the following block:
```
if self.bi_data:
    fwd_pos_seq = torch.arange(beg, end, -1.0, dtype=torch.float)
    bwd_pos_seq = torch.arange(-beg, -end, 1.0, dtype=torch.float)
    if self.clamp_len > 0:
        fwd_pos_seq = fwd_pos_seq.clamp(-self.clamp_len, self.clamp_len)
        bwd_pos_seq = bwd_pos_seq.clamp(-self.clamp_len, self.clamp_len)
    if bsz is not None:
        fwd_pos_emb = self.positional_embedding(fwd_pos_seq, inv_freq, bsz//2)
        bwd_pos_emb = self.positional_embedding(bwd_pos_seq, inv_freq, bsz//2)
```
The result is a batch size of `0`, which obviously wreaks havoc later on. It's only relevant if people are trying to run an MWE with very small inputs, as I was for testing, but an assert statement near the top of the `XLNetModel.forward()` function is probably a good idea, conditional on `bi_data` being `True`.
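A sketch of such a guard (illustrative; not the library's actual code):

```python
# Near the top of XLNetModel.forward(), once the batch size is known
if self.bi_data and bsz is not None:
    assert bsz % 2 == 0 and bsz >= 2, (
        "bi_data=True requires an even batch size of at least 2, got %d" % bsz)
```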
More generally, a shape mismatch is caused for the same reason if `bi_data` is `True` and `bsz` is any positive odd integer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/875/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/874/comments | https://api.github.com/repos/huggingface/transformers/issues/874/events | https://github.com/huggingface/transformers/issues/874 | 471,835,517 | MDU6SXNzdWU0NzE4MzU1MTc= | 874 | Fine-tuning model and Generation | {
"login": "antmarakis",
"id": 17463361,
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antmarakis",
"html_url": "https://github.com/antmarakis",
"followers_url": "https://api.github.com/users/antmarakis/followers",
"following_url": "https://api.github.com/users/antmarakis/following{/other_user}",
"gists_url": "https://api.github.com/users/antmarakis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antmarakis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antmarakis/subscriptions",
"organizations_url": "https://api.github.com/users/antmarakis/orgs",
"repos_url": "https://api.github.com/users/antmarakis/repos",
"events_url": "https://api.github.com/users/antmarakis/events{/privacy}",
"received_events_url": "https://api.github.com/users/antmarakis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Having the same question - how to use bert for generation? ",
"the same problem, how to train my own data for text generation?",
"We'll add an example for fine-tuning this month.",
"@thomwolf as I read in other issues, BERT model cannot be used to generate text directly (your reply https://github.com/huggingface/pytorch-transformers/issues/401#issuecomment-477111518).\r\nWhat exact examples are you planning to add? Thanks.",
"@Bruno-bai did you figure out how to train own data?",
"Not really. Would appreciate a tutorial:)\n\nOn Mon, Aug 19, 2019 at 5:22 AM Vedang Mandhana <[email protected]>\nwrote:\n\n> @Bruno-bai <https://github.com/Bruno-bai> did you figure out how to train\n> own data?\n>\n> —\n> You are receiving this because you commented.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-transformers/issues/874?email_source=notifications&email_token=AHZ2KL5Q2RIKUTGARMX6EY3QFINYTA5CNFSM4IGHELH2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD4RUR4I#issuecomment-522406129>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AHZ2KL3FS6CDZQN75KNXVPDQFINYTANCNFSM4IGHELHQ>\n> .\n>\n",
"this would be very useful example to have. to finetune gpt2, xlnet, ... and run generation from the finetuned model. Don't know whether bert supports generation or not, but the ones that do..",
"I am too struggling with similar problem. I want to train a non-english (hindi) language model on my custom dataset and use it for text generation. From what I understood, BERT sucks at text generation as it uses MLM for training. The ones that do well (gpt,trans-xl,xlnet) don't have a pretrained multilingual model available.\r\n\r\n@Bruno-bai @sakalouski are you looking for training own data for language generation? Coz I have done it for classification and can help with that.",
"Hi @thomwolf \r\n\r\n> We'll add an example for fine-tuning this month.\r\n\r\nHas this example been added yet?\r\n\r\nThanks",
"Hi @amin-nejad, the example has been added and is available [here](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_lm_finetuning.py).",
"Thanks @LysandreJik. Will this also work with Transformer-XL if we just modify the source code to include the Transformer-XL Config, LMHeadModel and Tokenizer as a model class? Or will it require more substantial changes?",
"Using `run_lm_finetuning.py` seemingly works for Transformer-XL if we additionally import the Transformer-XL Config, LMHeadModel and Tokenizer and modify the `MODEL_CLASSES` to include them. We also need to provide the `block_size` as a command line parameter. Training curves look reasonable and decoding also happens without errors using `run_generation.py` but the model output is pretty much always just a bunch of equals signs e.g. `= = = = = = = = =` etc. for me at least anyway. Clearly more substantial changes are required to `run_lm_finetuning.py` to make it work. If anyone knows what/why, please let me know",
"One thing we should do (maybe when we have some bandwidth for that with @LysandreJik) is to push back a PR to PyTorch repo to add an option to have biases on all clusters of PyTorch's Adaptive Softmax so we can rely on the official Adaptive Softmax implementation instead of having our own.\r\n\r\nThat would make the job of maintaining and upgrading Transformer-XL a lot easier as it's currently the most cumbersome code base to maintain.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,616 | 1,573 | CONTRIBUTOR | null | Hello!
I am a beginner and I just wanted to run some experiments, but I've hit a roadblock. I am trying to generate text using `run_generation.py` after fine-tuning a model on my data using `simple_lm_finetuning.py`. I've looked around a bit, and I'm not sure how to go about this, or whether it is possible at all. I don't see an option for `run_generation` to use BERT models, and I'm not sure how to bridge the two scripts.
Basically, what I want to do is fine-tune a model on my data and then generate text. Can this be done with `run_generation` and `simple_lm_finetuning`?
Thank you!
---
EDIT:
Forgot to add my code:
```
python pytorch-transformers/examples/lm_finetuning/simple_lm_finetuning.py \
--train_corpus data.txt \
--bert_model bert-base-uncased \
--do_lower_case \
--output_dir finetuned_lm/ \
--do_train
python pytorch-transformers/examples/run_generation.py \
--model_type=transfo-xl \
--length=20 \
--model_name_or_path='finetuned_lm'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/874/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/873/comments | https://api.github.com/repos/huggingface/transformers/issues/873/events | https://github.com/huggingface/transformers/pull/873 | 471,779,098 | MDExOlB1bGxSZXF1ZXN0MzAwMzYzMjcw | 873 | Add nn.Identity replacement for old PyTorch | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,563 | 1,578 | 1,563 | MEMBER | null | Fix #869 to keep at least PyTorch 1.0.0 compatibility. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/873/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/873",
"html_url": "https://github.com/huggingface/transformers/pull/873",
"diff_url": "https://github.com/huggingface/transformers/pull/873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/873.patch",
"merged_at": 1563898596000
} |
https://api.github.com/repos/huggingface/transformers/issues/872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/872/comments | https://api.github.com/repos/huggingface/transformers/issues/872/events | https://github.com/huggingface/transformers/pull/872 | 471,714,832 | MDExOlB1bGxSZXF1ZXN0MzAwMzEwOTc5 | 872 | Updating schedules for state_dict saving/loading | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872?src=pr&el=h1) Report\n> Merging [#872](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/268c6cc160ba046d6a91747c5f281f82bd88a4d8?src=pr&el=desc) will **increase** coverage by `0.12%`.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #872 +/- ##\n==========================================\n+ Coverage 78.9% 79.03% +0.12% \n==========================================\n Files 34 34 \n Lines 6192 6228 +36 \n==========================================\n+ Hits 4886 4922 +36 \n Misses 1306 1306\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tests/optimization\\_test.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdGVzdHMvb3B0aW1pemF0aW9uX3Rlc3QucHk=) | `98.97% <100%> (+0.4%)` | :arrow_up: |\n| [pytorch\\_transformers/optimization.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvb3B0aW1pemF0aW9uLnB5) | `96.62% <100%> (+0.33%)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872?src=pr&el=footer). Last update [268c6cc...0740e63](https://codecov.io/gh/huggingface/pytorch-transformers/pull/872?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,563 | 1,578 | 1,563 | MEMBER | null | This PR updates the schedules so that they can be saved/reloaded using the standard `state_dict()` and `load_state_dict()` methods of PyTorch's [`LambdaLR` scheduler](https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.LambdaLR.load_state_dict).
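For example, checkpointing and resuming could then look like this (a minimal sketch; `WarmupLinearSchedule` stands in for any of the schedules, and `model` is assumed to exist):
```python
import torch
from pytorch_transformers import AdamW, WarmupLinearSchedule

optimizer = AdamW(model.parameters(), lr=2e-5)  # `model` assumed to exist
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=100, t_total=1000)

# ... train for a while, calling scheduler.step() each optimization step ...

# Checkpoint the schedule alongside the optimizer
torch.save({'optimizer': optimizer.state_dict(),
            'scheduler': scheduler.state_dict()}, 'checkpoint.pt')

# Later, to resume:
state = torch.load('checkpoint.pt')
optimizer.load_state_dict(state['optimizer'])
scheduler.load_state_dict(state['scheduler'])
```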
Useful for resuming interrupted training, as mentioned in #839. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/872",
"html_url": "https://github.com/huggingface/transformers/pull/872",
"diff_url": "https://github.com/huggingface/transformers/pull/872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/872.patch",
"merged_at": 1563890587000
} |
https://api.github.com/repos/huggingface/transformers/issues/871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/871/comments | https://api.github.com/repos/huggingface/transformers/issues/871/events | https://github.com/huggingface/transformers/issues/871 | 471,652,639 | MDU6SXNzdWU0NzE2NTI2Mzk= | 871 | fp16 is not work | {
"login": "zsk423200",
"id": 18025765,
"node_id": "MDQ6VXNlcjE4MDI1NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18025765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsk423200",
"html_url": "https://github.com/zsk423200",
"followers_url": "https://api.github.com/users/zsk423200/followers",
"following_url": "https://api.github.com/users/zsk423200/following{/other_user}",
"gists_url": "https://api.github.com/users/zsk423200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsk423200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsk423200/subscriptions",
"organizations_url": "https://api.github.com/users/zsk423200/orgs",
"repos_url": "https://api.github.com/users/zsk423200/repos",
"events_url": "https://api.github.com/users/zsk423200/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsk423200/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Duplicate of #868"
] | 1,563 | 1,563 | 1,563 | NONE | null | GPU: V100
I ran run_glue.py with the command from the README:
```
python ./examples/run_glue.py \
--model_type bert \
--model_name_or_path bert-base-uncased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--do_lower_case \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_eval_batch_size=8 \
--per_gpu_train_batch_size=8 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
I also ran it with --fp16, but it takes the same time and GPU memory. Why does fp16 not work? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/871/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/870/comments | https://api.github.com/repos/huggingface/transformers/issues/870/events | https://github.com/huggingface/transformers/issues/870 | 471,572,906 | MDU6SXNzdWU0NzE1NzI5MDY= | 870 | How to load a fine-tuned model pytorch_model.bin produced by run_bert_swag.py | {
"login": "xiami2019",
"id": 37145051,
"node_id": "MDQ6VXNlcjM3MTQ1MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/37145051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiami2019",
"html_url": "https://github.com/xiami2019",
"followers_url": "https://api.github.com/users/xiami2019/followers",
"following_url": "https://api.github.com/users/xiami2019/following{/other_user}",
"gists_url": "https://api.github.com/users/xiami2019/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xiami2019/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xiami2019/subscriptions",
"organizations_url": "https://api.github.com/users/xiami2019/orgs",
"repos_url": "https://api.github.com/users/xiami2019/repos",
"events_url": "https://api.github.com/users/xiami2019/events{/privacy}",
"received_events_url": "https://api.github.com/users/xiami2019/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"That's a strange error, what are the exact process you are using and full error log?",
"> That's a strange error, what are the exact process you are using and full error log?\r\n\r\nHi, thanks for the reply.\r\nI used distributed training in one node with 2GPUs and my command is:\r\nexport SWAG_DIR=SWAG; export export CUDA_VISIBLE_DEVICES=2,3; python -m torch.distributed.launch --nproc_per_node=2 run_bert_swag.py --bert_model bert-base-uncased --do_train --do_lower_case --do_eval --data_dir $SWAG_DIR/data --train_batch_size 16 --learning_rate 2e-5 --num_train_epochs 1.0 --max_seq_length 80 --output_dir /home/disk1/chengqinyuan/pt_transformer_examples/swag_output --gradient_accumulation_steps 1\r\n\r\nI modified run_swag.py with follow lines:\r\n if args.do_train:\r\n # Save a trained model, configuration and tokenizer\r\n model_to_save = model.module if hasattr(model, 'module') else model # Only save the model it-self\r\n\r\n # If we save using the predefined names, we can load using `from_pretrained`\r\n output_model_file = os.path.join(args.output_dir, WEIGHTS_NAME)\r\n output_config_file = os.path.join(args.output_dir, CONFIG_NAME)\r\n\r\n # torch.save(model.state_dict(), output_model_file)\r\n model_to_save.save_pretrained(args.output_dir)\r\n model_to_save.config.to_json_file(output_config_file)\r\n tokenizer.save_vocabulary(args.output_dir)\r\n\r\n # Load a trained model and vocabulary that you have fine-tuned\r\n model = BertForMultipleChoice.from_pretrained(args.output_dir, num_choices=4)\r\n tokenizer = BertTokenizer.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)\r\n else:\r\n model = BertForMultipleChoice.from_pretrained(args.output_dir, num_choices=4)\r\n tokenizer = BertTokenizer.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)\r\n model.to(device)\r\n\r\nAnd the error log is:\r\nTraceback (most recent call last):\r\n File \"run_bert_swag.py\", line 571, in <module>\r\n main()\r\n File \"run_bert_swag.py\", line 505, in main\r\n model = BertForMultipleChoice.from_pretrained(args.output_dir)\r\n File \"/home/chengqinyuan/anaconda3/envs/py3/lib/python3.6/site-packages/pytorch_transformers-1.0.0-py3.6.egg/pytorch_transformers/modeling_utils.py\", line 406, in from_pretrained\r\n File \"/home/chengqinyuan/anaconda3/envs/py3/lib/python3.6/site-packages/torch/serialization.py\", line 387, in load\r\n return _load(f, map_location, pickle_module, **pickle_load_args)\r\n File \"/home/chengqinyuan/anaconda3/envs/py3/lib/python3.6/site-packages/torch/serialization.py\", line 581, in _load\r\n deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)\r\nRuntimeError: storage has wrong size: expected -4807048246308659860 got 589824\r\n\r\nBy the way, when I used the script to save a pre_trained model without fine-tune(I skipped the training process and then saved the model), it can normally load the saved model and do evaluation. But the error would occur when I loaded a fine-tuned model.\r\n\r\nOr could you please give me a brief guideline to execute run_swag.py with pytorch_transformers. I followed the guideline at https://huggingface.co/pytorch-transformers/examples.html and encounter many bugs. Thank you very much!",
"I solved this error by use multi-GPU training instead of distributed training, it seems like something wrong in distributed training setting, thanks for your reply : )",
"> I solved this error by use multi-GPU training instead of distributed training, it seems like something wrong in distributed training setting, thanks for your reply : )\r\n\r\n我在用单机多卡训练后保存模型也遇到了这个问题,请问你是怎么解决的,用DataParallel还是DistributedDataParallel?\r\n"
] | 1,563 | 1,584 | 1,563 | NONE | null | Hi, guys! I have a little question about how to load a fine-tuned model 'pytorch_model.bin' produced by run_bert_swag.py.
When I load the fine-tuned pytorch_model.bin with the `.from_pretrained` method, a runtime error occurs as follows:
RuntimeError: storage has wrong size: expected 4357671300540823961 got 589824.
I fine-tuned a bert-base-uncased model.
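For reference, a save/reload round-trip along these lines (a sketch mirroring the script's usage; the output path is a placeholder):
```python
from pytorch_transformers import BertForMultipleChoice, BertTokenizer

# After training: unwrap any DataParallel/DistributedDataParallel first
model_to_save = model.module if hasattr(model, 'module') else model
model_to_save.save_pretrained('./swag_output')
tokenizer.save_vocabulary('./swag_output')

# Reload for evaluation (num_choices=4 as used in the run_swag script)
model = BertForMultipleChoice.from_pretrained('./swag_output', num_choices=4)
tokenizer = BertTokenizer.from_pretrained('./swag_output', do_lower_case=True)
```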
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/870/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/869/comments | https://api.github.com/repos/huggingface/transformers/issues/869/events | https://github.com/huggingface/transformers/issues/869 | 471,551,615 | MDU6SXNzdWU0NzE1NTE2MTU= | 869 | module 'torch.nn' has no attribute 'Identity' | {
"login": "ahlawatankit",
"id": 24656446,
"node_id": "MDQ6VXNlcjI0NjU2NDQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/24656446?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ahlawatankit",
"html_url": "https://github.com/ahlawatankit",
"followers_url": "https://api.github.com/users/ahlawatankit/followers",
"following_url": "https://api.github.com/users/ahlawatankit/following{/other_user}",
"gists_url": "https://api.github.com/users/ahlawatankit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ahlawatankit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahlawatankit/subscriptions",
"organizations_url": "https://api.github.com/users/ahlawatankit/orgs",
"repos_url": "https://api.github.com/users/ahlawatankit/repos",
"events_url": "https://api.github.com/users/ahlawatankit/events{/privacy}",
"received_events_url": "https://api.github.com/users/ahlawatankit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This was added in PyTorch 1.1.0 (see [changelog here](https://github.com/pytorch/pytorch/tree/v1.1.0) :)\r\n\r\nSo I guess you just have to update your PyTorch version!",
"Oh yes, I guess we can add a replacement to keep older PyTorch compatibility.\r\nWould be sad to lose backward compatibility just for this.",
"Is this replacement added in any of the newer versions?"
] | 1,563 | 1,605 | 1,563 | NONE | null | Traceback (most recent call last):
File "trainer.py", line 17, in <module>
model = XLMForSequenceClassification(config)
File "/home/ankit/anaconda3/lib/python3.6/site-packages/pytorch_transformers/modeling_xlm.py", line 823, in __init__
self.sequence_summary = SequenceSummary(config)
File "/home/ankit/anaconda3/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py", line 734, in __init__
self.summary = nn.Identity()
AttributeError: module 'torch.nn' has no attribute 'Identity'
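For older PyTorch versions a drop-in replacement is straightforward (a sketch of the obvious workaround, not necessarily the exact fix that landed in #873):
```python
import torch.nn as nn

class Identity(nn.Module):
    """Placeholder identity module for torch < 1.1, where nn.Identity
    does not exist; accepts and ignores any constructor arguments."""
    def __init__(self, *args, **kwargs):
        super(Identity, self).__init__()

    def forward(self, x):
        return x
```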
https://github.com/huggingface/pytorch-transformers/blob/2f869dc6651f9cf9253f4c5a43279027a0eccfc5/pytorch_transformers/modeling_utils.py#L734 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/869/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/868/comments | https://api.github.com/repos/huggingface/transformers/issues/868/events | https://github.com/huggingface/transformers/issues/868 | 471,550,517 | MDU6SXNzdWU0NzE1NTA1MTc= | 868 | fp16 is broken | {
"login": "zsk423200",
"id": 18025765,
"node_id": "MDQ6VXNlcjE4MDI1NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/18025765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zsk423200",
"html_url": "https://github.com/zsk423200",
"followers_url": "https://api.github.com/users/zsk423200/followers",
"following_url": "https://api.github.com/users/zsk423200/following{/other_user}",
"gists_url": "https://api.github.com/users/zsk423200/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zsk423200/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zsk423200/subscriptions",
"organizations_url": "https://api.github.com/users/zsk423200/orgs",
"repos_url": "https://api.github.com/users/zsk423200/repos",
"events_url": "https://api.github.com/users/zsk423200/events{/privacy}",
"received_events_url": "https://api.github.com/users/zsk423200/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, thanks! Fixed in master"
] | 1,563 | 1,563 | 1,563 | NONE | null | Running run_glue.py with the --fp16 flag returns this error:
```
RuntimeError: Incoming model is an instance of torch.nn.parallel.DistributedDataParallel. Parallel wrappers should only be applied to the model(s) AFTER
the model(s) have been returned from amp.initialize.
```
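As explained below, the error comes from wrapping the model with DataParallel/DDP before `amp.initialize`; a minimal sketch of the corrected order (assuming apex is installed and `model`, `optimizer`, and `args` are defined as in the script):
```python
import torch
from apex import amp

# Initialize amp on the bare model first...
model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level)

# ...and only then apply any parallel wrapper
if args.local_rank != -1:
    model = torch.nn.parallel.DistributedDataParallel(
        model, device_ids=[args.local_rank], output_device=args.local_rank)
elif args.n_gpu > 1:
    model = torch.nn.DataParallel(model)
```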
I found that the cause is the wrong order of `amp.initialize` and `model = torch.nn.DataParallel`: [if DDP wrapping occurs before amp.initialize, amp.initialize will raise an error](https://github.com/NVIDIA/apex/blob/master/examples/imagenet/README.md). It worked after I changed the order. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/868/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/867/comments | https://api.github.com/repos/huggingface/transformers/issues/867/events | https://github.com/huggingface/transformers/issues/867 | 471,451,123 | MDU6SXNzdWU0NzE0NTExMjM= | 867 | XLnet sentence vector | {
"login": "luv4me",
"id": 38608151,
"node_id": "MDQ6VXNlcjM4NjA4MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/38608151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luv4me",
"html_url": "https://github.com/luv4me",
"followers_url": "https://api.github.com/users/luv4me/followers",
"following_url": "https://api.github.com/users/luv4me/following{/other_user}",
"gists_url": "https://api.github.com/users/luv4me/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luv4me/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luv4me/subscriptions",
"organizations_url": "https://api.github.com/users/luv4me/orgs",
"repos_url": "https://api.github.com/users/luv4me/repos",
"events_url": "https://api.github.com/users/luv4me/events{/privacy}",
"received_events_url": "https://api.github.com/users/luv4me/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can train the model on a downstream task to get a sentence vector related to your task or you can get a sentence vector by averaging or max-pooling the output sequence of token hidden-states.",
"Try doing:\r\n\r\n```Python\r\nmodel = model_class.from_pretrained(pretrained_weights,\r\n output_hidden_states=True,\r\n output_attentions=True)\r\n\r\nsequence_summary = SequenceSummary(model.config)\r\n\r\nes = torch.tensor([tokenizer.encode(s)])\r\n# sentence embedding\r\nt = sequence_summary(model(es)[0])",
"A simple strategy is to take the concatentation of the last hidden state, the mean-pooling, and the max-pooling (tends to be a reasonably good baseline pooling strategy, e.g. [ULMFit](https://arxiv.org/pdf/1801.06146.pdf))",
"@cpcdoy hey, and I have a question why the sentence is not stationary.",
"So, we should pool over all hidden states, and not just use the hidden state corresponding to `[CLS]`?",
"I would go with @rishibommasani's solution for a general \"semantic\" sentence embeddings or fine-tuning and using the `[CLS]` for a task-specific sentence embedding.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,570 | 1,570 | NONE | null | How can I get an XLNet sentence vector with pytorch-transformers? I used the sample code, but I only get word vectors. It is driving me crazy. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/867/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/866/comments | https://api.github.com/repos/huggingface/transformers/issues/866/events | https://github.com/huggingface/transformers/pull/866 | 471,320,188 | MDExOlB1bGxSZXF1ZXN0MzAwMDMxMjQ3 | 866 | Rework how PreTrainedModel.from_pretrained handles its arguments | {
"login": "anlsh",
"id": 2720400,
"node_id": "MDQ6VXNlcjI3MjA0MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2720400?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anlsh",
"html_url": "https://github.com/anlsh",
"followers_url": "https://api.github.com/users/anlsh/followers",
"following_url": "https://api.github.com/users/anlsh/following{/other_user}",
"gists_url": "https://api.github.com/users/anlsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anlsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anlsh/subscriptions",
"organizations_url": "https://api.github.com/users/anlsh/orgs",
"repos_url": "https://api.github.com/users/anlsh/repos",
"events_url": "https://api.github.com/users/anlsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/anlsh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmm, well that's embarrassing. I'll inspect the failing tests some more to see what's up",
"Regarding python 2, yes we want to keep supporting it and thanks for taking care of it.\r\n\r\nGoogle (which is still using python 2) is a major supplier of pretrained model and architectures and having python 2 support in the library make the job of re-implementing the models a lot easier (I can load TF and PT models side-by-side) :)",
"I have updated the readme breaking change section on this (ba52fe6)",
"Thanks for the feedback: In my latest commits I've updated the documentation as requested and renamed the `return_unused_args` parameter to `return_unused_kwargs` to remove any ambiguity.\r\n\r\nI also removed the unused `*args` parameter from `PreTrainedConfig.from_pretrained`, which is the only actual interface/logic change",
"Looks good to me, thanks a lot @xanlsh!"
] | 1,563 | 1,563 | 1,563 | NONE | null | Unification of the `from_pretrained` functions belonging to various modules (GPT2PreTrainedModel, OpenAIGPTPreTrainedModel, BertPreTrainedModel) brought changes to the function's argument handling which don't cause any issues within the repository itself (afaik), but have the potential to break a variety of downstream code (eg. my own).
In the last release of pytorch_transformers ([v0.6.2](https://github.com/huggingface/pytorch-transformers/tree/v0.6.2)), the `from_pretrained` functions took in `*args` and `**kwargs` and passed them directly to the relevant model's constructor (perhaps with some processing along the way). For a typical example, see `from_pretrained`'s signature in `modeling.py` here https://github.com/huggingface/pytorch-transformers/blob/b832d5bb8a6dfc5965015b828e577677eace601e/pytorch_pretrained_bert/modeling.py#L526
and the relevant usage of said arguments (after [some small modifications](https://github.com/huggingface/pytorch-transformers/blob/b832d5bb8a6dfc5965015b828e577677eace601e/pytorch_pretrained_bert/modeling.py#L553-L558)) https://github.com/huggingface/pytorch-transformers/blob/b832d5bb8a6dfc5965015b828e577677eace601e/pytorch_pretrained_bert/modeling.py#L600
In the [latest release](https://github.com/huggingface/pytorch-transformers/tree/v1.0.0), the function's signature remains unchanged but the `*args` and most of the `**kwargs` parameters, in particular pretty much anything not explicitly accessed in [[1]](https://github.com/huggingface/pytorch-transformers/blob/b33a385091de604afb566155ec03329b84c96926/pytorch_transformers/modeling_utils.py#L354-L358)
https://github.com/huggingface/pytorch-transformers/blob/b33a385091de604afb566155ec03329b84c96926/pytorch_transformers/modeling_utils.py#L354-L358
is ignored. If a key of `kwargs` is shared with the relevant model's configuration file then its value is still used to override said key (see the relevant logic [here](https://github.com/huggingface/pytorch-transformers/blob/b33a385091de604afb566155ec03329b84c96926/pytorch_transformers/modeling_utils.py#L138-L148)), but the current architecture breaks, for example, the following pattern which was previously possible.
```
class UsefulSubclass(BertForSequenceClassification):
    def __init__(self, *args, useful_argument, **kwargs):
        super().__init__(*args, **kwargs)
        # ... logic using useful_argument ...
...
bert = UsefulSubclass.from_pretrained(model_name, useful_argument=42)
```
What's more, if these arguments have default values declared in `__init__` then the entire pattern is broken **silently**: because these default values will **never** be overwritten via pretrained instantiation. Thus end users might continue running experiments passing different values of `useful_argument` to `from_pretrained`, unaware that **nothing is actually being changed**
As evidenced by issue #833, I'm not the only one whose code was broken. This commit implements behavior which is a compromise between the old and new behaviors. From [my docstring](https://github.com/xanlsh/pytorch-transformers/blob/764b2d3d2310458b77dc563913313ba0c6d826dd/pytorch_transformers/modeling_utils.py#L347-L351):
```
If config is None, then **kwargs will be passed to the model.
If config is *not* None, then kwargs will be used to
override any keys shared with the default configuration for the
given pretrained_model_name_or_path, and only the unshared
key/value pairs will be passed to the model.
```
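In practice, this restores the pattern above (illustrative only; `UsefulSubclass` and `useful_argument` are the hypothetical names from the earlier snippet):
```python
# config is None here, so the pretrained config is loaded with its defaults
# and the extra keyword argument is forwarded to UsefulSubclass.__init__ again:
bert = UsefulSubclass.from_pretrained('bert-base-uncased', useful_argument=42)
```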
It would actually be ideal to avoid mixing configuration and model parameters entirely (via some sort of `model_args` parameter for example): however this fix has the advantages of
1. Not breaking code written during the `pytorch-pretrained-bert` era
2. Preserving (to the extent possible) the usage of the `from_pretrained.**kwargs` parameter introduced with `pytorch-transformers`
--------------------------------------------------------------------------
I have also included various other (smaller) changes in this pull request:
* ~~Making `PreTrainedModel.__init__` not accept `*args` and `**kwargs` parameters which it has no use for and currently ignores~~ Apparently necessary for the tests to pass :(
* ~~Stop using the "popping from kwargs" antipattern (see [[1]](https://github.com/huggingface/pytorch-transformers/blob/b33a385091de604afb566155ec03329b84c96926/pytorch_transformers/modeling_utils.py#L354-L358)). Keyword arguments with default values achieve the same thing more quickly, and are strictly more informative since linters/autodoc modules can actually make use of them. I've replaced all instances that I could find; if this pattern exists elsewhere it should be removed.~~ Oops: turns out this is a Python 2 compatibility thing. With that said, is there really a need to continue supporting Python 2? Especially with its EOL coming up in just a few months, and especially when it necessitates such ugly code...
* Subsume the fix included in #864 , which would conflict (admittedly in a very minor fashion) with this PR.
* Remove some trailing whitespace which seems to have infiltrated the file | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/866/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/866",
"html_url": "https://github.com/huggingface/transformers/pull/866",
"diff_url": "https://github.com/huggingface/transformers/pull/866.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/866.patch",
"merged_at": 1563897930000
} |
https://api.github.com/repos/huggingface/transformers/issues/865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/865/comments | https://api.github.com/repos/huggingface/transformers/issues/865/events | https://github.com/huggingface/transformers/issues/865 | 471,293,096 | MDU6SXNzdWU0NzEyOTMwOTY= | 865 | Using Fp16 half precision makes Bert prediction slower. | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"gtx 1080. Is there any other way to make the predictions faster?",
"Hi, you need at least a Volta GPU to get benefits from fp16 unfortunately.",
"@thomwolf Does P100 applicable?",
"I don't think so",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,570 | 1,570 | NONE | null | When I use:
```python
model = BertForMaskedLM.from_pretrained('bert-large-cased')
model = model.half()
model.eval()
model.to('cuda')
```
i.e. with fp16 enabled via:
```python
model = model.half()
```
it runs around 50% slower. Why is that?
I'm running on Ubuntu 18.04, CUDA 9, PyTorch 1.1 and Python 3.6.8.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/865/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/864/comments | https://api.github.com/repos/huggingface/transformers/issues/864/events | https://github.com/huggingface/transformers/pull/864 | 471,267,386 | MDExOlB1bGxSZXF1ZXN0MzAwMDAyODY3 | 864 | Fixed PreTrainedModel.from_pretrained(...) not passing cache_dir to PretrainedConfig.from_pretrained(...) | {
"login": "mbugert",
"id": 23331603,
"node_id": "MDQ6VXNlcjIzMzMxNjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/23331603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbugert",
"html_url": "https://github.com/mbugert",
"followers_url": "https://api.github.com/users/mbugert/followers",
"following_url": "https://api.github.com/users/mbugert/following{/other_user}",
"gists_url": "https://api.github.com/users/mbugert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbugert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbugert/subscriptions",
"organizations_url": "https://api.github.com/users/mbugert/orgs",
"repos_url": "https://api.github.com/users/mbugert/repos",
"events_url": "https://api.github.com/users/mbugert/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbugert/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed thanks. We'll subsume this PR with #866 which add a few other stuff.\r\n\r\nI agree with you on the `pop` pattern. We'll move away from this when the first one of these two events happens: (i) google stop open-sourcing interesting new models or (ii) google stop using python 2 internally ;)",
"Okay! :+1: "
] | 1,563 | 1,563 | 1,563 | NONE | null | See #863
It's not a beautiful solution, but neither is the practice of modifying incoming parameters via pop. 🤷♂ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/864/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/864",
"html_url": "https://github.com/huggingface/transformers/pull/864",
"diff_url": "https://github.com/huggingface/transformers/pull/864.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/864.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/863/comments | https://api.github.com/repos/huggingface/transformers/issues/863/events | https://github.com/huggingface/transformers/issues/863 | 471,266,449 | MDU6SXNzdWU0NzEyNjY0NDk= | 863 | PreTrainedModel.from_pretrained(...) doesn't pass cache_dir to PretrainedConfig.from_pretrained(...) | {
"login": "mbugert",
"id": 23331603,
"node_id": "MDQ6VXNlcjIzMzMxNjAz",
"avatar_url": "https://avatars.githubusercontent.com/u/23331603?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mbugert",
"html_url": "https://github.com/mbugert",
"followers_url": "https://api.github.com/users/mbugert/followers",
"following_url": "https://api.github.com/users/mbugert/following{/other_user}",
"gists_url": "https://api.github.com/users/mbugert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mbugert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mbugert/subscriptions",
"organizations_url": "https://api.github.com/users/mbugert/orgs",
"repos_url": "https://api.github.com/users/mbugert/repos",
"events_url": "https://api.github.com/users/mbugert/events{/privacy}",
"received_events_url": "https://api.github.com/users/mbugert/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Fix with #866"
] | 1,563 | 1,563 | 1,563 | NONE | null | The cache_dir key-value parameter does not work as intended in `PreTrainedModel.from_pretrained(...)`. It is popped from the kwargs, then `PretrainedConfig.from_pretrained(...)` is called which expects this parameter in the kwargs, but it's obviously not there anymore. A default location is used as a fallback, but this leads to strange behaviour if this default location doesn't exist or isn't writable (as it was in my case). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/863/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/862/comments | https://api.github.com/repos/huggingface/transformers/issues/862/events | https://github.com/huggingface/transformers/issues/862 | 471,219,824 | MDU6SXNzdWU0NzEyMTk4MjQ= | 862 | Bert encodings | {
"login": "shubhamagarwal92",
"id": 7984532,
"node_id": "MDQ6VXNlcjc5ODQ1MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7984532?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shubhamagarwal92",
"html_url": "https://github.com/shubhamagarwal92",
"followers_url": "https://api.github.com/users/shubhamagarwal92/followers",
"following_url": "https://api.github.com/users/shubhamagarwal92/following{/other_user}",
"gists_url": "https://api.github.com/users/shubhamagarwal92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shubhamagarwal92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shubhamagarwal92/subscriptions",
"organizations_url": "https://api.github.com/users/shubhamagarwal92/orgs",
"repos_url": "https://api.github.com/users/shubhamagarwal92/repos",
"events_url": "https://api.github.com/users/shubhamagarwal92/events{/privacy}",
"received_events_url": "https://api.github.com/users/shubhamagarwal92/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I have borrowed most of the ideas from [utils](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_glue.py#L391) in one of the examples to create a [script to extract embeddings](https://gist.github.com/shubhamagarwal92/37ccb747f7130a35a8e76aa66d60e014). \r\n\r\nHowever, I am still curious if there is any way where we can pass the vocabulary (and tensors) directly instead of passing raw strings?\r\n\r\n",
"Hi, I think your example is nice.\r\n\r\nI'm not sure to understand what you are referring to when you want to \"pass the vocabulary (and tensors) directly instead of passing raw strings\". ",
"Currently, I am trying to get the Bert embeddings in my encoder before I use `nn.Embedding` instead of pre-computing it. \r\n\r\nThus, I have to convert the tensors to raw strings using `vocab` before passing it through the bert model and hence the gist. ",
"Sorry,I want to know why we pass the sentence(Hello, my dog is cute) directly instead of adding some tokens([CLS] Hello, my dog is cute [SEP]) in the @thomwolf .\r\n\r\n```\r\n >>> config = BertConfig.from_pretrained('bert-base-uncased')\r\n >>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n >>> model = BertModel(config)\r\n >>> input_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\")).unsqueeze(0) # Batch size 1\r\n >>> outputs = model(input_ids)\r\n >>> last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple\r\n```",
"If you use the function such as [this](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_glue.py#L391), it appends the special tokens automatically. I guess the example you mentioned needs to append these tokens. \r\n\r\n@thomwolf could verify that for you! ",
"Yeah we'll add the option to automatically add control tokens. It can be useful.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> If you use the function such as [this](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_glue.py#L391), it appends the special tokens automatically. I guess the example you mentioned needs to append these tokens.\r\n> \r\n> @thomwolf could verify that for you!\r\n\r\nYes, I understand one can set \"add_special_tokens=True\" for the same when encoding the document. "
] | 1,563 | 1,572 | 1,570 | CONTRIBUTOR | null | Hi,
Really interesting work!
I want to use BERT embeddings for a downstream task. I have been following the [steps](https://github.com/huggingface/pytorch-transformers#quick-tour) here as:
```
import torch
from pytorch_transformers import BertModel, BertTokenizer
pretrained_weights = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(pretrained_weights)
model = BertModel.from_pretrained(pretrained_weights)
raw_text = ["[CLS] This is first element [SEP] continuing statement",
"[CLS] second element of the list."]
encoding = tokenizer.encode(raw_text)
input_ids = torch.tensor(encoding)
last_hidden_states = model(input_ids)[0] # Models outputs are now tuples
print(last_hidden_states.size())
```
and I get the following error:
```
File "/home/shubham/anaconda3/envs/test/lib/python3.6/site-packages/pytorch_transformers/tokenization_utils.py", line 356, in split_on_tokens
split_text = text.split(tok)
AttributeError: 'list' object has no attribute 'split'
```
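Encoding each string separately does work as a workaround (a rough sketch reusing `raw_text`, `tokenizer`, and `model` from the snippet above; the padding length is an arbitrary choice):
```python
import torch

max_len = 16  # arbitrary choice for this sketch
batch = []
for text in raw_text:
    ids = tokenizer.encode(text)[:max_len]
    ids = ids + [0] * (max_len - len(ids))  # 0 is [PAD] in the BERT vocab
    batch.append(ids)

input_ids = torch.tensor(batch)
last_hidden_states = model(input_ids)[0]
```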
@thomwolf Is there an easy way to pass the list instead of strings or should I use lambda functions (which might be slow)? Can we pass the maximum sequence length as well?
Also, I need some advice related to code structure. If I have pre-existing code with a data loader, should I compute these embeddings there (no fine-tuning possible then), or pass the raw strings and convert them before passing them to the model (the model run gets too slow)?
Is there any way where we can pass the vocabulary (and tensors) directly instead of passing raw strings? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/862/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/861/comments | https://api.github.com/repos/huggingface/transformers/issues/861/events | https://github.com/huggingface/transformers/issues/861 | 471,178,942 | MDU6SXNzdWU0NzExNzg5NDI= | 861 | Deleting models | {
"login": "RuiPChaves",
"id": 33401801,
"node_id": "MDQ6VXNlcjMzNDAxODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/33401801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RuiPChaves",
"html_url": "https://github.com/RuiPChaves",
"followers_url": "https://api.github.com/users/RuiPChaves/followers",
"following_url": "https://api.github.com/users/RuiPChaves/following{/other_user}",
"gists_url": "https://api.github.com/users/RuiPChaves/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RuiPChaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RuiPChaves/subscriptions",
"organizations_url": "https://api.github.com/users/RuiPChaves/orgs",
"repos_url": "https://api.github.com/users/RuiPChaves/repos",
"events_url": "https://api.github.com/users/RuiPChaves/events{/privacy}",
"received_events_url": "https://api.github.com/users/RuiPChaves/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Models are usually located under `~/.cache/torch/pytorch_pretrained_bert` (older version of this library) or `~/.cache/torch/pytorch_transformers` (now) :)",
"Thank you! I found it elsewhere, actually. What worked for me (quaintly) was:\r\n find . -type f -size +1G -print 2>/dev/null\r\n",
"hey, do you guys know where it is stored on windows? thanks",
"In my case, it gets stored in \" /tmp/torch\"",
"In my case I found a variable containing the default cache path.\r\nRun the following in python:\r\n```\r\nfrom transformers import file_utils\r\nprint(file_utils.default_cache_path)\r\n```\r\n\r\nIf it is not there, check your environmental variables.\r\n\r\nIn my current transformers version `PYTORCH_PRETRAINED_BERT_CACHE`, `PYTORCH_TRANSFORMERS_CACHE` and `TRANSFORMERS_CACHE` can overwrite the default cache path."
] | 1,563 | 1,605 | 1,563 | NONE | null | I would like to delete the 'bert-base-uncased' and 'bert-large-uncased' models and the tokenizer from my hard drive (working under Ubuntu 18.04). I assumed that uninstalling pytorch-pretrained-bert would do it, but it did not. Where are these models located?
Thanks!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/861/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/860/comments | https://api.github.com/repos/huggingface/transformers/issues/860/events | https://github.com/huggingface/transformers/pull/860 | 471,076,815 | MDExOlB1bGxSZXF1ZXN0Mjk5ODYzNzQ0 | 860 | read().splitlines() -> readlines() | {
"login": "Yiqing-Zhou",
"id": 40547184,
"node_id": "MDQ6VXNlcjQwNTQ3MTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/40547184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yiqing-Zhou",
"html_url": "https://github.com/Yiqing-Zhou",
"followers_url": "https://api.github.com/users/Yiqing-Zhou/followers",
"following_url": "https://api.github.com/users/Yiqing-Zhou/following{/other_user}",
"gists_url": "https://api.github.com/users/Yiqing-Zhou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yiqing-Zhou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yiqing-Zhou/subscriptions",
"organizations_url": "https://api.github.com/users/Yiqing-Zhou/orgs",
"repos_url": "https://api.github.com/users/Yiqing-Zhou/repos",
"events_url": "https://api.github.com/users/Yiqing-Zhou/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yiqing-Zhou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860?src=pr&el=h1) Report\n> Merging [#860](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/2f869dc6651f9cf9253f4c5a43279027a0eccfc5?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `100%`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #860 +/- ##\n======================================\n Coverage 78.9% 78.9% \n======================================\n Files 34 34 \n Lines 6192 6192 \n======================================\n Hits 4886 4886 \n Misses 1306 1306\n```\n\n\n| [Impacted Files](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860?src=pr&el=tree) | Coverage Δ | |\n|---|---|---|\n| [pytorch\\_transformers/tokenization\\_bert.py](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860/diff?src=pr&el=tree#diff-cHl0b3JjaF90cmFuc2Zvcm1lcnMvdG9rZW5pemF0aW9uX2JlcnQucHk=) | `93.2% <100%> (ø)` | :arrow_up: |\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860?src=pr&el=footer). Last update [2f869dc...bef0c62](https://codecov.io/gh/huggingface/pytorch-transformers/pull/860?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Great, thanks!"
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | splitlines() does not work the way we expect here for bert-base-chinese because there is a '\u2028' (unicode line separator) token in the vocab file. The value of '\u2028'.splitlines() is ['', ''].
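A minimal repro sketch of the difference (the token string below is illustrative, not the actual vocab entry):
```python
import io

text = "foo\u2028bar\n"              # a vocab line whose token contains U+2028
print(text.splitlines())              # ['foo', 'bar'] -> the token gets split in two
print(io.StringIO(text).readlines())  # ['foo\u2028bar\n'] -> readlines() splits on '\n' only
```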
Perhaps we should use readlines() instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/860/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/860/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/860",
"html_url": "https://github.com/huggingface/transformers/pull/860",
"diff_url": "https://github.com/huggingface/transformers/pull/860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/860.patch",
"merged_at": 1563888266000
} |
https://api.github.com/repos/huggingface/transformers/issues/859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/859/comments | https://api.github.com/repos/huggingface/transformers/issues/859/events | https://github.com/huggingface/transformers/issues/859 | 471,065,871 | MDU6SXNzdWU0NzEwNjU4NzE= | 859 | Bug of BertTokenizer | {
"login": "LeeJuly30",
"id": 31088072,
"node_id": "MDQ6VXNlcjMxMDg4MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/31088072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeeJuly30",
"html_url": "https://github.com/LeeJuly30",
"followers_url": "https://api.github.com/users/LeeJuly30/followers",
"following_url": "https://api.github.com/users/LeeJuly30/following{/other_user}",
"gists_url": "https://api.github.com/users/LeeJuly30/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeeJuly30/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeeJuly30/subscriptions",
"organizations_url": "https://api.github.com/users/LeeJuly30/orgs",
"repos_url": "https://api.github.com/users/LeeJuly30/repos",
"events_url": "https://api.github.com/users/LeeJuly30/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeeJuly30/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"If you are loading the chinese model, this is probably related to #860 and #825.\r\nShould be fixed now.",
"thanks!"
] | 1,563 | 1,563 | 1,563 | NONE | null | When loading a tokenizer from a pretrained checkpoint:
```python
tokenizer = BertTokenizer.from_pretrained(vocab_path)
```
the vocab length is:
```python
len(tokenizer.vocab)
21128
```
but the last token of the vocab is:
```python
next(reversed(tokenizer.vocab.items()))
('##😎', 21129)
```
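A quick consistency check makes the off-by-one visible (a sketch; the expected maximum index would be 21127 here):
```python
max_index = max(tokenizer.vocab.values())
print(max_index, len(tokenizer.vocab))  # 21129 vs 21128 in this report
```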
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/859/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/858 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/858/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/858/comments | https://api.github.com/repos/huggingface/transformers/issues/858/events | https://github.com/huggingface/transformers/issues/858 | 471,064,929 | MDU6SXNzdWU0NzEwNjQ5Mjk= | 858 | CLS segment_id for BERT | {
"login": "guillaume-be",
"id": 27071604,
"node_id": "MDQ6VXNlcjI3MDcxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/27071604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guillaume-be",
"html_url": "https://github.com/guillaume-be",
"followers_url": "https://api.github.com/users/guillaume-be/followers",
"following_url": "https://api.github.com/users/guillaume-be/following{/other_user}",
"gists_url": "https://api.github.com/users/guillaume-be/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guillaume-be/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guillaume-be/subscriptions",
"organizations_url": "https://api.github.com/users/guillaume-be/orgs",
"repos_url": "https://api.github.com/users/guillaume-be/repos",
"events_url": "https://api.github.com/users/guillaume-be/events{/privacy}",
"received_events_url": "https://api.github.com/users/guillaume-be/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this one was also mentioned in https://github.com/huggingface/pytorch-transformers/issues/810#issuecomment-512991164.\r\n\r\nIt is fixed now."
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | Hello,
In your example for GLUE you set the segment id of the CLS token to 1 for BERT: https://github.com/huggingface/pytorch-transformers/blob/2f869dc6651f9cf9253f4c5a43279027a0eccfc5/examples/run_glue.py#L259
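For reference, the segment layout in Google's original implementation looks like this (a sketch; the token lists are illustrative):
```python
tokens_a = ["my", "dog", "is", "cute"]
tokens_b = ["he", "likes", "play", "##ing"]
tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)  # CLS gets segment 0
assert len(tokens) == len(segment_ids)
```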
Reading the original reference implementation, it seems that CLS should have segment_id=0. This is also aligned with several comments & docstrings you have around the code. Is this a design choice? What is the impact on general performance? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/858/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/857/comments | https://api.github.com/repos/huggingface/transformers/issues/857/events | https://github.com/huggingface/transformers/issues/857 | 471,003,087 | MDU6SXNzdWU0NzEwMDMwODc= | 857 | XLMForMaskedLM | {
"login": "Shuailong",
"id": 1918038,
"node_id": "MDQ6VXNlcjE5MTgwMzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1918038?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shuailong",
"html_url": "https://github.com/Shuailong",
"followers_url": "https://api.github.com/users/Shuailong/followers",
"following_url": "https://api.github.com/users/Shuailong/following{/other_user}",
"gists_url": "https://api.github.com/users/Shuailong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shuailong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shuailong/subscriptions",
"organizations_url": "https://api.github.com/users/Shuailong/orgs",
"repos_url": "https://api.github.com/users/Shuailong/repos",
"events_url": "https://api.github.com/users/Shuailong/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shuailong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You can do that using `XLNetLMHeadModel` and custom masks as shown in the [`run_generation` example](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_generation.py#L115-L121).\r\n\r\nBut note that XLNet is rather bad on short text input completions as I discussed in https://github.com/huggingface/pytorch-transformers/issues/846#issuecomment-514228565",
"> #846 (comment)\r\n\r\nThanks for your reply! However, I mean the `XLM` version of BERT, instead of XLNet. Is it also convenient to do that for XLM?\r\n\r\nThanks!",
"Oh, right! You can just use `XLMWithLMHeadModel` with an input sequence containing XLM masked token `tokenizer.mask_token` (which is `<special1>` for XLM).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | Hi, I am currently training a BERT model using Facebook's XLM framework. I use the script in this repo to convert the XLM format to the PyTorch format. Is it possible to implement an `XLMForMaskedLM` which is just like `BertForMaskedLM` but uses an XLM-trained BERT instead? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/857/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/856/comments | https://api.github.com/repos/huggingface/transformers/issues/856/events | https://github.com/huggingface/transformers/issues/856 | 470,899,951 | MDU6SXNzdWU0NzA4OTk5NTE= | 856 | manually download models | {
"login": "Arvedek",
"id": 18126379,
"node_id": "MDQ6VXNlcjE4MTI2Mzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/18126379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arvedek",
"html_url": "https://github.com/Arvedek",
"followers_url": "https://api.github.com/users/Arvedek/followers",
"following_url": "https://api.github.com/users/Arvedek/following{/other_user}",
"gists_url": "https://api.github.com/users/Arvedek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arvedek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arvedek/subscriptions",
"organizations_url": "https://api.github.com/users/Arvedek/orgs",
"repos_url": "https://api.github.com/users/Arvedek/repos",
"events_url": "https://api.github.com/users/Arvedek/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arvedek/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If you don't want/cannot to use the built-in download/caching method, you can download both files manually, save them in a directory and rename them respectively `config.json` and `pytorch_model.bin`\r\n\r\nThen you can load the model using `model = BertModel.from_pretrained('path/to/your/directory')`",
"What if I try to run a GPT-2 example from docs Quickstart:\r\n```\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n...\r\nmodel = GPT2LMHeadModel.from_pretrained('gpt2')\r\n```\r\nand get this\r\n```\r\nINFO:pytorch_transformers.file_utils:https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json not found in cache, downloading to C:\\Users\\KHOVRI~1\\AppData\\Local\\Temp\\tmprm150emm\r\nERROR:pytorch_transformers.tokenization_utils:Couldn't reach server to download vocabulary.\r\n```\r\n\r\nWhere should I put vocab file and get other files for GPT-2? I work under corporate proxy, maybe there is a way to write this proxy into the sort of config?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"> 错误:pytorch_transformers.modeling_utils:无法到达位于https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json的服务器,无法下载经过预先训练的模型配置文件。\r\n> 错误:pytorch_transformers.modeling_utils:无法到达位于“ https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin ”的服务器以下载预先训练的权重。\r\n> 错误:pytorch_transformers.tokenization_utils:无法访问服务器以下载词汇。\r\n> \r\n> 如果手动将这两个文件下载到某个路径,如何指向这两个文件?\r\nI also encountered such a problem, the network speed is very slow, can not get off.However, I ran several times, about 10 times, and finally successfully ran without any error\r\n",
"Same question. Thank you.",
"> If you don't want/cannot to use the built-in download/caching method, you can download both files manually, save them in a directory and rename them respectively `config.json` and `pytorch_model.bin`\r\n> \r\n> Then you can load the model using `model = BertModel.from_pretrained('path/to/your/directory')`\r\n\r\nFor posterity, those who get errors because of missing vocab.txt despite doing above, you can get it at `https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt` and also rename it to `vocab.txt` in desired folder. Resolved my errors.",
"Hi swayson,\r\n\r\nmodel = BertModel.from_pretrained('path/to/your/directory')\r\n\r\nwhere we need to add above line of code for loading model?",
"You can find all the models here [https://stackoverflow.com/a/64280935/251674](https://stackoverflow.com/a/64280935/251674)",
"> If you don't want/cannot to use the built-in download/caching method, you can download both files manually, save them in a directory and rename them respectively `config.json` and `pytorch_model.bin`\r\n> \r\n> Then you can load the model using `model = BertModel.from_pretrained('path/to/your/directory')`\r\n\r\nso great!",
"I tried downloading these models and then upload it in Jupyter lab to use in `Styleformer ` package. But the result seems to be broken. It works fine in Google Colab but fails when I try to manually upload and run. \r\n\r\nModels: https://huggingface.co/prithivida/informal_to_formal_styletransfer\r\n https://huggingface.co/prithivida/parrot_adequacy_on_BART",
"In my case, I want to load gpt2 pretrained model locally.\r\n- First I download config.json and pytorch_model.bin from [hugginface model zoo](https://huggingface.co/gpt2/tree/main)\r\nWhen I execute code below:\r\n`gpt2_tok = GPT2Tokenizer.from_pretrained(myfolderpath, do_lower_case=False)`\r\nSome error occurs like:\r\n`TypeError: expected str bytes or os.pathlike object not nonetype`\r\n\r\n- Later I follow the instruction @swayson provides to download vocab.json.\r\nSame error occurs again but it indicates that I may need some kind \"merges\" file:\r\n`with open(merges_file, encoding=\"utf-8\") as merges_handle\r\n...\r\nTypeError: expected str bytes or os.pathlike object not nonetype`\r\n\r\n- So I go back to [hugginface model zoo](https://huggingface.co/gpt2/tree/main) and download merges.txt.\r\nFinally the code is executed successfully and the encoding process also works.\r\n\r\nHope this helps someone who also suffers from the internet connection problem."
] | 1,563 | 1,638 | 1,570 | NONE | null | ERROR:pytorch_transformers.modeling_utils:Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json' to download pretrained model configuration file.
ERROR:pytorch_transformers.modeling_utils:Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin' to download pretrained weights.
ERROR:pytorch_transformers.tokenization_utils:Couldn't reach server to download vocabulary.
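(For reference, the resolution from the comments in this thread, as a sketch: download the two files plus the vocab manually, rename them config.json, pytorch_model.bin, and vocab.txt in one directory, and then load from that path:)
```python
from pytorch_transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained('path/to/your/directory')
tokenizer = BertTokenizer.from_pretrained('path/to/your/directory')
```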
How can I point to these two files if I manually download them to some path? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/856/reactions",
"total_count": 9,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/856/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/855/comments | https://api.github.com/repos/huggingface/transformers/issues/855/events | https://github.com/huggingface/transformers/issues/855 | 470,892,081 | MDU6SXNzdWU0NzA4OTIwODE= | 855 | modeling_xlnet.py lines 798 torch.eisum('i,d->id', pos_seq, inv_freq) | {
"login": "lcy081099",
"id": 15140404,
"node_id": "MDQ6VXNlcjE1MTQwNDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/15140404?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lcy081099",
"html_url": "https://github.com/lcy081099",
"followers_url": "https://api.github.com/users/lcy081099/followers",
"following_url": "https://api.github.com/users/lcy081099/following{/other_user}",
"gists_url": "https://api.github.com/users/lcy081099/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lcy081099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lcy081099/subscriptions",
"organizations_url": "https://api.github.com/users/lcy081099/orgs",
"repos_url": "https://api.github.com/users/lcy081099/repos",
"events_url": "https://api.github.com/users/lcy081099/events{/privacy}",
"received_events_url": "https://api.github.com/users/lcy081099/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I don't understand your question.\r\n\r\nCan you give more details and point to the exact code lines you are referring to?\r\n\r\nYou cannot provide specific position indices to XLNet if that's what you are trying to do. You have to use the built-in relative embeddings.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"For new PyTorch 1.0, the syntax should be `torch.eisum('i,d->id', pos_seq, inv_freq) `",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,575 | 1,575 | NONE | null | Hi, I ran the position embedding code in modeling_xlnet.py, but it does not work. Why not `torch.einsum('i,d->id', [pos_seq, inv_freq])`? I use PyTorch 0.4.1. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/855/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/854/comments | https://api.github.com/repos/huggingface/transformers/issues/854/events | https://github.com/huggingface/transformers/issues/854 | 470,886,856 | MDU6SXNzdWU0NzA4ODY4NTY= | 854 | Get the different result at BertModel | {
"login": "jackhsu2005",
"id": 49636419,
"node_id": "MDQ6VXNlcjQ5NjM2NDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/49636419?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackhsu2005",
"html_url": "https://github.com/jackhsu2005",
"followers_url": "https://api.github.com/users/jackhsu2005/followers",
"following_url": "https://api.github.com/users/jackhsu2005/following{/other_user}",
"gists_url": "https://api.github.com/users/jackhsu2005/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackhsu2005/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackhsu2005/subscriptions",
"organizations_url": "https://api.github.com/users/jackhsu2005/orgs",
"repos_url": "https://api.github.com/users/jackhsu2005/repos",
"events_url": "https://api.github.com/users/jackhsu2005/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackhsu2005/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Have you read in detail the [migration guide](https://github.com/huggingface/pytorch-transformers#migrating-from-pytorch-pretrained-bert-to-pytorch-transformers) of the readme?\r\n\r\nThere is also a new `run_glue` example which is an updated version of the previous `run_classifier` and that you can use as a starting point for designing your own fine-tuning scripts.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | With the old version (pytorch-pretrained-bert):
I used BertModel to fine-tune, and the loss decreased.
But when I fine-tune with the new version of BertModel on the same data, the loss won't decrease.
> optimizer
> I have tried different optimizers: AdamW, BertAdam.
> learning rate
> 0.1 0.01 0.001 ... 0.0000000001
> Batch Size
> 1
> input: I pass **input_ids** and **token_type_ids**
> The old version returned `_, CLS`, and I took `CLS`.
> The new version returns `[seq_len, 1, 768]`, `[seq_len, 768]`; I take `[0,1,:]` or `[0,:]`, but the loss won't decrease.
I don't know what detail I am missing in the new version.
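(A sketch of how the new API's outputs are usually unpacked, in line with the migration guide mentioned in the comments; variable names are illustrative:)
```python
outputs = model(input_ids, token_type_ids=token_type_ids)
sequence_output, pooled_output = outputs[0], outputs[1]
cls_vector = sequence_output[:, 0, :]  # hidden state of [CLS], shape (batch, 768)
```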
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/854/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/854/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/853/comments | https://api.github.com/repos/huggingface/transformers/issues/853/events | https://github.com/huggingface/transformers/issues/853 | 470,881,273 | MDU6SXNzdWU0NzA4ODEyNzM= | 853 | Error loading converted pytorch checkpoint | {
"login": "hguan6",
"id": 19914123,
"node_id": "MDQ6VXNlcjE5OTE0MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19914123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hguan6",
"html_url": "https://github.com/hguan6",
"followers_url": "https://api.github.com/users/hguan6/followers",
"following_url": "https://api.github.com/users/hguan6/following{/other_user}",
"gists_url": "https://api.github.com/users/hguan6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hguan6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hguan6/subscriptions",
"organizations_url": "https://api.github.com/users/hguan6/orgs",
"repos_url": "https://api.github.com/users/hguan6/repos",
"events_url": "https://api.github.com/users/hguan6/events{/privacy}",
"received_events_url": "https://api.github.com/users/hguan6/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Do you have a file named `biobert_v1.1_pubmed/config.json` as mentioned in the error?",
"Oh thanks. There is a config file name \"bert_config.json\" in the directory \"biobert_v1.1_pubmed/\". I changed the file name to \"config.json\" and it works! \r\nI am wondering why the pytorch-pretrained-bert can load the checkpoint. Maybe it reads \"bert_config.json\" and pytorch-transformer reads \"config.json\"?"
] | 1,563 | 1,563 | 1,563 | NONE | null | I am using BioBERT. After converting the TensorFlow checkpoint to a PyTorch checkpoint, I want to load it into a Bert model. I found that the old pytorch-pretrained-bert works perfectly but the new pytorch-transformers fails.
Here is the successful run using pytorch-pretrained-bert:
> from pytorch_pretrained_bert import BertForSequenceClassification
> model = BertForSequenceClassification.from_pretrained("biobert_v1.1_pubmed/", num_labels=1)
> exit()
Here's the failure in pytorch_transformers:
> from pytorch_transformers import BertForSequenceClassification
> model = BertForSequenceClassification.from_pretrained("biobert_v1.1_pubmed/", num_labels=1)
>
> Model name 'biobert_v1.1_pubmed/' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc). We assumed 'biobert_v1.1_pubmed/config.json' was a path or url but couldn't find any file associated to this path or url.
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/home/michael/.local/lib/python3.7/site-packages/pytorch_transformers/modeling_utils.py", line 403, in from_pretrained
> model = cls(config)
> File "/home/michael/.local/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 958, in __init__
> super(BertForSequenceClassification, self).__init__(config)
> File "/home/michael/.local/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py", line 548, in __init__
> super(BertPreTrainedModel, self).__init__(*inputs, **kwargs)
> File "/home/michael/.local/lib/python3.7/site-packages/pytorch_transformers/modeling_utils.py", line 206, in __init__
> self.__class__.__name__, self.__class__.__name__
> ValueError: Parameter config in `BertForSequenceClassification(config)` should be an instance of class `PretrainedConfig`. To create a model from a pretrained model use `model = BertForSequenceClassification.from_pretrained(PRETRAINED_MODEL_NAME)`
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/853/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/852/comments | https://api.github.com/repos/huggingface/transformers/issues/852/events | https://github.com/huggingface/transformers/issues/852 | 470,868,437 | MDU6SXNzdWU0NzA4Njg0Mzc= | 852 | UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It should be fine. There are probably your output losses.",
"@thomwolf Thanks for your reply. Could you explain more about why this happens ? I am still confused though",
"Maybe it is caused by calculating the loss in the model's forward function.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I have the same problem in version 2.2.1, what can it be?",
"In my case, it doesn’t influence the results\n\nOn Mon, Jan 6, 2020 at 5:19 AM calusbr <[email protected]> wrote:\n\n> I have the same problem in version 2.2.1, what can it be?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/issues/852?email_source=notifications&email_token=AKRMVV4J4UT6HCXEEMOYBGDQ4MVV3A5CNFSM4IFUKP3KYY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEIFNJAI#issuecomment-571135105>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AKRMVV37AGO5AXJNPSGUVLTQ4MVV3ANCNFSM4IFUKP3A>\n> .\n>\n",
"In my case, the speed of training with 4 GPU is the same as 1 GPU. How solve the speed issues?",
"👀",
"> In my case, the speed of training with 4 GPU is the same as 1 GPU. How solve the speed issues?\r\n\r\n@sociengineer , sorry for bothering you, may I ask if you have solved this problem as I meet the same one. Thanks!",
"> > In my case, the speed of training with 4 GPU is the same as 1 GPU. How solve the speed issues?\r\n> \r\n> @sociengineer , sorry for bothering you, may I ask if you have solved this problem as I meet the same one. Thanks!\r\n\r\n@XLechter I couldn't resolve the problem 😭"
] | 1,563 | 1,705 | 1,570 | NONE | null | When I fine-tune Bert with simple_lm_finetuning.py, there seems to be an error:
"UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector."
Will it influence the performance of the finetuning process? Thanks in advance for any suggestion. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/852/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/851/comments | https://api.github.com/repos/huggingface/transformers/issues/851/events | https://github.com/huggingface/transformers/issues/851 | 470,859,126 | MDU6SXNzdWU0NzA4NTkxMjY= | 851 | problem when calling resize_token_embeddings | {
"login": "LeeJuly30",
"id": 31088072,
"node_id": "MDQ6VXNlcjMxMDg4MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/31088072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeeJuly30",
"html_url": "https://github.com/LeeJuly30",
"followers_url": "https://api.github.com/users/LeeJuly30/followers",
"following_url": "https://api.github.com/users/LeeJuly30/following{/other_user}",
"gists_url": "https://api.github.com/users/LeeJuly30/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeeJuly30/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeeJuly30/subscriptions",
"organizations_url": "https://api.github.com/users/LeeJuly30/orgs",
"repos_url": "https://api.github.com/users/LeeJuly30/repos",
"events_url": "https://api.github.com/users/LeeJuly30/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeeJuly30/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Which model were you resizing?",
"I'm working on chinese BertForPreTraining model",
"This is strange because Bert's LM head has no bias...\r\n\r\nWould need to have a more complete error message to be able to understand.",
"```python\r\nclass BertLMPredictionHead(nn.Module):\r\n def __init__(self, config):\r\n super(BertLMPredictionHead, self).__init__()\r\n self.transform = BertPredictionHeadTransform(config)\r\n\r\n # The output weights are the same as the input embeddings, but there is\r\n # an output-only bias for each token.\r\n self.decoder = nn.Linear(config.hidden_size,\r\n config.vocab_size,\r\n bias=False)\r\n\r\n self.bias = nn.Parameter(torch.zeros(config.vocab_size))\r\n\r\n def forward(self, hidden_states):\r\n hidden_states = self.transform(hidden_states)\r\n hidden_states = self.decoder(hidden_states) + self.bias\r\n return hidden_states\r\n```\r\n```python\r\nself.bias = nn.Parameter(torch.zeros(config.vocab_size))\r\n```\r\nFor example, we load the pretrained model whose vocab size is 23189, and i add 1000 tokens and call resize_token_embeddings. Since the decoder weight is actually embedding weight, they are reshaped to (24189, hidden_size), but the bias is still 23189.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,570 | 1,570 | NONE | null | When calling resize_token_embeddings, the model actually only modifies its embedding and decoder weight, while the decoder bias is unchanged. So whenever the forward function is called, the following error will be raised.
```python
RuntimeError: The size of tensor a (21215) must match the size of tensor b (21128) at non-singleton dimension 2
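# Sketch of a possible workaround (assumes a BertForPreTraining-style model and a
# known new_vocab_size): resize the stale prediction-head bias to match the resized
# decoder weight, e.g.
#   new_bias = torch.zeros(new_vocab_size)
#   new_bias[:model.cls.predictions.bias.size(0)] = model.cls.predictions.bias.data
#   model.cls.predictions.bias = torch.nn.Parameter(new_bias)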
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/851/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/850/comments | https://api.github.com/repos/huggingface/transformers/issues/850/events | https://github.com/huggingface/transformers/issues/850 | 470,786,969 | MDU6SXNzdWU0NzA3ODY5Njk= | 850 | Confused about the prune heads operation. | {
"login": "sjcfr",
"id": 34537582,
"node_id": "MDQ6VXNlcjM0NTM3NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/34537582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sjcfr",
"html_url": "https://github.com/sjcfr",
"followers_url": "https://api.github.com/users/sjcfr/followers",
"following_url": "https://api.github.com/users/sjcfr/following{/other_user}",
"gists_url": "https://api.github.com/users/sjcfr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sjcfr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sjcfr/subscriptions",
"organizations_url": "https://api.github.com/users/sjcfr/orgs",
"repos_url": "https://api.github.com/users/sjcfr/repos",
"events_url": "https://api.github.com/users/sjcfr/events{/privacy}",
"received_events_url": "https://api.github.com/users/sjcfr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, I'll add a detailed example for this method in the coming weeks (update of the bertology script).\r\n\r\nThis can be used to remove heads in the model following the work of [Michel et al. (Are Sixteen Heads Really Better than One?)](http://arxiv.org/abs/1905.10650) among others.",
"Thanks a lot!",
"Hi @thomwolf, would it be possible to provide an example on how to prune or select some heads for a layer? when i just change the config file by setting \r\nconfig.pruned_heads = {11:[1,2,3]} and use it in initializing the model, it throws an error.\r\n```\r\nsize mismatch for bert.encoder.layer.11.attention.self.query.weight: copying a param with shape torch.Size([768\r\nurrent model is torch.Size([576, 768]). and more. \r\n```\r\nso, the default query,key and vaule are set with 768 dim. \r\nI assume we can not just prune heads and still load the pre-trained model because the word embedding and layer norm was setup up with 768 dim. ",
"meanwhile i came across bertology.py script and realize that we can save a model after pruning. that works fine for me. now, i'm trying to load the saved model, and I get the opposite error. \r\n```\r\nsize mismatch for bert.encoder.layer.11.attention.self.query.weight: copying a param with shape torch.Size([576, 768]) from checkpoint, the sh\r\nape in current model is torch.Size([768, 768]).\r\n```\r\nthe error wouldn't go away after even changing the config file. "
] | 1,563 | 1,580 | 1,563 | NONE | null | In the code there is a 'prune_heads' method on the 'BertAttention' class, which relies on the 'prune_linear_layer' operation. I do not understand the meaning of this operation. A short usage sketch is shown below, followed by the code of 'prune_linear_layer'. Thanks for any help!
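For orientation, head pruning is usually invoked through the model-level API; a usage sketch (the layer and head indices are purely illustrative):
```python
from pytorch_transformers import BertModel

model = BertModel.from_pretrained('bert-base-uncased')
# Drop heads 0 and 2 of layer 5, and head 1 of layer 11; each removed head slices
# one head's worth of rows/columns out of the query/key/value/output projections.
model.prune_heads({5: [0, 2], 11: [1]})
```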
```python
def prune_linear_layer(layer, index, dim=0):
    """ Prune a linear layer (a model parameters) to keep only entries in index.
        Return the pruned layer as a new layer with requires_grad=True.
        Used to remove heads.
    """
    index = index.to(layer.weight.device)
    W = layer.weight.index_select(dim, index).clone().detach()
    if layer.bias is not None:
        if dim == 1:
            b = layer.bias.clone().detach()
        else:
            b = layer.bias[index].clone().detach()
    new_size = list(layer.weight.size())
    new_size[dim] = len(index)
    new_layer = nn.Linear(new_size[1], new_size[0], bias=layer.bias is not None).to(layer.weight.device)
    new_layer.weight.requires_grad = False
    new_layer.weight.copy_(W.contiguous())
    new_layer.weight.requires_grad = True
    if layer.bias is not None:
        new_layer.bias.requires_grad = False
        new_layer.bias.copy_(b.contiguous())
        new_layer.bias.requires_grad = True
    return new_layer
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/850/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/849 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/849/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/849/comments | https://api.github.com/repos/huggingface/transformers/issues/849/events | https://github.com/huggingface/transformers/issues/849 | 470,782,782 | MDU6SXNzdWU0NzA3ODI3ODI= | 849 | can't find utils_glue | {
"login": "YosiMass",
"id": 6850963,
"node_id": "MDQ6VXNlcjY4NTA5NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6850963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YosiMass",
"html_url": "https://github.com/YosiMass",
"followers_url": "https://api.github.com/users/YosiMass/followers",
"following_url": "https://api.github.com/users/YosiMass/following{/other_user}",
"gists_url": "https://api.github.com/users/YosiMass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YosiMass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YosiMass/subscriptions",
"organizations_url": "https://api.github.com/users/YosiMass/orgs",
"repos_url": "https://api.github.com/users/YosiMass/repos",
"events_url": "https://api.github.com/users/YosiMass/events{/privacy}",
"received_events_url": "https://api.github.com/users/YosiMass/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"It's hard to tell what could be the source of the error without describing with some minimum details what you tried to do/run and which version of the code you are running...\r\n\r\n_If_ you are running `run_glue.py` in the examples, make sure that [`utils_glue.py`](https://github.com/huggingface/pytorch-transformers/blob/master/examples/utils_glue.py) is present in the same working directory in which you are executing the script.",
"Thanks David. Turns out that the import in e.g., `run_glue.py` should be \r\n\r\n`from examples.utils_glue import ...`\r\n\r\ninstead of \r\n\r\n`from utils_glue import ...`\r\n\r\nMaybe it's a problem of PyCharm IDE",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | import fails | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/849/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/848 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/848/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/848/comments | https://api.github.com/repos/huggingface/transformers/issues/848/events | https://github.com/huggingface/transformers/issues/848 | 470,781,190 | MDU6SXNzdWU0NzA3ODExOTA= | 848 | adaptive softmax in transformer-xl | {
"login": "lepus2",
"id": 17767202,
"node_id": "MDQ6VXNlcjE3NzY3MjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/17767202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lepus2",
"html_url": "https://github.com/lepus2",
"followers_url": "https://api.github.com/users/lepus2/followers",
"following_url": "https://api.github.com/users/lepus2/following{/other_user}",
"gists_url": "https://api.github.com/users/lepus2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lepus2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lepus2/subscriptions",
"organizations_url": "https://api.github.com/users/lepus2/orgs",
"repos_url": "https://api.github.com/users/lepus2/repos",
"events_url": "https://api.github.com/users/lepus2/events{/privacy}",
"received_events_url": "https://api.github.com/users/lepus2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes. At the moment, this library is designed for loading pretrained models mostly and no one has open-sourced a Transformer-XL pretrained model using something else than adaptive softmax so I have not spent time adding these options.\r\n\r\nHappy to welcome PR though.\r\n\r\nThe main thing of interest here, if you want to give it a try would be to add bias to all clusters in PyTorch official [`AdaptiveLogSoftmaxWithLoss`](https://pytorch.org/docs/stable/nn.html?highlight=adaptivelogsoftmaxwithloss#torch.nn.AdaptiveLogSoftmaxWithLoss) module so we could just use the official implementation without maintaining ours.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | I think the adaptive softmax support in [modeling_transfo_xl.py](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_transfo_xl.py) is incomplete.
Actually, it is impossible to build a model that does not use adaptive softmax, even though `TransfoXLConfig` has an `adaptive` parameter.
I can see that if `sample_softmax` is larger than -1, the model uses sampled softmax instead of adaptive softmax, which seems to be the intended way of not using adaptive softmax.
However, in the case of `sample_softmax` > -1 and `tie_weight=True`, there is a problem on [this line](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_transfo_xl.py#L1317), `self.out_layer.weight = self.transformer.word_emb.weight`, because the model always uses `AdaptiveEmbedding` as `word_emb`, which has no `weight` property.
Presumably we need a code path that skips adaptive softmax and instead uses a standard softmax with a plain `nn.Embedding` as the word embedding.
Then we could tie the weights between the standard `nn.Embedding` and the `nn.Linear` output layer when we don't need adaptive softmax.
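A sketch of the standard, non-adaptive tying being asked for (illustrative only, not the library's current code path):
```python
import torch.nn as nn

vocab_size, d_model = 10000, 512      # illustrative sizes
word_emb = nn.Embedding(vocab_size, d_model)
out_layer = nn.Linear(d_model, vocab_size, bias=False)
out_layer.weight = word_emb.weight    # works because both weights are (vocab_size, d_model)
```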
Could you look into this problem? Thanks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/848/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/847 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/847/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/847/comments | https://api.github.com/repos/huggingface/transformers/issues/847/events | https://github.com/huggingface/transformers/pull/847 | 470,777,251 | MDExOlB1bGxSZXF1ZXN0Mjk5NjMzMjkw | 847 | typos | {
"login": "lpq29743",
"id": 12952648,
"node_id": "MDQ6VXNlcjEyOTUyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/12952648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lpq29743",
"html_url": "https://github.com/lpq29743",
"followers_url": "https://api.github.com/users/lpq29743/followers",
"following_url": "https://api.github.com/users/lpq29743/following{/other_user}",
"gists_url": "https://api.github.com/users/lpq29743/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lpq29743/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lpq29743/subscriptions",
"organizations_url": "https://api.github.com/users/lpq29743/orgs",
"repos_url": "https://api.github.com/users/lpq29743/repos",
"events_url": "https://api.github.com/users/lpq29743/events{/privacy}",
"received_events_url": "https://api.github.com/users/lpq29743/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/847?src=pr&el=h1) Report\n> Merging [#847](https://codecov.io/gh/huggingface/pytorch-transformers/pull/847?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/a615499076a67dceb8907ecdf8eadaff04bb8d6a?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/847?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #847 +/- ##\n======================================\n Coverage 78.9% 78.9% \n======================================\n Files 34 34 \n Lines 6192 6192 \n======================================\n Hits 4886 4886 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/847?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/847?src=pr&el=footer). Last update [a615499...76be189](https://codecov.io/gh/huggingface/pytorch-transformers/pull/847?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Indeed!"
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | "ouputs" -> "outputs" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/847/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/847",
"html_url": "https://github.com/huggingface/transformers/pull/847",
"diff_url": "https://github.com/huggingface/transformers/pull/847.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/847.patch",
"merged_at": 1563888512000
} |
https://api.github.com/repos/huggingface/transformers/issues/846 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/846/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/846/comments | https://api.github.com/repos/huggingface/transformers/issues/846/events | https://github.com/huggingface/transformers/issues/846 | 470,724,022 | MDU6SXNzdWU0NzA3MjQwMjI= | 846 | XLNET completely wrong and random output | {
"login": "Oxi84",
"id": 25420033,
"node_id": "MDQ6VXNlcjI1NDIwMDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25420033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oxi84",
"html_url": "https://github.com/Oxi84",
"followers_url": "https://api.github.com/users/Oxi84/followers",
"following_url": "https://api.github.com/users/Oxi84/following{/other_user}",
"gists_url": "https://api.github.com/users/Oxi84/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oxi84/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oxi84/subscriptions",
"organizations_url": "https://api.github.com/users/Oxi84/orgs",
"repos_url": "https://api.github.com/users/Oxi84/repos",
"events_url": "https://api.github.com/users/Oxi84/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oxi84/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I solved one of the problems, using another way to load the model described bellow, but still it works way worse than BERT.\r\n \r\n tokenizer = XLNetTokenizer.from_pretrained(\"xlnet-large-cased\")\r\n model = XLNetLMHeadModel.from_pretrained(\"xlnet-large-cased\")\r\n model.eval()\r\n if torch.cuda.is_available(): model.to('cuda') #if we have a GPU \r\n target_id = 5\r\n input_ids = torch.tensor(tokenizer.encode(\"I believe my sister is <mask> because she eats a lot of vegetables .\")).unsqueeze(0) # We will predict the masked token\r\n perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)\r\n perm_mask[:, :, target_id] = 1.0 # Previous tokens don't see last token\r\n target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float) # Shape [1, 1, seq_length] => let's predict one token\r\n target_mapping[0, 0, target_id] = 1.0 # Our first (and only) prediction will be the last token of the sequence (the masked token)\r\n \r\n input_ids_tensor = input_ids.to(\"cuda\")\r\n target_mapping_tensor = target_mapping.to(\"cuda\")\r\n perm_mask_tensor = perm_mask.to(\"cuda\")\r\n \r\n with torch.no_grad():\r\n predictions = model(input_ids_tensor, perm_mask=perm_mask_tensor, target_mapping=target_mapping_tensor)\r\n \r\n predicted_k_indexes = torch.topk(predictions[0][0][0],k=10)\r\n predicted_logits_list = predicted_k_indexes[0] \r\n predicted_indexes_list = predicted_k_indexes[1] \r\n \r\n \r\n print (\"predicted word:\",tokenizer.decode(input_ids[0][target_id].item()))\r\n for i,item in enumerate(predicted_indexes_list):\r\n the_index = predicted_indexes_list[i].item()\r\n print(\"word and logits\",tokenizer.decode(the_index),predicted_logits_list[i].item())\r\n\r\n\r\nBut the output is not so good, i believe Bert is better. 
I hope this is correct code to get masked word inside a sentence.\r\n\r\nI am not sure if this line should be any different:\r\n \r\n perm_mask[:, :, target_id] = 1.0 # Previous tokens don't see last token\r\n\r\n\r\n\r\n\r\noutput:\r\n\r\n sentence: \"I believe my sister is <mask> because she is a blonde .\"\r\n predicted word: <mask>\r\n word and logits is -30.468482971191406\r\n word and logits the -33.0710334777832\r\n word and logits was -34.586158752441406\r\n word and logits because -34.74900436401367\r\n word and logits in -34.762718200683594\r\n word and logits that -34.86489486694336\r\n word and logits but -34.97043991088867\r\n word and logits and -35.04599380493164\r\n word and logits if -35.07524108886719\r\n word and logits not -35.1640510559082\r\n\r\nwhen i do not use perm_mask and call only: \r\n\r\n predictions = model(input_ids_tensor, target_mapping=target_mapping_tensor)\r\n\r\nI get a better, but still quite bad results, but it is at least interesting.\r\n \r\n sentence: \"I believe my sister is <mask> because she is a blonde .\"\r\n predicted word: <mask>\r\n word and logits Colombian 25.14841651916504\r\n word and logits a 25.1247615814209\r\n word and logits the 25.11375617980957\r\n word and logits Venezuelan 25.041296005249023\r\n word and logits I 24.912843704223633\r\n word and logits Beyonce 24.855722427368164\r\n word and logits Jessica 24.557470321655273\r\n word and logits in 24.518535614013672\r\n word and logits paranoid 24.407917022705078\r\n word and logits not 24.374282836914062\r\n\r\n\r\nWith bert base you get much better output, that makes much more sense [mainly adjectives]:\r\n\r\n [('beautiful', 7.622010231018066), ('attractive', 6.6926116943359375), ('special', 6.309513568878174), ('crazy', 6.045520782470703), ('pretty', 5.968326091766357), ('lucky', 5.951317310333252), ('famous', 5.942074775695801), ('different', 5.920231819152832), ('gorgeous', 5.897611141204834), ('blonde', 5.834926605224609)]\r\n \r\n \r\n\r\n",
"I also did comparasion with Bert, so far just one example, but I found that Bert is much much better. I am not sure why is that ... but there must be a reason. \r\n\r\n",
"Agreed! There is a chance we are not using the permutation mask and target mapping correctly, but I am suspicious as the documentation's example is not working very well either.",
"The main reason you get bad performance is that XLNet is not good on short inputs (comes from the way it is pretrained, always having a long memory and only guessing a few words in the sequence).\r\n\r\nThe `run_generation` example [here](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_generation.py) will show you how to get better performances by adding a random text as initiator.\r\n\r\nAman Rusia also wrote a blog post about that [here](https://medium.com/@amanrusia/xlnet-speaks-comparison-to-gpt-2-ea1a4e9ba39e). We are using his solution in the `run_generation` example.\r\n ",
"Thanks, I am going to try the generation method and post the results here. Hope prediction is going to improve, but i guess that ading a lot of padding is make to slow the execution down a lot.",
"@Oxi84 Any luck with your results ? I still get pretty random results even while using this trick",
"Thank you suggestions. \r\n\r\nAfter adding padding text, result is much more reasonable for predicting both middle masked token and text generation.\r\n\r\nSome texting sample:\r\nInput\r\n`text = 'The quick brown fox jumps <mask> the lazy dog.'`\r\nOutput\r\n```\r\nThe quick brown fox jumps above the lazy dog.\r\nThe quick brown fox jumps across the lazy dog.\r\n```\r\n\r\nInput\r\n`text = 'The <mask> brown fox jumps over the lazy dog.'`\r\nOutput\r\n```\r\nThe rapid brown fox jumps over the lazy dog.\r\nThe slow brown fox jumps over the lazy dog.\r\n```\r\n",
"hey guys, \r\n\r\nQ1)\r\ncan someone give some more insight what @thomwolf explaining about?\r\n'''\r\nhttps://github.com/huggingface/transformers/issues/846\r\nThe main reason you get bad performance is that XLNet is not good on short inputs (comes from the way it is pretrained, always having a long memory and only guessing a few words in the sequence).\r\n\r\nThe run_generation example here will show you how to get better performances by adding a random text as initiator.\r\nAman Rusia also wrote a blog post about that here. We are using his solution in the run_generation example.\r\n'''\r\n I can't understand the difference the way both Bert and XLnetLM works for LMhead task. \r\nAren't both model having disadvantages if they have short sentence? \r\n\r\nIt seems he said **XLnet has huge disadvantage** on short input sentence \r\nwhile Bert does not(or has less disadvantage). Any detail explanation could be useful ! \r\n\r\nQ2)\r\nAlso, I can't get the point of adding extra padding or adding random padding things to improve XLnetLMHead model. Any snippet or explanation could be appreciated too...(saw the link but could not fully understood). I experimented by just adding extra strings of line:'I believe my sister is <mask> because she is a blonde ' + '<eod> </s> <eos>' and it gives much better result than not having <eod> </s> <eos> at the end....\r\n\r\nQ3)\r\nhttps://github.com/huggingface/transformers/issues/846#issuecomment-513514039\r\nLastly, why do we have better result when we don't use perm_mask ? above link response shows that \r\nnot having perm_mask option does give at least better result...But isn't perm_mask supposed to help to get better prediction and what author of paper used for SOTA ?\r\n\r\nisn't perm_mask allow model to not seeing the next <mask> tokens in the given input while can see the previous <mask> tokens? According to the paper and the original code, I could see that if permute order is 3->4->1->2, mask=1,3, then model cannot see masked<1> when it tried to predict masked<3> but the reverse is possible.\r\n\r\nMany thanks in advance ! ",
"I think these questions are not directly related to this repo. Maybe you should check out the [paper](https://arxiv.org/abs/1906.08237) or ask on [quora](https://www.quora.com/) or on [researchgate](https://explore.researchgate.net/display/support/Asking+questions)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,581 | 1,581 | NONE | null | I followed the example here: https://huggingface.co/pytorch-transformers/model_doc/xlnet.html#pytorch_transformers.XLNetModel
I found that I get completely wrong output: the predicted words for the masked sentences are completely irrelevant and they change on each run. I guess there is some bug; could you please take a look at this:
**code:**
##############################
import torch
from pytorch_transformers import XLNetConfig, XLNetTokenizer, XLNetLMHeadModel

config = XLNetConfig.from_pretrained('xlnet-large-cased')
tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel(config)

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is very <mask> ")).unsqueeze(0)  # We will predict the masked token
print("input_ids", input_ids)
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, -1] = 1.0  # Previous tokens don't see the last token
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)  # Shape [1, 1, seq_length] => let's predict one token
target_mapping[0, 0, -1] = 1.0
predictions = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)

predicted_k_indexes = torch.topk(predictions[0], k=10)
predicted_logits_list = predicted_k_indexes[0]
predicted_indexes_list = predicted_k_indexes[1]
print("predicted <masked> words:")
for i, item in enumerate(predicted_indexes_list[0][0]):
    the_index = item.item()
    print("word and logits", tokenizer.decode(the_index), predicted_logits_list[0][0][i].item())
###########################
output (one example - it changes each run):
#################################
input_ids tensor([[ 17, 11368, 19, 94, 2288, 27, 172, 6]])
predicted <masked> words:
word and logits **emptiness** 2.7753820419311523
word and logits **Oklahoma** 2.61531400680542
word and logits **stars** 2.56619930267334
word and logits **bite** 2.5252184867858887
word and logits **Conte** 2.4745044708251953
word and logits **enforced** 2.4537196159362793
word and logits **antibody** 2.4416041374206543
word and logits **Got** 2.332545280456543
word and logits **Chev** 2.31380033493042
word and logits **MAG** 2.3047127723693848
####################################
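A minimal sketch of the padding-text trick discussed in the comments above (the padding passage is arbitrary; run_generation.py uses a longer one). Two things differ from the snippet above: the weights are loaded with from_pretrained — XLNetLMHeadModel(config) alone gives a randomly initialized model, which is also why the predictions change on every run — and the mask position is looked up instead of hard-coded:
##############################
import torch
from pytorch_transformers import XLNetTokenizer, XLNetLMHeadModel

# Any long-ish passage works as padding; this one is purely illustrative.
PADDING_TEXT = ("In 1991, the remains of Russian Tsar Nicholas II and his family "
                "(except for Alexei and Maria) are discovered. <eod>")

tokenizer = XLNetTokenizer.from_pretrained('xlnet-large-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-large-cased')
model.eval()

text = "The quick brown fox jumps <mask> the lazy dog."
input_ids = torch.tensor(tokenizer.encode(PADDING_TEXT + " " + text)).unsqueeze(0)
mask_id = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)
masked_index = input_ids[0].tolist().index(mask_id)

perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float)
perm_mask[:, :, masked_index] = 1.0  # no token may attend to the masked position
target_mapping = torch.zeros((1, 1, input_ids.shape[1]), dtype=torch.float)
target_mapping[0, 0, masked_index] = 1.0  # predict only the masked position

with torch.no_grad():
    logits = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)[0]
print(tokenizer.decode(torch.argmax(logits[0, 0]).item()))
##############################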
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/846/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/846/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/845 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/845/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/845/comments | https://api.github.com/repos/huggingface/transformers/issues/845/events | https://github.com/huggingface/transformers/pull/845 | 470,661,645 | MDExOlB1bGxSZXF1ZXN0Mjk5NTU1MDMy | 845 | fixed version issues in run_openai_gpt | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/845?src=pr&el=h1) Report\n> Merging [#845](https://codecov.io/gh/huggingface/pytorch-transformers/pull/845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/a615499076a67dceb8907ecdf8eadaff04bb8d6a?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/845?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #845 +/- ##\n======================================\n Coverage 78.9% 78.9% \n======================================\n Files 34 34 \n Lines 6192 6192 \n======================================\n Hits 4886 4886 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/845?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/845?src=pr&el=footer). Last update [a615499...f63ff53](https://codecov.io/gh/huggingface/pytorch-transformers/pull/845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"LGTM, thanks @rabeehk"
] | 1,563 | 1,563 | 1,563 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/845/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/845",
"html_url": "https://github.com/huggingface/transformers/pull/845",
"diff_url": "https://github.com/huggingface/transformers/pull/845.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/845.patch",
"merged_at": 1563888572000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/844 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/844/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/844/comments | https://api.github.com/repos/huggingface/transformers/issues/844/events | https://github.com/huggingface/transformers/pull/844 | 470,652,537 | MDExOlB1bGxSZXF1ZXN0Mjk5NTQ4OTA3 | 844 | Fixed typo | {
"login": "rish-16",
"id": 20137995,
"node_id": "MDQ6VXNlcjIwMTM3OTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/20137995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rish-16",
"html_url": "https://github.com/rish-16",
"followers_url": "https://api.github.com/users/rish-16/followers",
"following_url": "https://api.github.com/users/rish-16/following{/other_user}",
"gists_url": "https://api.github.com/users/rish-16/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rish-16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rish-16/subscriptions",
"organizations_url": "https://api.github.com/users/rish-16/orgs",
"repos_url": "https://api.github.com/users/rish-16/repos",
"events_url": "https://api.github.com/users/rish-16/events{/privacy}",
"received_events_url": "https://api.github.com/users/rish-16/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/844?src=pr&el=h1) Report\n> Merging [#844](https://codecov.io/gh/huggingface/pytorch-transformers/pull/844?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/a615499076a67dceb8907ecdf8eadaff04bb8d6a?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/844?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #844 +/- ##\n======================================\n Coverage 78.9% 78.9% \n======================================\n Files 34 34 \n Lines 6192 6192 \n======================================\n Hits 4886 4886 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/844?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/844?src=pr&el=footer). Last update [a615499...6b3d9ad](https://codecov.io/gh/huggingface/pytorch-transformers/pull/844?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | Fixed typo in README.md | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/844/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/844",
"html_url": "https://github.com/huggingface/transformers/pull/844",
"diff_url": "https://github.com/huggingface/transformers/pull/844.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/844.patch",
"merged_at": 1563721537000
} |
https://api.github.com/repos/huggingface/transformers/issues/843 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/843/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/843/comments | https://api.github.com/repos/huggingface/transformers/issues/843/events | https://github.com/huggingface/transformers/issues/843 | 470,631,241 | MDU6SXNzdWU0NzA2MzEyNDE= | 843 | Issue | {
"login": "bmanishreddy",
"id": 15935444,
"node_id": "MDQ6VXNlcjE1OTM1NDQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15935444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bmanishreddy",
"html_url": "https://github.com/bmanishreddy",
"followers_url": "https://api.github.com/users/bmanishreddy/followers",
"following_url": "https://api.github.com/users/bmanishreddy/following{/other_user}",
"gists_url": "https://api.github.com/users/bmanishreddy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bmanishreddy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bmanishreddy/subscriptions",
"organizations_url": "https://api.github.com/users/bmanishreddy/orgs",
"repos_url": "https://api.github.com/users/bmanishreddy/repos",
"events_url": "https://api.github.com/users/bmanishreddy/events{/privacy}",
"received_events_url": "https://api.github.com/users/bmanishreddy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"@bmanishreddy Your issue contains no text, should it be closed?",
"Yeah .. my bad it can be closed ",
"No worries, please close it so it doesn't create clutter. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,568 | 1,563 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/843/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/842 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/842/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/842/comments | https://api.github.com/repos/huggingface/transformers/issues/842/events | https://github.com/huggingface/transformers/issues/842 | 470,620,212 | MDU6SXNzdWU0NzA2MjAyMTI= | 842 | 16 GB dataset for finetuning fail on reduce_memory | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | Hi, I am using a 16GB dataset to finetune a BERT model. When I do not use reduce_memory, which loads the dataset into memory first, it uses all of my 120GB of memory and then crashes with an out-of-memory error. Now I am using reduce_memory mode; as more lines are loaded, memory use is still increasing, but it is much slower in the reduce_memory setting. So I am wondering whether it will crash at the end. Does anyone have an answer for that?
Sorry for bothering you, but it is too slow to run with the reduce_memory setting and I have no idea whether it will crash or go well. I am afraid that it would keep loading for days and crash in the end. Thanks in advance for any suggestions | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/842/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/841 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/841/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/841/comments | https://api.github.com/repos/huggingface/transformers/issues/841/events | https://github.com/huggingface/transformers/issues/841 | 470,613,390 | MDU6SXNzdWU0NzA2MTMzOTA= | 841 | Detaching Variables | {
"login": "rishibommasani",
"id": 47439426,
"node_id": "MDQ6VXNlcjQ3NDM5NDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/47439426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rishibommasani",
"html_url": "https://github.com/rishibommasani",
"followers_url": "https://api.github.com/users/rishibommasani/followers",
"following_url": "https://api.github.com/users/rishibommasani/following{/other_user}",
"gists_url": "https://api.github.com/users/rishibommasani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rishibommasani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rishibommasani/subscriptions",
"organizations_url": "https://api.github.com/users/rishibommasani/orgs",
"repos_url": "https://api.github.com/users/rishibommasani/repos",
"events_url": "https://api.github.com/users/rishibommasani/events{/privacy}",
"received_events_url": "https://api.github.com/users/rishibommasani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Maybe you are not using `with torch.grad()` when calling the model for inference?\r\n\r\nI've added that in the readme example (it used to be mentioned there indeed).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | Something I noticed in transitioning from Pretrained-BERT to Transformers is that for the purposes of using BERT as a feature extractor/probing the pretrained representations, I need to detach variables whereas I previously didn't. I am not sure if this is noted somewhere (I didn't see it in the section in the docs about transitioning) but found it be highly relevant to prevent unnecessary memory usage. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/841/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/840 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/840/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/840/comments | https://api.github.com/repos/huggingface/transformers/issues/840/events | https://github.com/huggingface/transformers/issues/840 | 470,552,761 | MDU6SXNzdWU0NzA1NTI3NjE= | 840 | AttributeError: 'BertModel' object has no attribute '_load_from_state_dict' | {
"login": "Shandilya21",
"id": 28632968,
"node_id": "MDQ6VXNlcjI4NjMyOTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/28632968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shandilya21",
"html_url": "https://github.com/Shandilya21",
"followers_url": "https://api.github.com/users/Shandilya21/followers",
"following_url": "https://api.github.com/users/Shandilya21/following{/other_user}",
"gists_url": "https://api.github.com/users/Shandilya21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shandilya21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shandilya21/subscriptions",
"organizations_url": "https://api.github.com/users/Shandilya21/orgs",
"repos_url": "https://api.github.com/users/Shandilya21/repos",
"events_url": "https://api.github.com/users/Shandilya21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shandilya21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You are probably not using the new release of PyTorch-transformers.\r\nTry `pip install pytorch-transformers --upgrade`.\r\nAnd read the full [readme](https://github.com/huggingface/pytorch-transformers), there are several breaking changes.",
"@thomwolf Thanks for your update, it works for me !!"
] | 1,563 | 1,564 | 1,564 | NONE | null | Hi,
I am getting this error even when I try the model straight from the repository examples for the test cases. Can anyone help me understand this issue?
Thanks for help in advance
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/840/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/839 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/839/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/839/comments | https://api.github.com/repos/huggingface/transformers/issues/839/events | https://github.com/huggingface/transformers/issues/839 | 470,515,113 | MDU6SXNzdWU0NzA1MTUxMTM= | 839 | How to restore a training? | {
"login": "bingyupiaoyao",
"id": 1625353,
"node_id": "MDQ6VXNlcjE2MjUzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1625353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bingyupiaoyao",
"html_url": "https://github.com/bingyupiaoyao",
"followers_url": "https://api.github.com/users/bingyupiaoyao/followers",
"following_url": "https://api.github.com/users/bingyupiaoyao/following{/other_user}",
"gists_url": "https://api.github.com/users/bingyupiaoyao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bingyupiaoyao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bingyupiaoyao/subscriptions",
"organizations_url": "https://api.github.com/users/bingyupiaoyao/orgs",
"repos_url": "https://api.github.com/users/bingyupiaoyao/repos",
"events_url": "https://api.github.com/users/bingyupiaoyao/events{/privacy}",
"received_events_url": "https://api.github.com/users/bingyupiaoyao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"You will have to modify the provided example to save/reload the model, optimizer and scheduler states.\r\n\r\nI've updated the scheduler classes in #872 so that we can save/reload the schedulers with the standard PyTorch serialization practice:\r\n```\r\ntorch.save(schedule.state_dict(), FILE_NAME) # save\r\nschedule.load_state_dict(torch.load(FILE_NAME)) # reload\r\n```",
"Thanks for the response~",
"@thomwolf - I happen to need save/resume training for `run_glue.py`; I'm willing to implement this and make a PR if I can get feedback about the overall approach.\r\n\r\nIt looks like I would want to save:\r\n\r\n- `global_step`\r\n- `step` (from this one could presumably skip to the right place in the `epoch_iterator` of `train`)\r\n- optimizer, model, scheduler (these look like they should be trivially amenable to `torch.save`)\r\n- random seed, torch's random seed, numpy's random seed\r\n\r\nBased on the current set up, I'm not actually sure what the right way to preserve the `train_sampler` and `train_dataloader` states are (without also serializing them too, which seems like a waste, but is by far the easiest way to handle it if they are so amenable), since their states can't be recreated with the above information. In my case, punting on these is an acceptable option, as well as being wasteful and saving to disk.\r\n\r\nThoughts?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | For example, I use "run_glue.py" to train a model and stop at Epoch 30, and how to restore the training process from Epoch 30? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/839/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/838 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/838/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/838/comments | https://api.github.com/repos/huggingface/transformers/issues/838/events | https://github.com/huggingface/transformers/issues/838 | 470,372,340 | MDU6SXNzdWU0NzAzNzIzNDA= | 838 | Standardized head for Question Answering | {
"login": "andrelmfarias",
"id": 43521764,
"node_id": "MDQ6VXNlcjQzNTIxNzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/43521764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andrelmfarias",
"html_url": "https://github.com/andrelmfarias",
"followers_url": "https://api.github.com/users/andrelmfarias/followers",
"following_url": "https://api.github.com/users/andrelmfarias/following{/other_user}",
"gists_url": "https://api.github.com/users/andrelmfarias/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andrelmfarias/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andrelmfarias/subscriptions",
"organizations_url": "https://api.github.com/users/andrelmfarias/orgs",
"repos_url": "https://api.github.com/users/andrelmfarias/repos",
"events_url": "https://api.github.com/users/andrelmfarias/events{/privacy}",
"received_events_url": "https://api.github.com/users/andrelmfarias/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Well, the output of the `BertForQuestionAnswering` model hasn't changed, the returned `start_score` and `end_score` are still scores before the softmax.\r\n\r\nCan you point more specifically to the changes you are referring to?",
"I'm sorry. Indeed the implementation of `BertForQuestionAnswering` remains the same.\r\n\r\nActually, I was referring to the implementation of `XLNetForQuestionAnswering`, which is pretty different from `BertForQuestionAnswering` (I thought you had standardised the implementation for all QA models and I hadn't checked the `BERT` implementation before posting the issue here)\r\n\r\nPlease correct me if I am wrong, but I do not see the `forward()` method outputting the Start and End's logits here (only the softmax probas - `start_log_probs`and `end_log_probs`):\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/blob/268c6cc160ba046d6a91747c5f281f82bd88a4d8/pytorch_transformers/modeling_xlnet.py#L1226-L1290\r\n\r\nSame with XLM:\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/blob/fec76a481d1ecfbf068d87735dd44ffc26158f6e/pytorch_transformers/modeling_xlm.py#L899-L921\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/blob/fec76a481d1ecfbf068d87735dd44ffc26158f6e/pytorch_transformers/modeling_utils.py#L649",
"Oh yes the official XLNet implementation uses a beam search for Question Answering so the output is more complex.\r\n\r\nI'll see if I can come up with a standardized way to use both.",
"Thanks, I get it. Maybe caching the `start_logits` during the search and returning the logits of chosen End and Start positions would be a solution.\r\n\r\nBy the way, FYI the examples in the documentation for `XLNetForQuestionAnswering` and `XLMForQuestionAnswering` are incorrect, both show `start_scores` and `end_scores` in the outputs and the example in `XLNetForQuestionAnswering` uses `XLM` instead of `XLNet`\r\n\r\n[`XLNetForQuestionAnswering`](https://huggingface.co/pytorch-transformers/model_doc/xlnet.html#xlnetforquestionanswering):\r\n\r\n\r\n\r\n[`XLMForQuestionAnswering`](https://huggingface.co/pytorch-transformers/model_doc/xlm.html#xlmforquestionanswering):\r\n\r\n\r\n",
"This is silly/nitpicky - but can we change the title of this issue? Got very worried I had been using the BERT heads incorrectly until I read into the weeds of the comments..",
"Hello, any news on this standardized head? @thomwolf?\r\nWhat do you think about my proposition about caching the `start_logits` during the Beam search and outputting both `start_logits` and `end_logits` as it is done with `BertForQuestionAnswering`?\r\nOr do you have any other ideas?\r\n\r\nI can try to work on that and do a PR if you need",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,574 | 1,574 | NONE | null | Hi,
With some colleagues, we developed a QA system that uses a whole QA pipeline (Retriever, Reader, Ranker). We use your older version of `BertForQuestionAnswering` as Reader and now we wish to update it to be compatible with your new release and to add other models as well (XLNet, XLM).
Our system uses the logits outputted by the model in order to rank the answers between different paragraphs (using probabilities outputted by the softmax layer is an incorrect approach for such systems).
However, we understand that in your new API, the `forward` method of QA models now can only return probabilities. We suggest you add the option to output the raw logits as well.
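For comparison, a minimal sketch with `BertForQuestionAnswering`, whose forward still returns the raw pre-softmax logits needed for cross-paragraph ranking (the question/paragraph strings below are purely illustrative):
```python
import torch
from pytorch_transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
model.eval()

q_tokens = tokenizer.tokenize("Who wrote Hamlet?")
p_tokens = tokenizer.tokenize("Hamlet is a tragedy written by William Shakespeare.")
tokens = ['[CLS]'] + q_tokens + ['[SEP]'] + p_tokens + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
segment_ids = torch.tensor([[0] * (len(q_tokens) + 2) + [1] * (len(p_tokens) + 1)])

with torch.no_grad():
    start_logits, end_logits = model(input_ids, token_type_ids=segment_ids)
# un-normalized span score, comparable across paragraphs
span_score = start_logits.max().item() + end_logits.max().item()
```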
We could of course overcome this by using the functions `self.start_logits` and `self.end_logits`, but we think that this feature can be useful for other users as well | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/838/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/837 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/837/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/837/comments | https://api.github.com/repos/huggingface/transformers/issues/837/events | https://github.com/huggingface/transformers/issues/837 | 470,348,073 | MDU6SXNzdWU0NzAzNDgwNzM= | 837 | run_openai_gpt.py issues with Adamw | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, so this should be fixed by (your own :) PR #845 Thanks again!",
"thanks :)\nBest regards,\nRabeeh\n\nOn Tue, Jul 23, 2019 at 3:30 PM Thomas Wolf <[email protected]>\nwrote:\n\n> Yes, so this should be fixed by (your own :) PR #845\n> <https://github.com/huggingface/pytorch-transformers/pull/845> Thanks\n> again!\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-transformers/issues/837?email_source=notifications&email_token=ABP4ZCAGTX6DNAYMBCHNORTQA4BYFA5CNFSM4IFGMGK2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD2TDV6Y#issuecomment-514210555>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGDZR3TG2SS2LXIBTDQA4BYFANCNFSM4IFGMGKQ>\n> .\n>\n",
"Thanks @rabeehk! #845 with flat learning rate gives a good result of 87.2% on ROCStories. Here are the args (the defaults in the file work well).\r\n\r\n```\r\npython run_openai_gpt.py \\\r\n --model_name openai-gpt \\\r\n --do_train \\\r\n --do_eval \\\r\n --train_dataset \"./ROCStories/cloze_test_val__spring2016 - cloze_test_ALL_val.csv\" \\\r\n --eval_dataset \"./ROCStories/cloze_test_test__spring2016 - cloze_test_ALL_test.csv\" \\\r\n --train_batch_size 8 \\\r\n --eval_batch_size 16 \\\r\n --num_train_epochs 3\r\n```",
"@prrao87 great :) ",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | Hi
AdamW in this script is passed parameters that no longer exist, ...
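A sketch of the replacement wiring under the new API, assuming the variables already defined in the script (`optimizer_grouped_parameters`, `num_train_optimization_steps`); this follows the migration guide and is in the spirit of the fix merged in #845, not its exact diff:
```python
from pytorch_transformers import AdamW, WarmupLinearSchedule

optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate)
scheduler = WarmupLinearSchedule(optimizer,
                                 warmup_steps=int(args.warmup_proportion * num_train_optimization_steps),
                                 t_total=num_train_optimization_steps)
# in the training loop:
loss.backward()
optimizer.step()
scheduler.step()  # update the learning rate schedule
optimizer.zero_grad()
```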
Thanks for updates in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/837/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/836 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/836/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/836/comments | https://api.github.com/repos/huggingface/transformers/issues/836/events | https://github.com/huggingface/transformers/issues/836 | 470,344,188 | MDU6SXNzdWU0NzAzNDQxODg= | 836 | BertForNextSentencePrediction labels | {
"login": "ArthurCamara",
"id": 709027,
"node_id": "MDQ6VXNlcjcwOTAyNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/709027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurCamara",
"html_url": "https://github.com/ArthurCamara",
"followers_url": "https://api.github.com/users/ArthurCamara/followers",
"following_url": "https://api.github.com/users/ArthurCamara/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurCamara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurCamara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurCamara/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurCamara/orgs",
"repos_url": "https://api.github.com/users/ArthurCamara/repos",
"events_url": "https://api.github.com/users/ArthurCamara/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurCamara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should supply your own labels when using the `BertForSequenceClassification` class (`labels` input to the forward method). You can choose the labels you like.\r\n\r\nThe `BertForSequenceClassification` class is **not** related to the Next Sentence Classification task used during Bert pretraining. You can use the [BertForNextSentencePrediction](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertfornextsentenceprediction) class if you want to do next sentence prediction (classification).",
"Yes, I'm sorry I meant the [BertForNextSentencePrediction](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertfornextsentenceprediction) class.",
"Yes, in this case, you should follow the docstring: `0 indicates sequence B is a continuation of sequence A, 1 indicates sequence B is a random sequence.`"
] | 1,563 | 1,565 | 1,565 | NONE | null | Hi everyone!
I was reading through the documentation, and, according to https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertforsequenceclassification, it expects that `next_sentence_label` is `1` if B is **not** a next sequence for A, and `0` if B **is** a next sequence for A.
That's somewhat counterintuitive, since most of the datasets (I believe) will assume this problem to be a binary classification.
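For reference, a minimal sketch of the documented convention (label 0 means B really does follow A; the sentences are illustrative):
```python
import torch
from pytorch_transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
model.eval()

tokens_a = tokenizer.tokenize("How old are you?")
tokens_b = tokenizer.tokenize("I am 25 years old.")
tokens = ['[CLS]'] + tokens_a + ['[SEP]'] + tokens_b + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
token_type_ids = torch.tensor([[0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)])
label = torch.tensor([0])  # 0 = B follows A, 1 = B is a random sentence

loss = model(input_ids, token_type_ids=token_type_ids, next_sentence_label=label)[0]
```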
Is my assumption correct? Should I flip my dataset before fine-tuning the model? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/836/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/835 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/835/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/835/comments | https://api.github.com/repos/huggingface/transformers/issues/835/events | https://github.com/huggingface/transformers/issues/835 | 470,177,370 | MDU6SXNzdWU0NzAxNzczNzA= | 835 | How to use the pretrain script with only token classification task ? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, you probably need to adapt the example script to your exact task."
] | 1,563 | 1,565 | 1,565 | NONE | null | Hi, I need to train on my own Twitter corpus, but most of the tweets contain only one sentence. Therefore I cannot use the next-sentence prediction task to train the model. Will the script automatically use only the token classification task when there is no next sentence? Thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/835/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/834 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/834/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/834/comments | https://api.github.com/repos/huggingface/transformers/issues/834/events | https://github.com/huggingface/transformers/issues/834 | 470,154,199 | MDU6SXNzdWU0NzAxNTQxOTk= | 834 | git pull pytorch-transformers?? | {
"login": "Linohong",
"id": 19821168,
"node_id": "MDQ6VXNlcjE5ODIxMTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/19821168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Linohong",
"html_url": "https://github.com/Linohong",
"followers_url": "https://api.github.com/users/Linohong/followers",
"following_url": "https://api.github.com/users/Linohong/following{/other_user}",
"gists_url": "https://api.github.com/users/Linohong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Linohong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Linohong/subscriptions",
"organizations_url": "https://api.github.com/users/Linohong/orgs",
"repos_url": "https://api.github.com/users/Linohong/repos",
"events_url": "https://api.github.com/users/Linohong/events{/privacy}",
"received_events_url": "https://api.github.com/users/Linohong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I suspect this is more of a general `git` question. We'll close this unless there is something specific to the lib.\r\n\r\nGood luck!",
"@julien-c ,\r\nyeah, I had some of code mismatch issues and that falls into general github matters \r\nand I got it done :) \r\n\r\nThank you :)"
] | 1,563 | 1,564 | 1,563 | NONE | null | Hello,
I git cloned 'pytorch-pretrained-bert' before the new release, pytorch-transformers,
and I added many comments and new example files to the cloned project.
However, when I found that a new version had been released, git pulling didn't work
because of conflicting-file issues.
Is it because the new release, 'pytorch-transformers', conflicts with the older version, which
has a totally different name?
| {
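A generic git recipe for this situation (plain git, nothing specific to this library): stash the local edits, pull, then re-apply them:
```
git stash
git pull origin master
git stash pop   # re-apply local edits; resolve any conflicts it reports
```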
"url": "https://api.github.com/repos/huggingface/transformers/issues/834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/834/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/833 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/833/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/833/comments | https://api.github.com/repos/huggingface/transformers/issues/833/events | https://github.com/huggingface/transformers/issues/833 | 470,152,469 | MDU6SXNzdWU0NzAxNTI0Njk= | 833 | missing 1 required positional argument: 'num_classes' in 'from_pretrained' | {
"login": "desireevl",
"id": 17139032,
"node_id": "MDQ6VXNlcjE3MTM5MDMy",
"avatar_url": "https://avatars.githubusercontent.com/u/17139032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/desireevl",
"html_url": "https://github.com/desireevl",
"followers_url": "https://api.github.com/users/desireevl/followers",
"following_url": "https://api.github.com/users/desireevl/following{/other_user}",
"gists_url": "https://api.github.com/users/desireevl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/desireevl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/desireevl/subscriptions",
"organizations_url": "https://api.github.com/users/desireevl/orgs",
"repos_url": "https://api.github.com/users/desireevl/repos",
"events_url": "https://api.github.com/users/desireevl/events{/privacy}",
"received_events_url": "https://api.github.com/users/desireevl/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, @xanlsh is working on an update to reduce the effect of this breaking change in #866.\r\n\r\nYou should be able to keep your script unchanged.",
"@desireevl Since #866 has been merged, your code should work now",
"Thanks!"
] | 1,563 | 1,565 | 1,565 | CONTRIBUTOR | null | I am running a [multiclass BERT classification](https://github.com/desireevl/Bert-Multi-Label-Text-Classification/blob/master/train_bert_multi_label.py) model and am receiving the following error:
```
Traceback (most recent call last):
  File "train_bert_multi_label.py", line 144, in <module>
    main()
  File "train_bert_multi_label.py", line 78, in main
    num_classes = len(id2label))
  File "/opt/miniconda3/envs/tempenv/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py", line 403, in from_pretrained
    model = cls(config)
TypeError: __init__() missing 1 required positional argument: 'num_classes'
```
The script worked fine when using `pytorch-pretrained-bert` and am guessing there is an issue from the new release.
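As far as I understand the change merged in #866, extra arguments to `from_pretrained` are forwarded to the model constructor again, so a call of this shape should work (the class name and `id2label` come from the linked script, not from this library):
```python
model = BertForMultiLabel.from_pretrained(args.bert_model,
                                          num_classes=len(id2label))
```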
Thanks for the great tools :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/833/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/832 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/832/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/832/comments | https://api.github.com/repos/huggingface/transformers/issues/832/events | https://github.com/huggingface/transformers/issues/832 | 470,106,701 | MDU6SXNzdWU0NzAxMDY3MDE= | 832 | Training with wrong GPU count | {
"login": "hairzooc",
"id": 13031514,
"node_id": "MDQ6VXNlcjEzMDMxNTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/13031514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hairzooc",
"html_url": "https://github.com/hairzooc",
"followers_url": "https://api.github.com/users/hairzooc/followers",
"following_url": "https://api.github.com/users/hairzooc/following{/other_user}",
"gists_url": "https://api.github.com/users/hairzooc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hairzooc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hairzooc/subscriptions",
"organizations_url": "https://api.github.com/users/hairzooc/orgs",
"repos_url": "https://api.github.com/users/hairzooc/repos",
"events_url": "https://api.github.com/users/hairzooc/events{/privacy}",
"received_events_url": "https://api.github.com/users/hairzooc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, this is expected behavior. Each script in distributed training has ownership over one GPU only.\r\n\r\nYou can read this blog post for details on parallel and distributed training: https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255"
] | 1,563 | 1,565 | 1,565 | NONE | null | Hi,
Thank you for your repo :)
I'm fine-tuning with 4 GPUs (run_squad, BERT model)
and I found that the GPU count is wrong when doing distributed training.
I get a GPU count of 1, which is caused by the source code below.
Is there any reason to set n_gpu = 1 when doing distributed training?
if args.local_rank == -1 or args.no_cuda:
    device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu")
    n_gpu = torch.cuda.device_count()
else:
    torch.cuda.set_device(args.local_rank)
    device = torch.device("cuda", args.local_rank)
    n_gpu = 1  # <= this!!!
    # Initializes the distributed backend which will take care of synchronizing nodes/GPUs
    torch.distributed.init_process_group(backend='nccl')
| {
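For context: in this mode each process is meant to drive exactly one GPU, so all four GPUs are used by launching one process per GPU with the standard launcher (other script arguments elided):
```
python -m torch.distributed.launch --nproc_per_node=4 run_squad.py ...
```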
"url": "https://api.github.com/repos/huggingface/transformers/issues/832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/832/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/831 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/831/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/831/comments | https://api.github.com/repos/huggingface/transformers/issues/831/events | https://github.com/huggingface/transformers/issues/831 | 470,075,427 | MDU6SXNzdWU0NzAwNzU0Mjc= | 831 | finetune_on_pregenerate Loss.backwards() throw an error | {
"login": "sophiapeng0426",
"id": 11810773,
"node_id": "MDQ6VXNlcjExODEwNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/11810773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sophiapeng0426",
"html_url": "https://github.com/sophiapeng0426",
"followers_url": "https://api.github.com/users/sophiapeng0426/followers",
"following_url": "https://api.github.com/users/sophiapeng0426/following{/other_user}",
"gists_url": "https://api.github.com/users/sophiapeng0426/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sophiapeng0426/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sophiapeng0426/subscriptions",
"organizations_url": "https://api.github.com/users/sophiapeng0426/orgs",
"repos_url": "https://api.github.com/users/sophiapeng0426/repos",
"events_url": "https://api.github.com/users/sophiapeng0426/events{/privacy}",
"received_events_url": "https://api.github.com/users/sophiapeng0426/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes this example should have been updated now with #797."
] | 1,563 | 1,563 | 1,563 | NONE | null | In finetune_on_pregenerated.py, the model now returns a tuple, so calling loss.backward() on the returned value is not going to work.
Original:
loss = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
Update:
loss, _ , _ = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
Is this fix correct? | {
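The unpacking above works when the model returns exactly three elements; a variant that does not depend on the tuple length (it can grow when hidden states/attentions are enabled) is to index the first element, which is the loss whenever labels are passed:
```python
outputs = model(input_ids, segment_ids, input_mask, lm_label_ids, is_next)
loss = outputs[0]
loss.backward()
```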
"url": "https://api.github.com/repos/huggingface/transformers/issues/831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/831/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/830 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/830/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/830/comments | https://api.github.com/repos/huggingface/transformers/issues/830/events | https://github.com/huggingface/transformers/issues/830 | 470,074,075 | MDU6SXNzdWU0NzAwNzQwNzU= | 830 | AdamW does not have args warmup and t_total | {
"login": "sophiapeng0426",
"id": 11810773,
"node_id": "MDQ6VXNlcjExODEwNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/11810773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sophiapeng0426",
"html_url": "https://github.com/sophiapeng0426",
"followers_url": "https://api.github.com/users/sophiapeng0426/followers",
"following_url": "https://api.github.com/users/sophiapeng0426/following{/other_user}",
"gists_url": "https://api.github.com/users/sophiapeng0426/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sophiapeng0426/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sophiapeng0426/subscriptions",
"organizations_url": "https://api.github.com/users/sophiapeng0426/orgs",
"repos_url": "https://api.github.com/users/sophiapeng0426/repos",
"events_url": "https://api.github.com/users/sophiapeng0426/events{/privacy}",
"received_events_url": "https://api.github.com/users/sophiapeng0426/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes this example should have been updated now by #797.\r\n\r\nRegarding `AdamW` and the schedule, details, and examples for the conversion are indicated in the migration section of the readme: https://github.com/huggingface/pytorch-transformers#Migrating-from-pytorch-pretrained-bert-to-pytorch-transformers",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | In finetune_on_pregenerated.py, the code below throws an error because AdamW no longer accepts those two arguments. This can be fixed by commenting out those two lines, but I am not sure whether that means warmup will no longer be effective after that?
optimizer = AdamW(optimizer_grouped_parameters,
lr=args.learning_rate)
#warmup=args.warmup_proportion,
# t_total=num_train_optimization_steps)
| {
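Commenting the arguments out does remove the warmup entirely; under the new API the schedule lives in a separate object, as described in the migration guide. A sketch, assuming the variables already defined in the script:
```python
from pytorch_transformers import WarmupLinearSchedule

scheduler = WarmupLinearSchedule(optimizer,
                                 warmup_steps=int(args.warmup_proportion * num_train_optimization_steps),
                                 t_total=num_train_optimization_steps)
# then call scheduler.step() after each optimizer.step()
```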
"url": "https://api.github.com/repos/huggingface/transformers/issues/830/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/830/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/829 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/829/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/829/comments | https://api.github.com/repos/huggingface/transformers/issues/829/events | https://github.com/huggingface/transformers/issues/829 | 470,012,887 | MDU6SXNzdWU0NzAwMTI4ODc= | 829 | RoBERTa support | {
"login": "sleepinyourhat",
"id": 1284441,
"node_id": "MDQ6VXNlcjEyODQ0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1284441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sleepinyourhat",
"html_url": "https://github.com/sleepinyourhat",
"followers_url": "https://api.github.com/users/sleepinyourhat/followers",
"following_url": "https://api.github.com/users/sleepinyourhat/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepinyourhat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sleepinyourhat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepinyourhat/subscriptions",
"organizations_url": "https://api.github.com/users/sleepinyourhat/orgs",
"repos_url": "https://api.github.com/users/sleepinyourhat/repos",
"events_url": "https://api.github.com/users/sleepinyourhat/events{/privacy}",
"received_events_url": "https://api.github.com/users/sleepinyourhat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Working on the code/paper release as we speak :) It largely follows the existing masked_lm implementation in fairseq. Happy to help get this integrated here.",
"Hi @myleott great news :) I'm really excited about the release 🤗 I've some questions: do you plan to perform any comparisons between RoBERTa and BERT on NER (CoNLL-2003)? \r\n\r\nI've read the [Cloze-driven Pretraining of Self-attention Networks](https://arxiv.org/abs/1903.07785) paper, and if I recall correctly, the implementation is currently done in the `bi_trans_lm` branch in `fairseq`, but do you have any updates on that? It would be awesome if a pre-trained CNN model from that paper could also be integrated into `pytorch-transformers` 😍",
"Sounds great @myleott. Keep us updated about the release!",
"Models and README are uploaded: https://github.com/pytorch/fairseq/tree/master/examples/roberta. We submitted the paper to arXiv today so it should be out Sunday evening.\r\n\r\n> I've some questions: do you plan to perform any comparisons between RoBERTa and BERT on NER (CoNLL-2003)?\r\n\r\nWe haven't yet, but it would be interesting to explore. RoBERTa was trained on considerably more data than BERT, so I expect it would do well on NER tasks.",
"Paper is [out](https://arxiv.org/abs/1907.11692). Thanks @myleott!",
"Work in progress in #964 feel free to chime in :)",
"Example of how RoBERTa can be used to predict a masked token.\r\n\r\nimport torch\r\nfrom pytorch_transformers import RobertaTokenizer, RobertaForMaskedLM\r\n\r\ntokenizer = RobertaTokenizer.from_pretrained('roberta-large')\r\nmodel = RobertaForMaskedLM.from_pretrained('roberta-large')\r\nmodel.eval()\r\n\r\nif torch.cuda.is_available(): model.to('cuda') #if we have a GPU\r\ntext = 'I believe my sister is <mask<m>> because she eats a lot of vegetables .'\r\n\r\ntokenized_text = tokenizer.tokenize(text)\r\nmasked_index = tokenized_text.index(<mask<m>>)+1\r\n\r\n#add_special_tokens adds a <s<s>> to the beginning and </s</s>> to the end of the text\r\ninput_ids = torch.tensor(tokenizer.encode(text,add_special_tokens=True)).unsqueeze(0) \r\ninput_ids_tensor = input_ids.to(\"cuda\")\r\n\r\n#with torch.no_grad():\r\noutputs = model(input_ids_tensor, masked_lm_labels=input_ids_tensor)\r\nloss, prediction_scores = outputs[:2]\r\n\r\n#predicted_index = torch.argmax(prediction_scores[0, masked_index]).item()\r\n#predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]\r\n\r\npredicted_k_indexes = torch.topk(prediction_scores[0, masked_index],k=20)\r\npredicted_logits_list = predicted_k_indexes[0] \r\npredicted_indexes_list = predicted_k_indexes[1]\r\n\r\nfor i, item in enumerate(predicted_indexes_list):\r\n the_index = predicted_indexes_list[i].item()\r\n print(\"word and logits\",tokenizer.decode(the_index),predicted_logits_list[i].item())\r\n",
"Hi @pwolff, at first glance it looks ok to me. You don't need to send the `masked_lm_labels` if you don't use the loss though.",
"@thomwolf hello, I trained the robert on my customized corpus following the fairseq instruction. I am confused how to generate the robert vocab.json and also merge.txt because I want to use the pytorch-transformer RoBERTaTokenizer.",
"@stefan-it hello, I trained the robert on my customized corpus following the fairseq instruction. I am confused how to generate the robert vocab.json and also merge.txt because I want to use the pytorch-transformer RoBERTaTokenizer.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"@songtaoshi I think this can be done via `subword-nmt`, see this note:\r\n\r\nhttps://github.com/pytorch/fairseq/issues/1163#issuecomment-534098220\r\n\r\n",
"is this still an issue?",
"Nope, RoBERTa support was shipped in [v1.1.0](https://github.com/huggingface/transformers/releases/tag/1.1.0)\r\n\r\nThanks all!"
] | 1,563 | 1,575 | 1,575 | NONE | null | https://twitter.com/sleepinyourhat/status/1151940994688016384
The code/parameters aren't out yet, but I figure it couldn't hurt to put in an obnoxious feature request now! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/829/reactions",
"total_count": 29,
"+1": 10,
"-1": 0,
"laugh": 12,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/829/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/828 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/828/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/828/comments | https://api.github.com/repos/huggingface/transformers/issues/828/events | https://github.com/huggingface/transformers/issues/828 | 469,978,335 | MDU6SXNzdWU0Njk5NzgzMzU= | 828 | CUDA error: invalid configuration argument when not using DataParallel | {
"login": "Phirefly9",
"id": 16687050,
"node_id": "MDQ6VXNlcjE2Njg3MDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/16687050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Phirefly9",
"html_url": "https://github.com/Phirefly9",
"followers_url": "https://api.github.com/users/Phirefly9/followers",
"following_url": "https://api.github.com/users/Phirefly9/following{/other_user}",
"gists_url": "https://api.github.com/users/Phirefly9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Phirefly9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Phirefly9/subscriptions",
"organizations_url": "https://api.github.com/users/Phirefly9/orgs",
"repos_url": "https://api.github.com/users/Phirefly9/repos",
"events_url": "https://api.github.com/users/Phirefly9/events{/privacy}",
"received_events_url": "https://api.github.com/users/Phirefly9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Further testing showed this was caused by the batch size being too high and the card running out of memory, and providing a misleading error."
] | 1,563 | 1,563 | 1,563 | NONE | null | Good Evening,
We have a DGX2 system running the latest Nvidia pytorch docker container - 19.06. When attempting to use the gpt2 or gpt2-medium models to extract embeddings, we get the following error, but only when not using DataParallel (note: we are using apex here at optimization level O0, and the issue also occurs without apex):
```
[...]
File "[removed]", line 167, in forward
x1_emb, past = self.embedding(x1)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 494, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_gpt2.py", line 515, in forward
outputs = block(hidden_states, layer_past, head_mask[i])
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 494, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_gpt2.py", line 332, in forward
output_attn = self.attn(self.ln_1(x), layer_past=layer_past, head_mask=head_mask)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 494, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_gpt2.py", line 285, in forward
x = self.c_attn(x)
File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 494, in __call__
result = self.forward(*input, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/pytorch_transformers/modeling_utils.py", line 490, in forward
x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
RuntimeError: CUDA error: invalid configuration argument
```
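For context, a stripped-down sketch of the kind of embedding extraction that triggers this — not our actual module from the traceback, just the minimal public-API equivalent (`gpt2` here stands in for whichever checkpoint we load):

```python
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2').to('cuda').eval()

input_ids = torch.tensor([tokenizer.encode("Hello world")]).to('cuda')
with torch.no_grad():
    outputs = model(input_ids)
hidden_states = outputs[0]  # (batch, seq_len, hidden_size) embeddings
```

Shrinking the batch size (or sequence length) is worth trying first, since this error can be an out-of-memory condition in disguise.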
I found reference to this issue from Pytorch:
https://github.com/pytorch/pytorch/issues/2080
but it is reported as fixed so I'm not sure if this issue belongs to this repo or pytorch. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/828/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/827 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/827/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/827/comments | https://api.github.com/repos/huggingface/transformers/issues/827/events | https://github.com/huggingface/transformers/issues/827 | 469,947,330 | MDU6SXNzdWU0Njk5NDczMzA= | 827 | xlnet input_mask and attention_mask type error | {
"login": "Saner3",
"id": 30628796,
"node_id": "MDQ6VXNlcjMwNjI4Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/30628796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saner3",
"html_url": "https://github.com/Saner3",
"followers_url": "https://api.github.com/users/Saner3/followers",
"following_url": "https://api.github.com/users/Saner3/following{/other_user}",
"gists_url": "https://api.github.com/users/Saner3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saner3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saner3/subscriptions",
"organizations_url": "https://api.github.com/users/Saner3/orgs",
"repos_url": "https://api.github.com/users/Saner3/repos",
"events_url": "https://api.github.com/users/Saner3/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saner3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Humm you are right, the docstrings are off, it would be more clear if they were all indicated as `torch.FloatTensor` (even though officially torch.Tensor is an alias for the default tensor type (torch.FloatTensor))."
] | 1,563 | 1,563 | 1,563 | NONE | null | When I use:
```python
input_mask = (input_ids == 0)
perm_mask = torch.zeros((1, input_ids.shape[1], input_ids.shape[1]), dtype=torch.float, device=device)
perm_mask[:, :, -1] = 1.0  # Previous tokens don't see last token
```
File "pytorch-transformers/pytorch_transformers/modeling_xlnet.py", line 881, in forward
data_mask = input_mask[None] + perm_mask
RuntimeError: expected backend CUDA and dtype Float but got backend CUDA and dtype Byte
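A minimal workaround sketch, assuming the root cause is just this Byte/Float dtype mismatch, is to build the mask as a float tensor up front:

```python
# build input_mask as a FloatTensor (1.0 at padded positions) so that
# modeling_xlnet.py can add it to the float perm_mask without a dtype clash
input_mask = (input_ids == 0).float()
```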
The mismatch: input_mask here ends up a ByteTensor (the result of the `==` comparison) while perm_mask is a FloatTensor; casting the mask to float, as sketched above, avoids the error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/827/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/826 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/826/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/826/comments | https://api.github.com/repos/huggingface/transformers/issues/826/events | https://github.com/huggingface/transformers/issues/826 | 469,887,826 | MDU6SXNzdWU0Njk4ODc4MjY= | 826 | Providing older documentation | {
"login": "Ricocotam",
"id": 9447752,
"node_id": "MDQ6VXNlcjk0NDc3NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9447752?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ricocotam",
"html_url": "https://github.com/Ricocotam",
"followers_url": "https://api.github.com/users/Ricocotam/followers",
"following_url": "https://api.github.com/users/Ricocotam/following{/other_user}",
"gists_url": "https://api.github.com/users/Ricocotam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ricocotam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ricocotam/subscriptions",
"organizations_url": "https://api.github.com/users/Ricocotam/orgs",
"repos_url": "https://api.github.com/users/Ricocotam/repos",
"events_url": "https://api.github.com/users/Ricocotam/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ricocotam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, you may go to https://github.com/huggingface/pytorch-transformers/releases, select the release you are working with and in its \"Assets\" download the repo and navigate the code, together with documentation",
"Hi, here is the older documentation: https://github.com/huggingface/pytorch-transformers/tree/v0.6.2"
] | 1,563 | 1,563 | 1,563 | NONE | null | Hey, would it be possible to release the previous documentation? I'm working with a previous version and can't find the proper docs right now.
Thanks if you can help | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/826/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/825 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/825/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/825/comments | https://api.github.com/repos/huggingface/transformers/issues/825/events | https://github.com/huggingface/transformers/issues/825 | 469,877,928 | MDU6SXNzdWU0Njk4Nzc5Mjg= | 825 | Chinese BERT broken probably after `pytorch-transformer` release | {
"login": "leemengtw",
"id": 3454980,
"node_id": "MDQ6VXNlcjM0NTQ5ODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3454980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leemengtw",
"html_url": "https://github.com/leemengtw",
"followers_url": "https://api.github.com/users/leemengtw/followers",
"following_url": "https://api.github.com/users/leemengtw/following{/other_user}",
"gists_url": "https://api.github.com/users/leemengtw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leemengtw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leemengtw/subscriptions",
"organizations_url": "https://api.github.com/users/leemengtw/orgs",
"repos_url": "https://api.github.com/users/leemengtw/repos",
"events_url": "https://api.github.com/users/leemengtw/events{/privacy}",
"received_events_url": "https://api.github.com/users/leemengtw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"We did slightly change the way the tokenizer strip the spaces at the end of the words when loading the tokenizer, as discussed in issue #328, in particular here https://github.com/huggingface/pytorch-transformers/issues/328#issuecomment-503630929.\r\nNow, I'm not exactly sure what is the right solution for both cases. I'll give a look, but I won't have time right now. If you want to investigate the source of the issue and compare with #328, it can help.",
"@thomwolf Thanks for the suggestion. After some twists, I can reproduce the desired result (though it's very hacky and we should come up with a better solution)\r\n\r\nI used the previous version of `load_vocb` function to regenerate vocabulary, and it did reproduce the desired vocab:\r\n\r\nhttps://github.com/huggingface/pytorch-transformers/blob/3763f8944dc3fef8afb0c525a2ced8a04889c14f/pytorch_pretrained_bert/tokenization.py#L56\r\n\r\n```python\r\nimport collections\r\n\r\n# previous version of `load_voacb`\r\ndef load_vocab(vocab_file):\r\n \"\"\"Loads a vocabulary file into a dictionary.\"\"\"\r\n vocab = collections.OrderedDict()\r\n index = 0\r\n with open(vocab_file, \"r\", encoding=\"utf-8\") as reader:\r\n while True:\r\n token = reader.readline()\r\n if not token:\r\n break\r\n token = token.strip()\r\n vocab[token] = index\r\n index += 1\r\n return vocab\r\n\r\n# get the vocab file to regenerate vocb\r\n!wget https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt\r\n\r\n# first load the latest version tokenizer and overwrite the vocab by previous version of `load_vocab`\r\ntokenizer = torch.hub.load(GITHUB_REPO, 'bertTokenizer', \"bert-base-chinese\")\r\ntokenizer.vocab = load_vocab(\"bert-base-chinese-vocab.txt\")\r\n\r\n# get the desired result as previously\r\nindices = list(range(647, 657))\r\nsome_pairs = [(t, idx) for t, idx in vocab.items() if idx in indices]\r\nfor pair in some_pairs:\r\n print(pair)\r\n```\r\n\r\n\r\n\r\n(left is the desired result acquired by using old `load_vab` function)\r\n\r\nThe vocab size is the same, but it seems that current / previous vocab index is kind of **offset by 1** so after getting the prediction (using twisted `tokenizer`) from the model, I have to **first add 1** to all predicted tokens and then convert them back to tokens:\r\n\r\n```python \r\n# predict masked token\r\nmaskedLM_model = torch.hub.load(GITHUB_REPO, \r\n 'bertForMaskedLM', \r\n \"bert-base-chinese\")\r\n\r\nmaskedLM_model.eval()\r\nwith torch.no_grad():\r\n outputs = maskedLM_model(tokens_tensor, segments_tensors)\r\n predictions = outputs[0]\r\n\r\nprobs, indices = torch.topk(torch.softmax(predictions[0, masked_index], -1), k)\r\n\r\n# HACKY HOTFIX HERE\r\nindices += 1 \r\n\r\n# correct result\r\npredicted_tokens = tokenizer.convert_ids_to_tokens(indices.tolist())\r\n```\r\n\r\nIn sum, by:\r\n- use previous `load_vocab` function\r\n- add 1 to model output\r\n\r\nI can reproduce the same correct result as before in this maskLM scenario. But of course, this is very hacky. We need a better solution.\r\n",
"At the line 344 of bert-base-chinese vocab file the token is '\\u2028', which is an unicode line separator.\r\nI think using 'token = reader.readlines()' instead of 'token = reader.read().splitlines()' might solve the problem.",
"Have submitted a PR for this: https://github.com/huggingface/pytorch-transformers/pull/860",
"Great, thanks for investigating deeper @Yiqing-Zhou and @leemengtaiwan!",
"Thank you guys @Yiqing-Zhou and @thomwolf!\r\n\r\nI have used the latest version of Chinese BERT and it seems that the vocab and accuracy of my downstream task work perfectly now. :)",
"Hi, @leemengtaiwan What kind of dataset are you running on?\r\nI get the same issue even when I update to the latest version.\r\nThe same issue is also mention in #903 \r\n\r\nI'm running on Chinese-Style SQuAD dataset (DRCD).\r\nI can train Chinese-Bert successfully about half year ago.\r\nHowever, I could not train the model successfully but I can train Multi-Bert successfully.\r\n\r\n@thomwolf Do you update Chinese-Bert recently? or there are still some bugs in preprocess step?",
"> Hi, @leemengtaiwan What kind of dataset are you running on?\r\n\r\n@Liangtaiwan I'm using custom dataset (to be more specific, [WSDM Fake News Classification](https://www.kaggle.com/c/fake-news-pair-classification-challenge/) on Kaggle).\r\n\r\nThe updated version seems to work fine for me, but if you still encounter some issues, maybe you can create a separate issue and reference this issue if needed.\r\n"
] | 1,563 | 1,564 | 1,564 | NONE | null | I suspect that there is some recent code change that breaks the Chinese BERT.
I used the following PyTorch hub code to load the Chinese BERT tokenizer and print out some tokens in the vocab perhaps just a few days ago and everything was fine:
```python
import torch
GITHUB_REPO = "huggingface/pytorch-pretrained-BERT"
tokenizer = torch.hub.load(GITHUB_REPO, 'bertTokenizer', "bert-base-chinese")
# print some pre-determined tokens with their corresponding indices
indices = list(range(647, 657))
some_pairs = [(t, idx) for t, idx in tokenizer.vocab.items() if idx in indices]
for pair in some_pairs:
print(pair)
```
It used to produce the following result:

But after some recent commits, or maybe the latest release, the vocab result changed slightly even with the same code:

This difference should not happen since we're using the exact same model and code. And the following maskedLM task failed to predict the masked token accordingly and produced a broken result (which used to predict the correct result just a few days ago).
I already tried replacing `pytorch-pretrained-BERT` with `pytorch-transformers` but it still doesn't work.
I also tried to use the tokenizer directly from the repo and it didn't work either.
```python
from pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
```
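A quick way to check whether the id-to-token mapping has shifted — this only assumes the tokenizer's public `convert_ids_to_tokens` method:

```python
# if the vocab file was split on a stray line separator (see the comments
# above about '\u2028' at line 344), every later id maps to the previous
# token, so this printout will look off by one
print(tokenizer.convert_ids_to_tokens(list(range(647, 657))))
```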
Please kindly provide some guidance or suggestions on how to fix this problem. Chinese BERT may not be functioning as expected now. Thanks in advance.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/825/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/825/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/824 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/824/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/824/comments | https://api.github.com/repos/huggingface/transformers/issues/824/events | https://github.com/huggingface/transformers/issues/824 | 469,862,479 | MDU6SXNzdWU0Njk4NjI0Nzk= | 824 | Bertology example is probably broken | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes, this example is still work in progress. Hopefully, I can finish it before ACL (but not sure).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | CONTRIBUTOR | null | Hello!
I tried to run `run_bertology.py` in the examples dir, calling it with:
```
export TASK_NAME=CoLA
python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME \
--model_name bert-base-uncased \
--task_name $TASK_NAME \
--max_seq_length 128 \
--output_dir ./tmp/$TASK_NAME/ \
--try_masking \
--metric_name mcc
```
But it fails with
> Traceback (most recent call last):
> File "./run_bertology.py", line 346, in <module>
> main()
> File "./run_bertology.py", line 327, in main
> eval_data = load_and_cache_examples(args, args.task_name, tokenizer, evaluate=True)
> File "/data/home/dfiocco/BERT/run_glue.py", line 245, in load_and_cache_examples
> list(filter(None, args.model_name_or_path.split('/'))).pop(),
> AttributeError: 'Namespace' object has no attribute 'model_name_or_path'
One fix to that problem should be (?) replacing all occurrences of `model_name` with `model_name_or_path` in `run_bertology.py`. Still, even with that "patch", running the code gives
> Traceback (most recent call last):
> File "./run_bertology.py", line 346, in <module>
> main()
> File "./run_bertology.py", line 341, in main
> head_mask = mask_heads(args, model, eval_dataloader)
> File "./run_bertology.py", line 175, in mask_heads
> print_2d_tensor(head_mask)
> UnboundLocalError: local variable 'head_mask' referenced before assignment
Trying another task (MRPC), I get instead:
>
> Traceback (most recent call last):
> File "./run_bertology.py", line 346, in <module>
> main()
> File "./run_bertology.py", line 341, in main
> head_mask = mask_heads(args, model, eval_dataloader)
> File "./run_bertology.py", line 169, in mask_heads
> _, head_importance, preds, labels = compute_heads_importance(args, model, eval_dataloader, compute_entropy=False, head_mask=new_head_mask)
> File "./run_bertology.py", line 97, in compute_heads_importance
> head_importance += head_mask.grad.abs().detach()
> AttributeError: 'NoneType' object has no attribute 'abs'
Did anybody manage to run the Bertology example without hiccups?
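For reference, the patch I tried for the first failure is just the flag rename described above — a sketch, not a verified fix:

```python
# in run_bertology.py, align the CLI flag with run_glue.py, whose
# load_and_cache_examples reads args.model_name_or_path
parser.add_argument("--model_name_or_path", default=None, type=str, required=True,
                    help="Path to pretrained model or shortcut name")
```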
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/824/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/823 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/823/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/823/comments | https://api.github.com/repos/huggingface/transformers/issues/823/events | https://github.com/huggingface/transformers/issues/823 | 469,832,638 | MDU6SXNzdWU0Njk4MzI2Mzg= | 823 | Updating simple_lm_finetuning.py for FP16 training | {
"login": "crowegian",
"id": 14296792,
"node_id": "MDQ6VXNlcjE0Mjk2Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/14296792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crowegian",
"html_url": "https://github.com/crowegian",
"followers_url": "https://api.github.com/users/crowegian/followers",
"following_url": "https://api.github.com/users/crowegian/following{/other_user}",
"gists_url": "https://api.github.com/users/crowegian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crowegian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crowegian/subscriptions",
"organizations_url": "https://api.github.com/users/crowegian/orgs",
"repos_url": "https://api.github.com/users/crowegian/repos",
"events_url": "https://api.github.com/users/crowegian/events{/privacy}",
"received_events_url": "https://api.github.com/users/crowegian/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi, has this been fixed? I've tried updating my language modeling script to match but still getting errors.",
"Having the same problem at the moment ...",
"I guess the preferred way is to use `apex.amp` like in this example?\r\nhttps://github.com/huggingface/pytorch-transformers/blob/master/examples/run_glue.py",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,570 | 1,570 | NONE | null | In simple_lm_finetuning, the recently updated code doesn't work with the old optimizer specifications.
When not running with --fp16, the following change
```python
optimizer = BertAdam(optimizer_grouped_parameters,
lr=args.learning_rate,
warmup=args.warmup_proportion,
t_total=num_train_optimization_steps)
# In PyTorch-Transformers, optimizer and schedules are split and instantiated like this:
optimizer = AdamW(model.parameters(), lr=args.learning_rate, correct_bias=False) # To reproduce BertAdam specific behavior set correct_bias=False
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=num_warmup_steps, t_total=num_train_optimization_steps)
```
fixes the problem as suggested.
But when running with --fp16,
`scheduler = WarmupLinearSchedule(optimizer, warmup_steps=num_warmup_steps, t_total=num_train_optimization_steps) # PyTorch scheduler`
will error out with FP16_Optimizer, saying:
> TypeError: FP16_Optimizer is not an Optimizer
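One possible route is a sketch that mirrors what the newer `run_glue.py` example does with `apex.amp` instead of the old `FP16_Optimizer` wrapper; `model`, `args.learning_rate`, `num_warmup_steps`, `num_train_optimization_steps` and `loss` are placeholders from the surrounding script:

```python
from apex import amp
from pytorch_transformers import AdamW, WarmupLinearSchedule

optimizer = AdamW(model.parameters(), lr=args.learning_rate, correct_bias=False)
# amp wraps the model/optimizer in place of FP16_Optimizer, so the scheduler
# still sees a plain torch Optimizer and no longer raises the TypeError
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=num_warmup_steps,
                                 t_total=num_train_optimization_steps)

# inside the training loop:
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
scheduler.step()
```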
Does the FusedAdam object need to be passed into WarmupLinearSchedule instead? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/823/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/823/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/822 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/822/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/822/comments | https://api.github.com/repos/huggingface/transformers/issues/822/events | https://github.com/huggingface/transformers/issues/822 | 469,791,932 | MDU6SXNzdWU0Njk3OTE5MzI= | 822 | XLNet-large-cased on Squad 2.0: can't replicate results | {
"login": "avisil",
"id": 43005718,
"node_id": "MDQ6VXNlcjQzMDA1NzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/43005718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avisil",
"html_url": "https://github.com/avisil",
"followers_url": "https://api.github.com/users/avisil/followers",
"following_url": "https://api.github.com/users/avisil/following{/other_user}",
"gists_url": "https://api.github.com/users/avisil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avisil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avisil/subscriptions",
"organizations_url": "https://api.github.com/users/avisil/orgs",
"repos_url": "https://api.github.com/users/avisil/repos",
"events_url": "https://api.github.com/users/avisil/events{/privacy}",
"received_events_url": "https://api.github.com/users/avisil/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This is similar to what the authors ran in the paper (except I could fit only this on 3 v100 GPUs):\r\n\r\n`python run_squad.py --do_lower_case --do_train --do_eval --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --output_dir $SQUAD_DIR/output --version_2_with_negative --model_name xlnet-large-cased --save_steps 5000 --num_train_epochs 3 --overwrite_output_dir --model_type xlnet --per_gpu_train_batch_size 2 --gradient_accumulation_steps 1 --max_seq_length 512 --max_answer_length 64 --adam_epsilon 1e-6 --learning_rate 3e-5 --num_train_epochs 2`\r\n\r\ngives: \r\n\r\n`07/18/2019 06:20:54 - INFO - __main__ - Results: {'exact': 2.0382380190347846, 'f1': 6.232918462554391, 'total': 11873, 'HasAns_exact': 3.9979757085020244, 'HasAns_f1': 12.399365874815837, 'HasAns_total': 5928, 'NoAns_exact': 0.08410428931875526, 'NoAns_f1': 0.08410428931875526, 'NoAns_total': 5945, 'best_exact': 50.07159100480081, 'best_exact_thresh': 0.0, 'best_f1': 50.07159100480081, 'best_f1_thresh': 0.0}`\r\n",
"@thomwolf are you already working on this? I can work with you to try to solve it :) ",
"with the same question... Also got weird results on other QA datasets like BoolQ, MultiRC.",
"@avisil not yet, I won't have time to work on this before ACL but you can start to have a look if you want. Such discrepancies pretty much always come, not from the model it-self but, from different settings for pre/post-processing the dataset or for the optimizer/optimization process.\r\n\r\nIf you want to start giving it a look, the way I usually check exact reproducibility on downstream tasks like GLUE/SQuAD is to directly import the pytorch-transformer's model in the tensorflow code (that's the main reason the library is python 2 compatible), load the pytorch model with the initialized tf model and run the models side by side on the same inputs (on separate GPUs) to check-in details the inputs/outputs/hidden-states and so-on. It's better to do it on a GPU version of the TF code so you can setup the optimizer your-self. I think somebody did a GPU version of the official SQuAD example, but you can also take inspiration from the multi-GPU adaptation I did of the TensorFlow code for GLUE, which is here: https://github.com/thomwolf/xlnet/blob/master/run_classifier_gpu.py.\r\nIn this fork, you can see how I import and run the PyTorch model along the TensorFlow one side by side.\r\n\r\nIn the case of SQuAD, I already know that they are a few differences which should be fixed:\r\n- the pre-processing of the dataset is not exactly the same (parsing and tokenization logic is a lot more complex in the XLNet repo),\r\n- XLNet was trained using discriminative learning (progressively decreasing learning rate along with the depth of the model).",
"I found similar problem on GLEU dataset.\r\n\r\nWith the command:\r\npython run_glue.py --data_dir=./glue_data/SST-2 --model_type=xlnet --task_name=sst-2 --output_dir=./xlnet_glue --model_name_or_path=xlnet-base-cased --do_train --evaluate_during_training\r\n\r\nThe final result of SST-2 is only 0.836, which is way lower than the current SoTA.\r\n\r\nDoes anyone have a clue how to solve it?",
"@ntubertchen good parameters for SST-2 are in the (adequately titled) issue #795 ",
"I encountered similar problem with bert-large models. No luck yet.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"looks like xlnet for squad 2.0 is broken:\r\n```\r\npython run_squad.py --version_2_with_negative --cache_dir ${CACHE_DIR} \\\r\n--model_type xlnet --model_name_or_path xlnet-large-cased \\\r\n--do_train --train_file $SQUAD_DIR/train-v2.0.json \\\r\n--do_eval --predict_file $SQUAD_DIR/dev-v2.0.json \\\r\n--gradient_accumulation_steps 4 --overwrite_output_dir \\\r\n--learning_rate \"3e-5\" --num_train_epochs 2 --max_seq_length 512 --doc_stride 128 \\\r\n--output_dir $SQUAD_DIR/output/\" \\\r\n--fp16 --fp16_opt_level \"O2\" --per_gpu_train_batch_size 8 \\\r\n--per_gpu_eval_batch_size 8 --weight_decay=0.00 --save_steps 20000 --adam_epsilon 1e-6\r\n```\r\n\r\ngives:\r\n```\r\nEpoch: 0%| | 0/2 [00:00<?, ?it/s]\r\n\r\nIteration: 0%| | 0/16343 [00:00<?, ?it/s]\u001b[ATraceback (most recent call last):\r\n File \"examples/run_squad.py\", line 830, in <module>\r\n main()\r\n File \"examples/run_squad.py\", line 769, in main\r\n global_step, tr_loss = train(args, train_dataset, model, tokenizer)\r\n File \"examples/run_squad.py\", line 221, in train\r\n inputs.update({\"is_impossible\": batch[7]})\r\nIndexError: tuple index out of range\r\n```\r\nI added `is_impossible` to the features and dataloader, but the result was very low:\r\n```\r\n{'exact': 44.5717173418681, 'f1': 44.82239308319654, 'total': 11873, 'HasAns_exact': 0.0, 'HasAns_f1': 0.5020703570837503, 'HasAns_total': 5928, 'NoAns_exact': 89.01597981497056, 'NoAns_f1': 89.01597981497056, 'NoAns_total': 5945, 'best_exact': 50.07159100480081, 'best_exact_thresh': 0.0, 'best_f1': 50.07159100480081, 'best_f1_thresh': 0.0}\r\n```",
"Thanks for reporting the bug @panl2015, should have been fixed with 073219b.",
"Thanks @LysandreJik ! I think that's how I fixed it locally to make it run but got the low result. Maybe I should try with your version to make sure I don't have other changes."
] | 1,563 | 1,579 | 1,572 | NONE | null | I've been trying to replicate the numbers on the SQuAD 2.0 dev set (F1=86) with this script and the XLNet embeddings. So far the results are really off. (Opening a new issue, as the previous one seems dedicated to SST-2.)
`python run_squad.py --do_lower_case --do_train --do_eval --train_file $SQUAD_DIR/train-v2.0.json --predict_file $SQUAD_DIR/dev-v2.0.json --output_dir $SQUAD_DIR/output --version_2_with_negative --model_name xlnet-large-cased --save_steps 5000 --num_train_epochs 3 --overwrite_output_dir --model_type xlnet --per_gpu_train_batch_size 4 --gradient_accumulation_steps 1 --learning_rate 3e-5`
gives:
`07/18/2019 08:43:36 - INFO - __main__ - Results: {'exact': 3.217383980459867, 'f1': 7.001376535240158, 'total': 11873, 'HasAns_exact': 6.359649122807017, 'HasAns_f1': 13.938485762973412, 'HasAns_total': 5928, 'NoAns_exact': 0.08410428931875526, 'NoAns_f1': 0.08410428931875526, 'NoAns_total': 5945, 'best_exact': 50.07159100480081, 'best_exact_thresh': 0.0, 'best_f1': 50.07159100480081, 'best_f1_thresh': 0.0}`
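One of the known gaps (see the optimizer discussion in the comments above) is the paper's layer-wise learning-rate decay; a rough sketch of grouping parameters that way — the 0.75 decay factor and the name parsing are illustrative, not the authors' exact scheme:

```python
from pytorch_transformers import AdamW

# give deeper layers the full base lr and decay it toward the embeddings
decay = 0.75
grouped_parameters = []
for name, param in model.named_parameters():
    depth = 0
    if 'layer.' in name:  # e.g. transformer.layer.11.rel_attn.q
        depth = int(name.split('layer.')[1].split('.')[0]) + 1
    lr = args.learning_rate * (decay ** (model.config.n_layer - depth))
    grouped_parameters.append({'params': [param], 'lr': lr})
optimizer = AdamW(grouped_parameters, eps=args.adam_epsilon)
```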
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/822/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/821 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/821/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/821/comments | https://api.github.com/repos/huggingface/transformers/issues/821/events | https://github.com/huggingface/transformers/issues/821 | 469,731,350 | MDU6SXNzdWU0Njk3MzEzNTA= | 821 | Couldn't reach server | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"If I click on the links you provided above, they are currently reachable for me...\r\nIt may be a silly suggestions, but could it be that your internet connection was momentarily down when the code tried download those files or somehow you are not allowed to reach data on s3?",
"I have an idea about it. We can download the file from local computer, and send the file to the location of pytorch_transformers, for example:\r\n/root/anaconda3/lib/python3.6/site-packages/pytorch_transformers\r\nAfter that, we need to modify the modeling_bert.py in this folder:\r\nhttps://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_bert.py\r\n",
"Are you behind a proxy maybe?\r\n\r\nYou can now give proxies parameters to the `from_pretrained` methods e.g.:\r\n```python\r\nproxies = {\r\n \"http\": \"http://10.10.1.10:3128\",\r\n \"https\": \"https://10.10.1.10:1080\",\r\n}\r\nmodel = BertModel.from_pretrained('bert-base-uncased', proxies=proxies)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Can you try installing `pyopenssl` using this command. \r\n`pip install pyopenssl`\r\nThis worked for me. I guess the requests library is unable to establish an SSL connection, due to which the downloads are failing. Installing `pyopenssl` should solve the problem.\r\n",
"Hello, I am also facing this same error when running on AWS Lambda: \r\n`module initialization error: Couldn't reach server at '{}' to download vocabulary files.`\r\n\r\nI've added proxies and installed pyopenssl as suggested by @thomwolf and @saradhix. It doesn't solve this issue. \r\n\r\nDo you have any further ideas please? \r\n\r\nI am calling the \"fill-mask\" pipeline with \"camembert-base\". \r\n\r\nThank you!",
"> Can you try installing `pyopenssl` using this command.\r\n> `pip install pyopenssl`\r\n> This worked for me. I guess the requests library is unable to establish an SSL connection, due to which the downloads are failing. Installing `pyopenssl` should solve the problem.\r\n\r\n@saradhix what are the changes you've made on the source code to use `pyopenssl` instead of `requests`? \r\nThanks!",
"@ZiedHY I made no changes other than installing the `pyopenssl` package. I guess the `requests` module might internally use the `pyopenssl` for making secure connections.",
"> Hello, I am also facing this same error when running on AWS Lambda:\r\n> `module initialization error: Couldn't reach server at '{}' to download vocabulary files.`\r\n> \r\n> I've added proxies and installed pyopenssl as suggested by @thomwolf and @saradhix. It doesn't solve this issue.\r\n> \r\n> Do you have any further ideas please?\r\n> \r\n> I am calling the \"fill-mask\" pipeline with \"camembert-base\".\r\n> \r\n> Thank you!\r\n\r\nThis seems to be a different problem. Why is the server url not getting printed in the error message?\r\n`Couldn't reach server at '{}'`\r\nSee the error message posted by @rabeehk at the top, which contains the full url. Your issue seems to be different.",
"Thank you @saradhix. You're right. It comes closer to this [issue](https://github.com/huggingface/transformers/issues/2116). "
] | 1,563 | 1,593 | 1,572 | NONE | null | Hi, I am running the very first example in the README. I got these errors; thanks for your help:
Couldn't reach server to download vocabulary.
Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json' to download pretrained model configuration file.
Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-pytorch_model.bin' to download pretrained weights.
Traceback (most recent call last):
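For anyone reproducing: a minimal sketch of the README's first example that triggers these downloads — both `from_pretrained` calls hit the S3 URLs above and fail the same way when the connection is blocked:

```python
from pytorch_transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')  # downloads the vocab
model = BertModel.from_pretrained('bert-base-uncased')          # downloads config + weights
```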
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/821/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/820 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/820/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/820/comments | https://api.github.com/repos/huggingface/transformers/issues/820/events | https://github.com/huggingface/transformers/issues/820 | 469,692,641 | MDU6SXNzdWU0Njk2OTI2NDE= | 820 | RuntimeError: Creating MTGP constants failed | {
"login": "dkarmon",
"id": 669552,
"node_id": "MDQ6VXNlcjY2OTU1Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/669552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkarmon",
"html_url": "https://github.com/dkarmon",
"followers_url": "https://api.github.com/users/dkarmon/followers",
"following_url": "https://api.github.com/users/dkarmon/following{/other_user}",
"gists_url": "https://api.github.com/users/dkarmon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkarmon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkarmon/subscriptions",
"organizations_url": "https://api.github.com/users/dkarmon/orgs",
"repos_url": "https://api.github.com/users/dkarmon/repos",
"events_url": "https://api.github.com/users/dkarmon/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkarmon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Not sure this comes from pytorch-transformers or CUDA, see: https://github.com/pytorch/pytorch/issues/20489",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,573 | 1,573 | NONE | null | Hi,
I successfully fine-tuned a BertForTokenClassification model based on bert-base-cased in the past. However, I now encounter the following error (see full stack below):
```
RuntimeError: Creating MTGP constants failed. at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorRandom.cu:33
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered (insert_events at /opt/conda/conda-bld/pytorch_1556653099582/work/c10/cuda/CUDACachingAllocator.cpp:564)
```
I cannot seem to track the source of the problem...
I made sure the sequence length is << 512 as required and defined in the bert config.
Please advise
**CUDA = 10, Torch = 1.0.0**
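Since the assertion below fires inside an embedding index-select, a likely cause is an out-of-range id (a token id beyond the vocabulary size, or a position beyond the model's limit). A debugging sketch — `input_ids` and `labels` stand in for the failing batch:

```python
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # set before any CUDA work: makes the failing kernel report synchronously

# alternatively, run the same batch on CPU, where a bad index raises a
# readable IndexError instead of an asynchronous device-side assert
model_cpu = model.cpu()
outputs = model_cpu(input_ids.cpu(), labels=labels.cpu())
```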
```
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
[... the same 'srcIndex < srcSelectDimSize' assertion repeated for threads [2,0,0] through [42,0,0] of block [47,0,0] ...]
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [53,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [47,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "/tmp/train/named_entity_recognition/bert/train.py", line 288, in <module>
train(model, train_iter, optimizer, criterion, scheduler)
File "/tmp/train/named_entity_recognition/bert/train.py", line 69, in train
attention_mask=input_mask, labels=y)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
output = module(*input, **kwargs)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 1146, in forward
attention_mask=attention_mask, head_mask=head_mask)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 706, in forward
embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/pytorch_transformers/modeling_bert.py", line 270, in forward
embeddings = self.dropout(embeddings)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 58, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "/data/anaconda/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 830, in dropout
else _VF.dropout(input, p, training))
RuntimeError: Creating MTGP constants failed. at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/THC/THCTensorRandom.cu:33
terminate called after throwing an instance of 'c10::Error'
what(): CUDA error: device-side assert triggered (insert_events at /opt/conda/conda-bld/pytorch_1556653099582/work/c10/cuda/CUDACachingAllocator.cpp:564)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f16db6a1dc5 in /data/anaconda/envs/py36/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x14792 (0x7f16d9213792 in /data/anaconda/envs/py36/lib/python3.6/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::TensorImpl::release_resources() + 0x50 (0x7f16db691640 in /data/anaconda/envs/py36/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #3: <unknown function> + 0x3067fb (0x7f168947f7fb in /data/anaconda/envs/py36/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #4: <unknown function> + 0x13ff1b (0x7f16db9faf1b in /data/anaconda/envs/py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x3bf384 (0x7f16dbc7a384 in /data/anaconda/envs/py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #6: <unknown function> + 0x3bf3d1 (0x7f16dbc7a3d1 in /data/anaconda/envs/py36/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #7: <unknown function> + 0x1993cf (0x5590367ed3cf in /data/anaconda/envs/py36/bin/python)
frame #8: <unknown function> + 0xf1a07 (0x559036745a07 in /data/anaconda/envs/py36/bin/python)
frame #9: <unknown function> + 0xf1a07 (0x559036745a07 in /data/anaconda/envs/py36/bin/python)
frame #10: <unknown function> + 0xf12b7 (0x5590367452b7 in /data/anaconda/envs/py36/bin/python)
frame #11: <unknown function> + 0xf1147 (0x559036745147 in /data/anaconda/envs/py36/bin/python)
frame #12: <unknown function> + 0xf115d (0x55903674515d in /data/anaconda/envs/py36/bin/python)
frame #13: <unknown function> + 0xf115d (0x55903674515d in /data/anaconda/envs/py36/bin/python)
frame #14: <unknown function> + 0xf115d (0x55903674515d in /data/anaconda/envs/py36/bin/python)
frame #15: <unknown function> + 0xe3ba7 (0x559036737ba7 in /data/anaconda/envs/py36/bin/python)
frame #16: <unknown function> + 0x168ea2 (0x5590367bcea2 in /data/anaconda/envs/py36/bin/python)
frame #17: _PyGC_CollectNoFail + 0x2a (0x559036844cfa in /data/anaconda/envs/py36/bin/python)
frame #18: PyImport_Cleanup + 0x278 (0x5590367f78e8 in /data/anaconda/envs/py36/bin/python)
frame #19: Py_FinalizeEx + 0x61 (0x5590368635f1 in /data/anaconda/envs/py36/bin/python)
frame #20: Py_Main + 0x35e (0x55903686e1fe in /data/anaconda/envs/py36/bin/python)
frame #21: main + 0xee (0x55903673702e in /data/anaconda/envs/py36/bin/python)
frame #22: __libc_start_main + 0xf0 (0x7f16e0060830 in /lib/x86_64-linux-gnu/libc.so.6)
frame #23: <unknown function> + 0x1c3e0e (0x559036817e0e in /data/anaconda/envs/py36/bin/python)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/820/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/819 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/819/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/819/comments | https://api.github.com/repos/huggingface/transformers/issues/819/events | https://github.com/huggingface/transformers/issues/819 | 469,607,950 | MDU6SXNzdWU0Njk2MDc5NTA= | 819 | Output of BertModel does not match the last hidden layer from fixed feature vectors | {
"login": "sasaadi",
"id": 7882383,
"node_id": "MDQ6VXNlcjc4ODIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7882383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sasaadi",
"html_url": "https://github.com/sasaadi",
"followers_url": "https://api.github.com/users/sasaadi/followers",
"following_url": "https://api.github.com/users/sasaadi/following{/other_user}",
"gists_url": "https://api.github.com/users/sasaadi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sasaadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sasaadi/subscriptions",
"organizations_url": "https://api.github.com/users/sasaadi/orgs",
"repos_url": "https://api.github.com/users/sasaadi/repos",
"events_url": "https://api.github.com/users/sasaadi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sasaadi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"What is your exact command to _extract the last hidden layer (layer -1)_?\r\nAnd what is your exact command to get _the outputs[0] from pytorch_transformers.BertModel()_ ?",
"To extract the last hidden layer (layer -1) from BERT, I run the `extract_features.py` as follows:\r\n`python extract_features.py --input_file=tmp/input.txt --output_file=tmp/output.json --vocab_file=cased_L-12_H-768_A-12/vocab.txt --bert_config_file=cased_L-12_H-768_A-12/bert_config.json --init_checkpoint=cased_L-12_H-768_A-12/bert_model.ckpt --layers=-1 --max_seq_length=128 --batch_size=1`\r\n\r\nwhere the `input_file` contains only one line e.g. 'here is an example .'\r\nThe output gives me the -1 hidden layer of each token separately.\r\n\r\nTo get the embeddings from the `outputs[0]`:\r\n\r\n```\r\nconfig = BertConfig.from_pretrained('bert-base-cased')\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-cased')\r\nmodel = BertModel(config)\r\ninput_ids = torch.tensor(tokenizer.encode(\"here is an example .\")).unsqueeze(0) # Batch size 1\r\noutputs = model(input_ids)\r\nlast_hidden_states = outputs[0]\r\n```\r\nwhere `last_hidden_states` gives me a list of embeddings. I presume one for each token in the sentence in the same order they appear in the sentence.\r\n\r\nThanks\r\n\r\n\r\n",
"Help me, i having same problem, how to extract feature from tuned .bin file, in bert's original doc, only init ckpt checkpoint used ",
"@sasaadi, you should load the pretrained model with `model = BertModel.from_pretrained('bert-base-cased')`. In your example only the config (a dict of hyper-parameters) is loaded from the pretrained model, not the weights. ",
"@thomwolf pytorch_transformers.BertModel.from_pretrained('bert-base-multilingual-cased', state_dict=model_state_dict)\r\nIs this solution when you load from tuned model ?",
"@hungph-dev-ict to load from a fine-tuned checkpoint you reference it directly: `BertModel.from_pretrained('/path/to/finetuned/model')`.",
"The doc for the method referenced by @LysandreJik is [here](https://huggingface.co/pytorch-transformers/main_classes/model.html#pytorch_transformers.PreTrainedModel.from_pretrained)",
"@LysandreJik @thomwolf thank you very much.\r\nNow this library has just added RoBERTa, I want tune it with my corpus, do you have any solution ?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,572 | 1,572 | NONE | null | Based on BERT documentation (https://github.com/google-research/bert#using-bert-to-extract-fixed-feature-vectors-like-elmo) we can extract the contextualized token embeddings of each hidden layer separately. However, when I extract the last hidden layer (layer -1), it does not match the `outputs[0]` from `pytorch_transformers.BertModel()` as described here: https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertmodel
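For reference, a minimal sketch of the comparison on the pytorch-transformers side (per the fix suggested in the comments, the weights must be loaded with `from_pretrained`, not just the config; the model name is illustrative):

```python
import torch
from pytorch_transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# from_pretrained() loads the trained weights; BertModel(config) would only
# build a randomly initialized model from the hyper-parameters.
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()  # disable dropout so the features are deterministic

# Note: encode() does not add [CLS]/[SEP] automatically here, while
# extract_features.py does, so the special tokens must match on both sides.
input_ids = torch.tensor([tokenizer.encode("here is an example .")])
with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # shape: (1, seq_len, 768)
```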
To be clear, I am using the same pre-trained model (e.g. `bert-base-uncased`) and the same input (e.g. 'here is an example .') for both. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/819/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/818 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/818/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/818/comments | https://api.github.com/repos/huggingface/transformers/issues/818/events | https://github.com/huggingface/transformers/issues/818 | 469,605,157 | MDU6SXNzdWU0Njk2MDUxNTc= | 818 | GPT sentence log loss: average or summed loss? | {
"login": "jhlau",
"id": 4261132,
"node_id": "MDQ6VXNlcjQyNjExMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4261132?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jhlau",
"html_url": "https://github.com/jhlau",
"followers_url": "https://api.github.com/users/jhlau/followers",
"following_url": "https://api.github.com/users/jhlau/following{/other_user}",
"gists_url": "https://api.github.com/users/jhlau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jhlau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhlau/subscriptions",
"organizations_url": "https://api.github.com/users/jhlau/orgs",
"repos_url": "https://api.github.com/users/jhlau/repos",
"events_url": "https://api.github.com/users/jhlau/events{/privacy}",
"received_events_url": "https://api.github.com/users/jhlau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, it's the average",
"Thanks for the prompt reply. Much appreciated."
] | 1,563 | 1,563 | 1,563 | NONE | null | >>> config = GPT2Config.from_pretrained('gpt2')
>>> tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
>>> model = GPT2LMHeadModel(config)
>>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
>>> outputs = model(input_ids, labels=input_ids)
>>> loss, logits = outputs[:2]
For the loss value computed for the sentence, is it an average log loss or summed log loss? I had a look at CrossEntropyLoss in torch.nn and it seems to be an average loss, but thought I'd double check.
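A quick way to verify — a sketch that recomputes the loss by hand from the snippet above (the shift mirrors how the LM head aligns logits with next-token labels):

```python
import torch.nn.functional as F

# Shift so that tokens < n predict token n, as the LM head does internally.
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = input_ids[..., 1:].contiguous()

# F.cross_entropy defaults to reduction='mean', so this matches `loss` above:
manual_mean = F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)),
                              shift_labels.view(-1))
# A summed log loss would instead be:
manual_sum = manual_mean * shift_labels.numel()
```

With batch size > 1 the same mean is taken over all predicted tokens in the batch, not per sentence.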
If there are multiple sentences in the input instead (so batch size > 1), what does it return? The average log loss over all tokens in the two sentences? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/818/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/817 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/817/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/817/comments | https://api.github.com/repos/huggingface/transformers/issues/817/events | https://github.com/huggingface/transformers/issues/817 | 469,593,272 | MDU6SXNzdWU0Njk1OTMyNzI= | 817 | from pytorch-pretrained-bert to pytorch-transformers, some problem | {
"login": "SStarLib",
"id": 10860531,
"node_id": "MDQ6VXNlcjEwODYwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/10860531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SStarLib",
"html_url": "https://github.com/SStarLib",
"followers_url": "https://api.github.com/users/SStarLib/followers",
"following_url": "https://api.github.com/users/SStarLib/following{/other_user}",
"gists_url": "https://api.github.com/users/SStarLib/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SStarLib/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SStarLib/subscriptions",
"organizations_url": "https://api.github.com/users/SStarLib/orgs",
"repos_url": "https://api.github.com/users/SStarLib/repos",
"events_url": "https://api.github.com/users/SStarLib/events{/privacy}",
"received_events_url": "https://api.github.com/users/SStarLib/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"now you should use:\r\n```\r\nmodel = BertModel.from_pretrained('bert-base-cased', output_hidden_states=True)\r\noutputs = model(input_ids)\r\nall_hidden_states = outputs[-1]\r\n```\r\nNote that the first element in `all_hidden_states` (`all_hidden_states[0]`) is the output of the embedding layers (hence the fact that there is `num_layers + 1` elements in `all_hidden_states`).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | TypeError: forward() got an unexpected keyword argument 'output_all_encoded_layers'
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/817/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/816 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/816/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/816/comments | https://api.github.com/repos/huggingface/transformers/issues/816/events | https://github.com/huggingface/transformers/pull/816 | 469,588,687 | MDExOlB1bGxSZXF1ZXN0Mjk4NzY5Mzkx | 816 | typos | {
"login": "lpq29743",
"id": 12952648,
"node_id": "MDQ6VXNlcjEyOTUyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/12952648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lpq29743",
"html_url": "https://github.com/lpq29743",
"followers_url": "https://api.github.com/users/lpq29743/followers",
"following_url": "https://api.github.com/users/lpq29743/following{/other_user}",
"gists_url": "https://api.github.com/users/lpq29743/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lpq29743/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lpq29743/subscriptions",
"organizations_url": "https://api.github.com/users/lpq29743/orgs",
"repos_url": "https://api.github.com/users/lpq29743/repos",
"events_url": "https://api.github.com/users/lpq29743/events{/privacy}",
"received_events_url": "https://api.github.com/users/lpq29743/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/816?src=pr&el=h1) Report\n> Merging [#816](https://codecov.io/gh/huggingface/pytorch-transformers/pull/816?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/71d597dad0a28ccc397308146844486e0031d701?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/816?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #816 +/- ##\n======================================\n Coverage 78.9% 78.9% \n======================================\n Files 34 34 \n Lines 6192 6192 \n======================================\n Hits 4886 4886 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/816?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/816?src=pr&el=footer). Last update [71d597d...e5a18b3](https://codecov.io/gh/huggingface/pytorch-transformers/pull/816?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"Thanks!"
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | README.md: "formely known as" -> "formerly known as" | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/816/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/816",
"html_url": "https://github.com/huggingface/transformers/pull/816",
"diff_url": "https://github.com/huggingface/transformers/pull/816.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/816.patch",
"merged_at": 1563458046000
} |
https://api.github.com/repos/huggingface/transformers/issues/815 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/815/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/815/comments | https://api.github.com/repos/huggingface/transformers/issues/815/events | https://github.com/huggingface/transformers/pull/815 | 469,581,781 | MDExOlB1bGxSZXF1ZXN0Mjk4NzY0MzY1 | 815 | Update Readme link for Fine Tune/Usage section | {
"login": "praateekmahajan",
"id": 7589415,
"node_id": "MDQ6VXNlcjc1ODk0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praateekmahajan",
"html_url": "https://github.com/praateekmahajan",
"followers_url": "https://api.github.com/users/praateekmahajan/followers",
"following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}",
"gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions",
"organizations_url": "https://api.github.com/users/praateekmahajan/orgs",
"repos_url": "https://api.github.com/users/praateekmahajan/repos",
"events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}",
"received_events_url": "https://api.github.com/users/praateekmahajan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/815?src=pr&el=h1) Report\n> Merging [#815](https://codecov.io/gh/huggingface/pytorch-transformers/pull/815?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/71d597dad0a28ccc397308146844486e0031d701?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/815?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #815 +/- ##\n======================================\n Coverage 78.9% 78.9% \n======================================\n Files 34 34 \n Lines 6192 6192 \n======================================\n Hits 4886 4886 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/815?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/815?src=pr&el=footer). Last update [71d597d...0d46b17](https://codecov.io/gh/huggingface/pytorch-transformers/pull/815?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n"
] | 1,563 | 1,563 | 1,563 | CONTRIBUTOR | null | Incorrect link for `Quick tour: Fine-tuning/usage scripts` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/815/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/815",
"html_url": "https://github.com/huggingface/transformers/pull/815",
"diff_url": "https://github.com/huggingface/transformers/pull/815.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/815.patch",
"merged_at": 1563467433000
} |
https://api.github.com/repos/huggingface/transformers/issues/814 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/814/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/814/comments | https://api.github.com/repos/huggingface/transformers/issues/814/events | https://github.com/huggingface/transformers/issues/814 | 469,535,376 | MDU6SXNzdWU0Njk1MzUzNzY= | 814 | Is there any plan of developing softmax-weight function for using 12 hidden BERT layer? | {
"login": "izuna385",
"id": 35322641,
"node_id": "MDQ6VXNlcjM1MzIyNjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/35322641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/izuna385",
"html_url": "https://github.com/izuna385",
"followers_url": "https://api.github.com/users/izuna385/followers",
"following_url": "https://api.github.com/users/izuna385/following{/other_user}",
"gists_url": "https://api.github.com/users/izuna385/gists{/gist_id}",
"starred_url": "https://api.github.com/users/izuna385/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izuna385/subscriptions",
"organizations_url": "https://api.github.com/users/izuna385/orgs",
"repos_url": "https://api.github.com/users/izuna385/repos",
"events_url": "https://api.github.com/users/izuna385/events{/privacy}",
"received_events_url": "https://api.github.com/users/izuna385/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yes we might add a module for scalar mixture of layers like the one of AllenNLP, for instance (https://github.com/allenai/allennlp/blob/master/allennlp/modules/scalar_mix.py).",
"I'm looking forward to see that in also pytorch-transformer.\r\nAgain, thanks! I'll keep track on this repository.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,563 | 1,569 | 1,569 | NONE | null | Thanks for developing a very nice and useful library.
My question is about using the 12 (or, in the large model, more) hidden layers.
First, does how the hidden layers are used depend on the downstream task?
(Say, concat, average, only the final layer, only the mean of the top 4 layers, etc...)
To use all layers, I think it's good to use softmax weights: during training the hidden layers' features are fixed, but the weights are learned for the task. So my second question is: is there any plan to develop a softmax-weight function for using the 12 hidden BERT layers?
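For reference, a minimal sketch of such a softmax-weighted mixture, in the spirit of the AllenNLP `ScalarMix` module linked in the comments (names and shapes are illustrative):

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Softmax-weighted sum of the per-layer hidden states."""
    def __init__(self, num_layers):
        super(ScalarMix, self).__init__()
        self.scalars = nn.Parameter(torch.zeros(num_layers))  # learned weights
        self.gamma = nn.Parameter(torch.ones(1))              # overall scale

    def forward(self, layers):
        # layers: list/tuple of tensors, each (batch, seq_len, hidden_size)
        weights = torch.softmax(self.scalars, dim=0)
        mixed = sum(w * h for w, h in zip(weights, layers))
        return self.gamma * mixed
```

With `output_hidden_states=True`, the tuple of hidden states returned by the model could be fed straight into this module.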
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/814/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/813 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/813/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/813/comments | https://api.github.com/repos/huggingface/transformers/issues/813/events | https://github.com/huggingface/transformers/issues/813 | 469,515,555 | MDU6SXNzdWU0Njk1MTU1NTU= | 813 | How to use BertModel ? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been discussed at [#64](https://github.com/huggingface/pytorch-transformers/issues/64#issuecomment-443703063).",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Hi - I tried to implement it, you can have a look at my implementation here: https://github.com/chnsh/BERT-NER-CoNLL, hope that helps",
"@chnsh I don't see any crf layer on your github repo.",
"@RoderickGu Were you able to implement bert-crf?"
] | 1,563 | 1,612 | 1,569 | NONE | null | I want to use BERT-CRF for my NER task. Since this repo only provides softmax as the classifier, I decided to write my own CRF, but I am not sure how to use it. Here is an example; please correct me if I am wrong.
sentence: Here is some text to encode
input: torch.tensor([tokenizer.encode("[CLS]" + "Here is some text to encode" + "[SEP]")]), which is
tensor([[ 101, 3446, 1110, 1199, 3087, 1106, 4035, 13775, 102]])
output shape: [1, 9, 768],
Becasue "encode" is devided into two pieceword.
Then I select output[1, 2, 3, 4, 5, 6] from output[0,1,2,3,4,5,6,7,8]to get the crf_input that is of shape [1, 6, 768].
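A minimal sketch of that selection, assuming `output` is the `[1, 9, 768]` tensor above (in practice the index list would be built from the tokenizer's wordpiece boundaries):

```python
import torch

# Positions of the first subtoken of each word, skipping [CLS]/[SEP]:
# [CLS]=0, Here=1, is=2, some=3, text=4, to=5, en=6, ##code=7, [SEP]=8
first_subtoken_idx = torch.tensor([1, 2, 3, 4, 5, 6])
crf_input = output[:, first_subtoken_idx, :]  # shape: (1, 6, 768)
```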
That is how I think it should work. Any suggestion will be appreciated. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/813/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/812 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/812/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/812/comments | https://api.github.com/repos/huggingface/transformers/issues/812/events | https://github.com/huggingface/transformers/issues/812 | 469,510,942 | MDU6SXNzdWU0Njk1MTA5NDI= | 812 | do I need to add sep and cls token in each sequence ? | {
"login": "g-jing",
"id": 44223191,
"node_id": "MDQ6VXNlcjQ0MjIzMTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/44223191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-jing",
"html_url": "https://github.com/g-jing",
"followers_url": "https://api.github.com/users/g-jing/followers",
"following_url": "https://api.github.com/users/g-jing/following{/other_user}",
"gists_url": "https://api.github.com/users/g-jing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-jing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-jing/subscriptions",
"organizations_url": "https://api.github.com/users/g-jing/orgs",
"repos_url": "https://api.github.com/users/g-jing/repos",
"events_url": "https://api.github.com/users/g-jing/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-jing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"They are not added automatically.",
"@thomwolf Thanks !"
] | 1,563 | 1,563 | 1,563 | NONE | null | It might be a stupid question, but I just notice the authors did not add "[cls]" and "[sep]" token in the example. I think whether those tokens are added automatically inside the module ? Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/812/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/811 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/811/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/811/comments | https://api.github.com/repos/huggingface/transformers/issues/811/events | https://github.com/huggingface/transformers/pull/811 | 469,465,244 | MDExOlB1bGxSZXF1ZXN0Mjk4NjgzOTM4 | 811 | Fix openai-gpt ROCStories example's issues with AdamW optimizer | {
"login": "prrao87",
"id": 35005448,
"node_id": "MDQ6VXNlcjM1MDA1NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/35005448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prrao87",
"html_url": "https://github.com/prrao87",
"followers_url": "https://api.github.com/users/prrao87/followers",
"following_url": "https://api.github.com/users/prrao87/following{/other_user}",
"gists_url": "https://api.github.com/users/prrao87/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prrao87/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prrao87/subscriptions",
"organizations_url": "https://api.github.com/users/prrao87/orgs",
"repos_url": "https://api.github.com/users/prrao87/repos",
"events_url": "https://api.github.com/users/prrao87/events{/privacy}",
"received_events_url": "https://api.github.com/users/prrao87/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"# [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/811?src=pr&el=h1) Report\n> Merging [#811](https://codecov.io/gh/huggingface/pytorch-transformers/pull/811?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/pytorch-transformers/commit/71d597dad0a28ccc397308146844486e0031d701?src=pr&el=desc) will **not change** coverage.\n> The diff coverage is `n/a`.\n\n[](https://codecov.io/gh/huggingface/pytorch-transformers/pull/811?src=pr&el=tree)\n\n```diff\n@@ Coverage Diff @@\n## master #811 +/- ##\n======================================\n Coverage 78.9% 78.9% \n======================================\n Files 34 34 \n Lines 6192 6192 \n======================================\n Hits 4886 4886 \n Misses 1306 1306\n```\n\n\n\n------\n\n[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/811?src=pr&el=continue).\n> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)\n> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`\n> Powered by [Codecov](https://codecov.io/gh/huggingface/pytorch-transformers/pull/811?src=pr&el=footer). Last update [71d597d...51d66f1](https://codecov.io/gh/huggingface/pytorch-transformers/pull/811?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).\n",
"We should also probably add a linearly decreasing schedule (not in the optimizer anymore).\r\nDid you try this version with flat learning rate? Does it have good performances?",
"I didn't run the training loop to completion but I looked at it now and it seems there's an issue with the evaluation routine (it crashes with an error). Will look into this as soon as I have time. ",
"@thomwolf I used the code from #845 and the flat learning rate does the trick - eval accuracy of 87.2% after 3 training epochs. The defaults args in `run_openai_gpt.py` are good, just confirming some of them used for this result below. \r\n\r\n```\r\npython run_openai_gpt.py \\\r\n --model_name openai-gpt \\\r\n --do_train \\\r\n --do_eval \\\r\n --train_dataset \"./ROCStories/cloze_test_val__spring2016 - cloze_test_ALL_val.csv\" \\\r\n --eval_dataset \"./ROCStories/cloze_test_test__spring2016 - cloze_test_ALL_test.csv\" \\\r\n --train_batch_size 8 \\\r\n --eval_batch_size 16 \\\r\n --num_train_epochs 3\r\n```\r\nIt makes sense to go ahead and close this. Thanks! "
] | 1,563 | 1,563 | 1,563 | NONE | null | Fixes the `AdamW` optimizer instance in the `openai-gpt` ROCStories example as per the new API. The default arguments for it are now set as per the [documentation](https://huggingface.co/pytorch-transformers/model_doc/bert.html?highlight=adamw#pytorch_transformers.AdamW). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/811/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/811",
"html_url": "https://github.com/huggingface/transformers/pull/811",
"diff_url": "https://github.com/huggingface/transformers/pull/811.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/811.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/810 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/810/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/810/comments | https://api.github.com/repos/huggingface/transformers/issues/810/events | https://github.com/huggingface/transformers/issues/810 | 469,436,460 | MDU6SXNzdWU0Njk0MzY0NjA= | 810 | SEG_ID constants for XLNet misleading/off | {
"login": "sleepinyourhat",
"id": 1284441,
"node_id": "MDQ6VXNlcjEyODQ0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1284441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sleepinyourhat",
"html_url": "https://github.com/sleepinyourhat",
"followers_url": "https://api.github.com/users/sleepinyourhat/followers",
"following_url": "https://api.github.com/users/sleepinyourhat/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepinyourhat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sleepinyourhat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepinyourhat/subscriptions",
"organizations_url": "https://api.github.com/users/sleepinyourhat/orgs",
"repos_url": "https://api.github.com/users/sleepinyourhat/repos",
"events_url": "https://api.github.com/users/sleepinyourhat/events{/privacy}",
"received_events_url": "https://api.github.com/users/sleepinyourhat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes I will remove them. They are used in the `run_glue.py` example (like in the original TF repo) but they don't have any reason to be in the library it-self.\r\n\r\nIn XLNet segment ids (what we call `token_type_ids in the repo) don't correspond to embeddings, they are just numbers and the only important thing is that they have to be different for tokens which belong to different segments, hence the flexibility in the exact values (XLNet is using relative segment difference with just two segment embeddings: 0 if the segment id of two tokens are the same, 1 if not). See [here](https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/modeling_xlnet.py#L926-L928).\r\n\r\nIt's in the XLNet paper but I should probably add a word or two in the docstring as well.",
"Ah, got it. Thanks! I had assumed that these were part of the token vocabulary. I didn't realize that there were more than two segment types.",
"By the way, the default [here](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_glue.py#L259) seems off—the [CLS] token for BERT is marked as part of segment 1/B, while the paper shows it as part of 0/A with the rest of the first input.",
"But that's another issue, and I'm not fully certain. Closing."
] | 1,563 | 1,563 | 1,563 | NONE | null | https://github.com/huggingface/pytorch-transformers/blob/master/pytorch_transformers/tokenization_xlnet.py#L47 shows:
```
# Segments (not really needed)
SEG_ID_A = 0
SEG_ID_B = 1
SEG_ID_CLS = 2
SEG_ID_SEP = 3
SEG_ID_PAD = 4
```
These don't seem to be used anywhere in the repo, but I tried using them as a shortcut myself, and I'm not sure they're right. In contrast, for xlnet-base-cased, I get:
```
self._sep_id = tokenizer.convert_tokens_to_ids("<sep>")
self._cls_id = tokenizer.convert_tokens_to_ids("<cls>")
self._pad_id = tokenizer.convert_tokens_to_ids("<pad>")
print(self._cls_id, self._sep_id, self._pad_id)
```
```
3 4 5
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/810/timeline | completed | null | null |