url stringlengths 62-66 | repository_url stringclasses 1 value | labels_url stringlengths 76-80 | comments_url stringlengths 71-75 | events_url stringlengths 69-73 | html_url stringlengths 50-56 | id int64 377M-2.15B | node_id stringlengths 18-32 | number int64 1-29.2k | title stringlengths 1-487 | user dict | labels list | state stringclasses 2 values | locked bool 2 classes | assignee dict | assignees list | comments sequence | created_at int64 1.54k-1.71k | updated_at int64 1.54k-1.71k | closed_at int64 1.54k-1.71k ⌀ | author_association stringclasses 4 values | active_lock_reason stringclasses 2 values | body stringlengths 0-234k ⌀ | reactions dict | timeline_url stringlengths 71-75 | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/309 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/309/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/309/comments | https://api.github.com/repos/huggingface/transformers/issues/309/events | https://github.com/huggingface/transformers/issues/309 | 412,807,997 | MDU6SXNzdWU0MTI4MDc5OTc= | 309 | Tests error: Issue with python3 compatibility, on zope interface implementation | {
"login": "AprilSongRits",
"id": 20080322,
"node_id": "MDQ6VXNlcjIwMDgwMzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/20080322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AprilSongRits",
"html_url": "https://github.com/AprilSongRits",
"followers_url": "https://api.github.com/users/AprilSongRits/followers",
"following_url": "https://api.github.com/users/AprilSongRits/following{/other_user}",
"gists_url": "https://api.github.com/users/AprilSongRits/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AprilSongRits/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AprilSongRits/subscriptions",
"organizations_url": "https://api.github.com/users/AprilSongRits/orgs",
"repos_url": "https://api.github.com/users/AprilSongRits/repos",
"events_url": "https://api.github.com/users/AprilSongRits/events{/privacy}",
"received_events_url": "https://api.github.com/users/AprilSongRits/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
}
] | closed | false | null | [] | [
"any solution here?",
"This looks like an incompatibility between apex and zope.\r\nHave you tried without installing apex?",
"> This looks like an incompatibility between apex and zope.\r\n> Have you tried without installing apex?\r\n\r\nI uninstalled apex, it works now!\r\nThank you so much!!!! "
] | 1,550 | 1,551 | 1,551 | NONE | null | Hi, I came across the following error after running **python -m pytest tests/modeling_test.py**:
________________________________________________________________________________ ERROR collecting tests/modeling_test.py __________________________________________________________________________________
modeling_test.py:25: in <module>
from pytorch_pretrained_bert import (BertConfig, BertModel, BertForMaskedLM,
/usr/local/lib/python3.6/site-packages/pytorch_pretrained_bert/__init__.py:7: in <module>
from .modeling import (BertConfig, BertModel, BertForPreTraining,
/usr/local/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py:218: in <module>
from apex.normalization.fused_layer_norm import FusedLayerNorm as BertLayerNorm
/usr/local/lib/python3.6/site-packages/apex/__init__.py:18: in <module>
from apex.interfaces import (ApexImplementation,
/usr/local/lib/python3.6/site-packages/apex/interfaces.py:10: in <module>
class ApexImplementation(object):
/usr/local/lib/python3.6/site-packages/apex/interfaces.py:14: in ApexImplementation
implements(IApex)
/usr/local/lib/python3.6/site-packages/zope/interface/declarations.py:483: in implements
raise TypeError(_ADVICE_ERROR % 'implementer')
E TypeError: Class advice impossible in Python3. Use the @implementer class decorator instead.
**My configurations are as follows:
python version 3.6.4
CUDA Version 8.0.61
torch==1.0.1.post2
apex==0.9.10.dev0
zope.interface==4.6.0**
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/309/timeline | completed | null | null |
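The `TypeError` in this thread comes from `zope.interface` having dropped the Python-2-only `implements()` class advice, which apex 0.9.x still used. Besides uninstalling apex (the resolution above), the failing code itself can be ported with the `@implementer` class decorator that the error message suggests. A minimal sketch, with a stand-in `IApex` marker interface mirroring `apex.interfaces`:

```python
from zope.interface import Interface, implementer

class IApex(Interface):
    """Marker interface, standing in for apex.interfaces.IApex."""

# Python 3 replacement for the class-body call `implements(IApex)`
# that raises the TypeError quoted above.
@implementer(IApex)
class ApexImplementation(object):
    pass
```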
https://api.github.com/repos/huggingface/transformers/issues/308 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/308/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/308/comments | https://api.github.com/repos/huggingface/transformers/issues/308/events | https://github.com/huggingface/transformers/issues/308 | 412,742,435 | MDU6SXNzdWU0MTI3NDI0MzU= | 308 | It seems the eval speed of transformer-xl is not faster than bert-base-uncased. | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,550 | 1,551 | 1,551 | CONTRIBUTOR | null | I ran `run_classifier.py` with `bert-base-uncased` and `max_seq_length=128` on the MRPC task.
The log:
```
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
02/21/2019 12:11:44 - INFO - __main__ - device: cpu n_gpu: 1, distributed training: False, 16-bits training: False
02/21/2019 12:11:45 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt from cache at /home/tong.guo/.pytorch_pretrained_bert/26bc1ad6c0ac742e9b52263248f6d0f00068293b33709fae12320c0e35ccfbbb.542ce4285a40d23a559526243235df47c5f75c197f04f37d1a0c124c32c9a084
02/21/2019 12:11:45 - INFO - pytorch_pretrained_bert.modeling - loading archive file ../model_file/bert-base-uncased.tar.gz
02/21/2019 12:11:45 - INFO - pytorch_pretrained_bert.modeling - extracting archive file ../model_file/bert-base-uncased.tar.gz to temp dir /tmp/tmpaho9_3dk
02/21/2019 12:11:50 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - loading archive file ../model_file/bert-base-uncased.tar.gz
02/21/2019 12:11:55 - INFO - pytorch_pretrained_bert.modeling - extracting archive file ../model_file/bert-base-uncased.tar.gz to temp dir /tmp/tmpfehb71wu
02/21/2019 12:11:59 - INFO - pytorch_pretrained_bert.modeling - Model config {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
02/21/2019 12:12:03 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForSequenceClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
02/21/2019 12:12:03 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
02/21/2019 12:12:03 - INFO - __main__ - *** Example ***
02/21/2019 12:12:03 - INFO - __main__ - guid: dev-1
02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] ' s chief operating officer , [UNK] [UNK] , and [UNK] [UNK] , the chief financial officer , will report directly to [UNK] [UNK] . [SEP] [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] and [UNK] [UNK] [UNK] [UNK] [UNK] [UNK] will report to [UNK] . [SEP]
02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 1005 1055 2708 4082 2961 1010 100 100 1010 1998 100 100 1010 1996 2708 3361 2961 1010 2097 3189 3495 2000 100 100 1012 102 100 100 100 100 100 100 1998 100 100 100 100 100 100 2097 3189 2000 100 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - label: 1 (id = 1)
02/21/2019 12:12:03 - INFO - __main__ - *** Example ***
02/21/2019 12:12:03 - INFO - __main__ - guid: dev-2
02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] world ' s two largest auto ##makers said their [UNK] . [UNK] . sales declined more than predicted last month as a late summer sales frenzy caused more of an industry backlash than expected . [SEP] [UNK] sales at both [UNK] and [UNK] . 2 [UNK] [UNK] [UNK] . declined more than predicted as a late summer sales frenzy prompted a larger - than - expected industry backlash . [SEP]
02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 2088 1005 1055 2048 2922 8285 12088 2056 2037 100 1012 100 1012 4341 6430 2062 2084 10173 2197 3204 2004 1037 2397 2621 4341 21517 3303 2062 1997 2019 3068 25748 2084 3517 1012 102 100 4341 2012 2119 100 1998 100 1012 1016 100 100 100 1012 6430 2062 2084 10173 2004 1037 2397 2621 4341 21517 9469 1037 3469 1011 2084 1011 3517 3068 25748 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - label: 1 (id = 1)
02/21/2019 12:12:03 - INFO - __main__ - *** Example ***
02/21/2019 12:12:03 - INFO - __main__ - guid: dev-3
02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] to the federal [UNK] for [UNK] [UNK] and [UNK] ( news - web sites ) , there were 19 reported cases of me ##as ##les in the [UNK] [UNK] in 2002 . [SEP] [UNK] [UNK] for [UNK] [UNK] and [UNK] said there were 19 reported cases of me ##as ##les in the [UNK] [UNK] in 2002 . [SEP]
02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 2000 1996 2976 100 2005 100 100 1998 100 1006 2739 1011 4773 4573 1007 1010 2045 2020 2539 2988 3572 1997 2033 3022 4244 1999 1996 100 100 1999 2526 1012 102 100 100 2005 100 100 1998 100 2056 2045 2020 2539 2988 3572 1997 2033 3022 4244 1999 1996 100 100 1999 2526 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - label: 1 (id = 1)
02/21/2019 12:12:03 - INFO - __main__ - *** Example ***
02/21/2019 12:12:03 - INFO - __main__ - guid: dev-4
02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] tropical storm rapidly developed in the [UNK] of [UNK] [UNK] and was expected to hit somewhere along the [UNK] or [UNK] coasts by [UNK] night . [SEP] [UNK] tropical storm rapidly developed in the [UNK] of [UNK] on [UNK] and could have hurricane - force winds when it hits land somewhere along the [UNK] coast [UNK] night . [SEP]
02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 5133 4040 5901 2764 1999 1996 100 1997 100 100 1998 2001 3517 2000 2718 4873 2247 1996 100 2030 100 20266 2011 100 2305 1012 102 100 5133 4040 5901 2764 1999 1996 100 1997 100 2006 100 1998 2071 2031 7064 1011 2486 7266 2043 2009 4978 2455 4873 2247 1996 100 3023 100 2305 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - label: 0 (id = 0)
02/21/2019 12:12:03 - INFO - __main__ - *** Example ***
02/21/2019 12:12:03 - INFO - __main__ - guid: dev-5
02/21/2019 12:12:03 - INFO - __main__ - tokens: [CLS] [UNK] company didn ' t detail the costs of the replacement and repairs . [SEP] [UNK] company officials expect the costs of the replacement work to run into the millions of dollars . [SEP]
02/21/2019 12:12:03 - INFO - __main__ - input_ids: 101 100 2194 2134 1005 1056 6987 1996 5366 1997 1996 6110 1998 10315 1012 102 100 2194 4584 5987 1996 5366 1997 1996 6110 2147 2000 2448 2046 1996 8817 1997 6363 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
02/21/2019 12:12:03 - INFO - __main__ - label: 0 (id = 0)
02/21/2019 12:12:04 - INFO - __main__ - ***** Running evaluation *****
02/21/2019 12:12:04 - INFO - __main__ - Num examples = 1725
02/21/2019 12:12:04 - INFO - __main__ - Batch size = 8
Evaluating: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 216/216 [06:06<00:00, 1.70s/it]
02/21/2019 12:18:11 - INFO - __main__ - ***** Eval results *****
02/21/2019 12:18:11 - INFO - __main__ - eval_accuracy = 0.33507246376811595
02/21/2019 12:18:11 - INFO - __main__ - eval_loss = 1.002936492777533
02/21/2019 12:18:11 - INFO - __main__ - global_step = 0
02/21/2019 12:18:11 - INFO - __main__ - loss = None
```
The speed is about 1.7 s/batch
------------------
I ran `run_transfo_xl.py` on the `wikitext-103` task.
The log:
```
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
02/20/2019 19:49:30 - INFO - __main__ - device: cuda
02/20/2019 19:49:30 - INFO - pytorch_pretrained_bert.tokenization_transfo_xl - loading vocabulary file ../model_file/transfo-xl-wt103-vocab.bin
02/20/2019 19:49:30 - INFO - pytorch_pretrained_bert.tokenization_transfo_xl - loading corpus file ../model_file/transfo-xl-wt103-corpus.bin
02/20/2019 19:49:36 - INFO - pytorch_pretrained_bert.modeling_transfo_xl - loading weights file ../model_file/transfo-xl-wt103-pytorch_model.bin
02/20/2019 19:49:36 - INFO - pytorch_pretrained_bert.modeling_transfo_xl - loading configuration file ../model_file/transfo-xl-wt103-config.json
02/20/2019 19:49:36 - INFO - pytorch_pretrained_bert.modeling_transfo_xl - Model config {
"adaptive": true,
"attn_type": 0,
"clamp_len": 1000,
"cutoffs": [
20000,
40000,
200000
],
"d_embed": 1024,
"d_head": 64,
"d_inner": 4096,
"d_model": 1024,
"div_val": 4,
"dropatt": 0.0,
"dropout": 0.1,
"ext_len": 0,
"init": "normal",
"init_range": 0.01,
"init_std": 0.02,
"mem_len": 1600,
"n_head": 16,
"n_layer": 18,
"n_token": 267735,
"pre_lnorm": false,
"proj_init_std": 0.01,
"same_length": true,
"sample_softmax": -1,
"tgt_len": 128,
"tie_projs": [
false,
true,
true,
true
],
"tie_weight": true,
"untie_r": true
}
02/20/2019 19:49:51 - INFO - __main__ - Evaluating with bsz 10 tgt_len 128 ext_len 0 mem_len 1600 clamp_len 1000
02/20/2019 19:57:35 - INFO - __main__ - Time : 464.00s, 2416.66ms/segment
02/20/2019 19:57:35 - INFO - __main__ - ====================================================================================================
02/20/2019 19:57:35 - INFO - __main__ - | test loss 2.90 | test ppl 18.213
02/20/2019 19:57:35 - INFO - __main__ - ====================================================================================================
```
The speed is about 2.4 s/batch
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/308/timeline | completed | null | null |
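One caveat when comparing the two logs above: the BERT run reports `device: cpu` while the Transformer-XL run reports `device: cuda`, and the batch shapes differ, so the raw seconds-per-batch figures are not directly comparable. A rough throughput estimate from the quoted numbers, assuming tokens per step is simply batch size times target length:

```python
# Naive tokens-per-second from the two logs (shapes taken from the run args).
bert_tokens_per_step = 8 * 128    # eval batch size 8, max_seq_length 128
txl_tokens_per_step = 10 * 128    # bsz 10, tgt_len 128 (mem_len 1600 is cached, not recomputed)

print(f"BERT:           {bert_tokens_per_step / 1.7:.0f} tokens/s")   # ~602
print(f"Transformer-XL: {txl_tokens_per_step / 2.4:.0f} tokens/s")   # ~533
```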
https://api.github.com/repos/huggingface/transformers/issues/307 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/307/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/307/comments | https://api.github.com/repos/huggingface/transformers/issues/307/events | https://github.com/huggingface/transformers/pull/307 | 412,731,345 | MDExOlB1bGxSZXF1ZXN0MjU0ODczOTY0 | 307 | Update README.md | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"π "
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/307/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/307",
"html_url": "https://github.com/huggingface/transformers/pull/307",
"diff_url": "https://github.com/huggingface/transformers/pull/307.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/307.patch",
"merged_at": 1550737519000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/306 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/306/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/306/comments | https://api.github.com/repos/huggingface/transformers/issues/306/events | https://github.com/huggingface/transformers/issues/306 | 412,720,358 | MDU6SXNzdWU0MTI3MjAzNTg= | 306 | Issue happens while using convert_tf_checkpoint_to_pytorch | {
"login": "weiczhu",
"id": 11749368,
"node_id": "MDQ6VXNlcjExNzQ5MzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/11749368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weiczhu",
"html_url": "https://github.com/weiczhu",
"followers_url": "https://api.github.com/users/weiczhu/followers",
"following_url": "https://api.github.com/users/weiczhu/following{/other_user}",
"gists_url": "https://api.github.com/users/weiczhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weiczhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiczhu/subscriptions",
"organizations_url": "https://api.github.com/users/weiczhu/orgs",
"repos_url": "https://api.github.com/users/weiczhu/repos",
"events_url": "https://api.github.com/users/weiczhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/weiczhu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I resolved this issue by adding the global_step to the skipping list. I think global_step is not required for using pretrained model. Please correct me if I am wrong.",
"Is Pytorch requires a TF check point converted? am finding hard to load the checkpoint I generated.BTW is it safe to convert TF checkpoint ?",
"> I resolved this issue by adding the global_step to the skipping list. I think global_step is not required for using pretrained model. Please correct me if I am wrong.\r\n\r\ncan you explain me what is skipping list?",
"In the file `modeling.py` add it to the list at:\r\n`if any(n in [\"adam_v\", \"adam_m\"] for n in name):`",
"Is it possible to load Tensorflow checkpoint using pytorch and do fine tunning? \r\nI can load pytorch_model.bin and finding hard to load my TF checkpoint.Documentation says it can load a archive with bert_config.json and model.chkpt but I have bert_model_ckpt.data-0000-of-00001 in my TF checkpoint folder so am confused. Is there specific example how to do this?\r\n\r\n\r\n\r\n\r\n",
"There is a conversion script to convert a tf checkpoint to pytorch: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py",
"> In the file `modeling.py` add it to the list at:\r\n> `if any(n in [\"adam_v\", \"adam_m\"] for n in name):`\r\n\r\nadded global_step in skipping list but still getting same issue.\r\n\r\n",
"@naga-dsalgo Is it fixed? I too added \"global_step\" to the list. But still get the error",
"Yes it is fixed for me ... I edited installed version not the downloaded\ngit version ..\n\nOn Tue, Apr 2, 2019 at 4:37 AM Shivam Akhauri <[email protected]>\nwrote:\n\n> @naga-dsalgo <https://github.com/naga-dsalgo> Is it fixed? I too added\n> \"global_step\" to the list. But still get the error\n>\n> β\n> You are receiving this because you were mentioned.\n>\n>\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/306#issuecomment-478899861>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/AttINdN0Pj9IU0kcwNg_BtrnZdwF6Qjwks5vcxbZgaJpZM4bGhdh>\n> .\n>\n",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,550 | 1,563 | 1,563 | NONE | null | Hi,
We are using your brilliant project to work on a Japanese BERT model with SentencePiece.
https://github.com/yoheikikuta/bert-japanese
We are trying to use the converter to convert the TF BERT model below to PyTorch.
https://drive.google.com/drive/folders/1Zsm9DD40lrUVu6iAnIuTH2ODIkh-WM-O
But we see error logs:
Traceback (most recent call last):
File "/Users/weicheng.zhu/PycharmProjects/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 66, in <module>
args.pytorch_dump_path)
File "/Users/weicheng.zhu/PycharmProjects/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 37, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, tf_checkpoint_path)
File "/Users/weicheng.zhu/PycharmProjects/pytorch-pretrained-BERT-master/pytorch_pretrained_bert/modeling.py", line 95, in load_tf_weights_in_bert
pointer = getattr(pointer, l[0])
File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 535, in __getattr__
type(self).__name__, name))
AttributeError: 'BertForPreTraining' object has no attribute 'global_step'
Could you kindly help with how we can avoid this?
Thank you so much!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/306/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/306/timeline | completed | null | null |
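The workaround settled on in this thread is to extend the variable skip list in `load_tf_weights_in_bert` so that checkpoint bookkeeping tensors such as `global_step` are ignored; they are only needed to resume TF training, not for inference or fine-tuning. A sketch of the patched condition inside `pytorch_pretrained_bert/modeling.py` (the surrounding loop over `name` is assumed from the file):

```python
# Skip optimizer state and training bookkeeping variables when mapping
# TF checkpoint tensors onto the PyTorch BertForPreTraining module.
if any(n in ["adam_v", "adam_m", "global_step"] for n in name):
    print("Skipping {}".format("/".join(name)))
    continue
```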
https://api.github.com/repos/huggingface/transformers/issues/305 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/305/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/305/comments | https://api.github.com/repos/huggingface/transformers/issues/305/events | https://github.com/huggingface/transformers/pull/305 | 412,578,975 | MDExOlB1bGxSZXF1ZXN0MjU0NzU1NTQw | 305 | Update run_openai_gpt.py | {
"login": "bkj",
"id": 6086781,
"node_id": "MDQ6VXNlcjYwODY3ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6086781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkj",
"html_url": "https://github.com/bkj",
"followers_url": "https://api.github.com/users/bkj/followers",
"following_url": "https://api.github.com/users/bkj/following{/other_user}",
"gists_url": "https://api.github.com/users/bkj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bkj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bkj/subscriptions",
"organizations_url": "https://api.github.com/users/bkj/orgs",
"repos_url": "https://api.github.com/users/bkj/repos",
"events_url": "https://api.github.com/users/bkj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bkj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"π "
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | Adding invocation to the top of `run_openai_gpt.py` so that's it's easy to find. Previously, the header said that running the script w/ default values works, but actually you need to set some paths. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/305/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/305",
"html_url": "https://github.com/huggingface/transformers/pull/305",
"diff_url": "https://github.com/huggingface/transformers/pull/305.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/305.patch",
"merged_at": 1550694247000
} |
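For reference, the invocation this PR documents at the top of `run_openai_gpt.py` looks roughly like the following; the dataset paths are placeholders, and the ROCStories CSVs have to be downloaded separately:

```bash
python run_openai_gpt.py \
  --model_name openai-gpt \
  --do_train \
  --do_eval \
  --train_dataset "$ROC_STORIES_DIR/cloze_test_val__spring2016 - cloze_test_ALL_val.csv" \
  --eval_dataset "$ROC_STORIES_DIR/cloze_test_test__spring2016 - cloze_test_ALL_test.csv" \
  --output_dir ../log \
  --train_batch_size 16
```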
https://api.github.com/repos/huggingface/transformers/issues/304 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/304/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/304/comments | https://api.github.com/repos/huggingface/transformers/issues/304/events | https://github.com/huggingface/transformers/issues/304 | 412,565,139 | MDU6SXNzdWU0MTI1NjUxMzk= | 304 | Can I do a code reference in implementing my code? | {
"login": "graykode",
"id": 10525011,
"node_id": "MDQ6VXNlcjEwNTI1MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10525011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/graykode",
"html_url": "https://github.com/graykode",
"followers_url": "https://api.github.com/users/graykode/followers",
"following_url": "https://api.github.com/users/graykode/following{/other_user}",
"gists_url": "https://api.github.com/users/graykode/gists{/gist_id}",
"starred_url": "https://api.github.com/users/graykode/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/graykode/subscriptions",
"organizations_url": "https://api.github.com/users/graykode/orgs",
"repos_url": "https://api.github.com/users/graykode/repos",
"events_url": "https://api.github.com/users/graykode/events{/privacy}",
"received_events_url": "https://api.github.com/users/graykode/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @graykode,\r\nWhat do you mean by \"code reference\"?",
"@thomwolf \r\nHello thomwolf!\r\nIt mean that I apply your code about `GPT-2 model and transferring tensorflow checkpoint to pytorch` in my project!\r\n I show the origin of the information in my project code comment when I refer to your code.\r\nThanks",
"Oh yes, no problem.\r\nJust reference the origin of the work and the licences (inherited from the relevant authors and code I started from)",
"@thomwolf Sure, What license i follow? I already known original [openAi/gpt-2](https://github.com/openai/gpt-2) is MIT license, but pytorch-pretrained-BERT is Apache 2.0!\r\nI want to just use gpt-2 model and model transferring code!!"
] | 1,550 | 1,550 | 1,550 | NONE | null | @thomwolf
I am trying a simple implementation of GPT-2 in PyTorch.
I am having trouble transferring the TensorFlow checkpoint to PyTorch :(
https://github.com/graykode/gpt-2-Pytorch
Could I reference your code in my implementation? I'll cite the source in my code!!
Thanks for awesome sharing! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/304/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/303 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/303/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/303/comments | https://api.github.com/repos/huggingface/transformers/issues/303/events | https://github.com/huggingface/transformers/issues/303 | 412,468,953 | MDU6SXNzdWU0MTI0Njg5NTM= | 303 | Example Code in README fails. | {
"login": "blester125",
"id": 10950530,
"node_id": "MDQ6VXNlcjEwOTUwNTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/10950530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blester125",
"html_url": "https://github.com/blester125",
"followers_url": "https://api.github.com/users/blester125/followers",
"following_url": "https://api.github.com/users/blester125/following{/other_user}",
"gists_url": "https://api.github.com/users/blester125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blester125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blester125/subscriptions",
"organizations_url": "https://api.github.com/users/blester125/orgs",
"repos_url": "https://api.github.com/users/blester125/repos",
"events_url": "https://api.github.com/users/blester125/events{/privacy}",
"received_events_url": "https://api.github.com/users/blester125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This could be related to #266 - are you using the latest version of `pytorch-pretrained-BERT`?",
"No, I was on 0.4, I upgraded to 0.6.1 and it worked."
] | 1,550 | 1,550 | 1,550 | NONE | null | There is an assertion error in the example code in the README.
The text that is input to the model
`"[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"`
is expected to be tokenized and masked like so
`['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']`.
However, the `[CLS]` and `[SEP]` tokens are split up, and in the case of `[CLS]` it is broken into word pieces. The actual tokenized result looks like this
`['[', 'cl', '##s', ']', 'who', 'was', 'jim', 'henson', '[MASK]', '[', 'sep', ']', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[', 'sep', ']']` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/303/timeline | completed | null | null |
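As the comments point out, the special tokens survive tokenization once `pytorch-pretrained-BERT` is upgraded: version 0.4 split them apart, while 0.6.1 keeps `[CLS]`, `[SEP]` and `[MASK]` in the tokenizer's `never_split` list. A quick check on 0.6.1, assuming the `bert-base-uncased` vocabulary can be downloaded or is cached:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokens = tokenizer.tokenize(text)

# The markers come through intact instead of being split into word pieces.
assert tokens[0] == "[CLS]" and tokens[6] == "[SEP]" and tokens[-1] == "[SEP]"
```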
https://api.github.com/repos/huggingface/transformers/issues/302 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/302/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/302/comments | https://api.github.com/repos/huggingface/transformers/issues/302/events | https://github.com/huggingface/transformers/pull/302 | 412,418,364 | MDExOlB1bGxSZXF1ZXN0MjU0NjMxMTQw | 302 | typo | {
"login": "yongbowin",
"id": 20198500,
"node_id": "MDQ6VXNlcjIwMTk4NTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/20198500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongbowin",
"html_url": "https://github.com/yongbowin",
"followers_url": "https://api.github.com/users/yongbowin/followers",
"following_url": "https://api.github.com/users/yongbowin/following{/other_user}",
"gists_url": "https://api.github.com/users/yongbowin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongbowin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongbowin/subscriptions",
"organizations_url": "https://api.github.com/users/yongbowin/orgs",
"repos_url": "https://api.github.com/users/yongbowin/repos",
"events_url": "https://api.github.com/users/yongbowin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongbowin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"π "
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | typo in annotation | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/302/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/302",
"html_url": "https://github.com/huggingface/transformers/pull/302",
"diff_url": "https://github.com/huggingface/transformers/pull/302.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/302.patch",
"merged_at": 1550668443000
} |
https://api.github.com/repos/huggingface/transformers/issues/301 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/301/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/301/comments | https://api.github.com/repos/huggingface/transformers/issues/301/events | https://github.com/huggingface/transformers/issues/301 | 412,222,150 | MDU6SXNzdWU0MTIyMjIxNTA= | 301 | `train_dataset` and `eval_dataset` in run_openai_gpt.py | {
"login": "bkj",
"id": 6086781,
"node_id": "MDQ6VXNlcjYwODY3ODE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6086781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bkj",
"html_url": "https://github.com/bkj",
"followers_url": "https://api.github.com/users/bkj/followers",
"following_url": "https://api.github.com/users/bkj/following{/other_user}",
"gists_url": "https://api.github.com/users/bkj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bkj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bkj/subscriptions",
"organizations_url": "https://api.github.com/users/bkj/orgs",
"repos_url": "https://api.github.com/users/bkj/repos",
"events_url": "https://api.github.com/users/bkj/events{/privacy}",
"received_events_url": "https://api.github.com/users/bkj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Ben,\r\nPlease read the [relevant example section in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-openai-gpt-on-the-rocstories-dataset).",
"@thomwolf I see that the data is downloaded and cached in case of not providing the `train_dataset` and `eval_dataset` parameters: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_openai_gpt.py#L152-L153, but it fails here: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_openai_gpt.py#L162. So either we make the parameters required or do some additional steps which requires untar of the downloaded data and pointing towards that data. Let me know what you think and I can send a PR."
] | 1,550 | 1,560 | 1,550 | CONTRIBUTOR | null | Running `examples/run_openai_gpt.py` w/ the default arguments throws an error:
```
$ python run_openai_gpt.py --output_dir tmp --do_eval
Traceback (most recent call last):
File "run_openai_gpt.py", line 259, in <module>
main()
File "run_openai_gpt.py", line 153, in main
train_dataset = load_rocstories_dataset(args.train_dataset)
File "run_openai_gpt.py", line 49, in load_rocstories_dataset
with open(dataset_path, encoding='utf_8') as f:
FileNotFoundError: [Errno 2] No such file or directory: ''
```
Looking at the code, it looks like `train_dataset` and `eval_dataset` need to be explicitly set. Any suggestions on what they should be set to?
Thanks!
cc @thomwolf | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/301/timeline | completed | null | null |
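Because `--train_dataset` and `--eval_dataset` default to empty strings, the script only fails deep inside `load_rocstories_dataset` with the opaque `FileNotFoundError` above. A hypothetical fail-fast guard (sketch only, with `parser` and `args` as defined in `run_openai_gpt.py`) would surface the missing paths immediately:

```python
# Hypothetical early validation, replacing the late FileNotFoundError
# with an actionable argparse error message.
if args.do_train and not args.train_dataset:
    parser.error("--train_dataset is required when --do_train is set")
if args.do_eval and not args.eval_dataset:
    parser.error("--eval_dataset is required when --do_eval is set")
```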
https://api.github.com/repos/huggingface/transformers/issues/300 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/300/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/300/comments | https://api.github.com/repos/huggingface/transformers/issues/300/events | https://github.com/huggingface/transformers/issues/300 | 412,220,468 | MDU6SXNzdWU0MTIyMjA0Njg= | 300 | bert.pooler.dense initialization | {
"login": "SinghJasdeep",
"id": 33911313,
"node_id": "MDQ6VXNlcjMzOTExMzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/33911313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SinghJasdeep",
"html_url": "https://github.com/SinghJasdeep",
"followers_url": "https://api.github.com/users/SinghJasdeep/followers",
"following_url": "https://api.github.com/users/SinghJasdeep/following{/other_user}",
"gists_url": "https://api.github.com/users/SinghJasdeep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SinghJasdeep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SinghJasdeep/subscriptions",
"organizations_url": "https://api.github.com/users/SinghJasdeep/orgs",
"repos_url": "https://api.github.com/users/SinghJasdeep/repos",
"events_url": "https://api.github.com/users/SinghJasdeep/events{/privacy}",
"received_events_url": "https://api.github.com/users/SinghJasdeep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Jasdeep,\r\nNo, they are initialized from Google's pretrained model (they are trained for next sentence prediction task during pretraining)."
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | Hey! Sorry if this is redundant.
I saw 3 other issues asking similar questions but couldn't find these exact layers mentioned.
Are bert.pooler.dense.weight & bert.pooler.dense.bias randomly initialized?
Thank you so much!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/300/timeline | completed | null | null |
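As the answer says, the pooler weights ship with Google's checkpoint: they were trained on the next-sentence-prediction objective during pretraining, and `bert.pooler.dense.*` never shows up in the "not initialized from pretrained model" log line. A small sanity check:

```python
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Loaded from the pretrained checkpoint, not randomly initialized.
print(model.pooler.dense.weight.shape)  # torch.Size([768, 768])
print(model.pooler.dense.bias.shape)    # torch.Size([768])
```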
https://api.github.com/repos/huggingface/transformers/issues/299 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/299/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/299/comments | https://api.github.com/repos/huggingface/transformers/issues/299/events | https://github.com/huggingface/transformers/issues/299 | 412,197,859 | MDU6SXNzdWU0MTIxOTc4NTk= | 299 | Tests failure | {
"login": "g-karthik",
"id": 3851993,
"node_id": "MDQ6VXNlcjM4NTE5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3851993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/g-karthik",
"html_url": "https://github.com/g-karthik",
"followers_url": "https://api.github.com/users/g-karthik/followers",
"following_url": "https://api.github.com/users/g-karthik/following{/other_user}",
"gists_url": "https://api.github.com/users/g-karthik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/g-karthik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/g-karthik/subscriptions",
"organizations_url": "https://api.github.com/users/g-karthik/orgs",
"repos_url": "https://api.github.com/users/g-karthik/repos",
"events_url": "https://api.github.com/users/g-karthik/events{/privacy}",
"received_events_url": "https://api.github.com/users/g-karthik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Update:\r\n\r\nI manually uninstalled PyTorch (`torch | 1.0.1.post2 | 1.0.1.post2` in the list above) from my venv and installed PyTorch 0.4.0 (I couldn't find the whl file for 0.4.1 for Mac OS, which is my platform) by running\r\n\r\n`pip install https://download.pytorch.org/whl/torch-0.4.0-cp36-cp36m-macosx_10_7_x86_64.whl`\r\n\r\nI then ran the tests and I saw that 25 tests passed.\r\n\r\nSo I have just a couple questions at this point:\r\n1. Given that the requirements.txt file states that the expected torch version is >= 0.4.1, and the above observation that 25 tests passed with torch 0.4.0, would all the code in this repo work with 0.4.0?\r\n\r\n2. It is clear that the error described above occurs when using the latest version of torch, specifically torch 1.0.1.post2, but not with 0.4.0. Could you please make the necessary changes to your repo to support 1.0.1.post2?\r\n\r\nThanks!",
"Update:\r\n\r\nWas able to find the whl for 0.4.1, so I uninstalled 0.4.0 from my venv and installed 0.4.1. Ran the tests again and 25 tests passed. So question 1 above is irrelevant now. Could I have an answer for question 2?\r\n\r\n",
"Hi Karthik,\r\nOn my fresh install with pytorch 1.0.1.post2 (python 3.7) all 25 tests pass without error (also on the continuous integration by the way).\r\nMaybe try to create a clean environment? You don't have to install pytorch prior to pytorch-pretrained-bert, it's a dependency.",
"Hi Thomas,\r\nI did start off with a clean virtual environment and I didn't install PyTorch prior to pytorch-pretrained-bert because I saw it's a dependency. The only difference I see between what you've described above and what I did is the version of Python: you used 3.7 while I used 3.6. Maybe that has something to do with this? Could you try with Python 3.6?",
"Tested on a clean python 3.6 install and all the tests pass.\r\nHonestly there is not much more I can do at this stage.\r\nClosing for now. Feel free to re-open if you find something."
] | 1,550 | 1,551 | 1,551 | NONE | null | Steps to reproduce:
1. Clone the repo.
2. Set up a plain virtual environment `venv` for the repo with Python 3.6.
3. Run `pip install .` (using the `--editable` flag didn't work and produced an error, so I just removed it)
4. Run `pip install spacy ftfy==4.4.3` and `python -m spacy download en` -- SUCCESSFUL.
5. Run `pip install pytest` -- SUCCESSFUL.
6. Run `python -m pytest -sv tests/` -- FAILURE with error below.
```
tests/modeling_gpt2_test.py::GPT2ModelTest::test_config_to_json_string PASSED
tests/modeling_gpt2_test.py::GPT2ModelTest::test_default dyld: lazy symbol binding failed: Symbol not found: _PySlice_Unpack
Referenced from: /Users/XYZ/PycharmProjects/pytorch-pretrained-BERT/venv/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib
Expected in: flat namespace
dyld: Symbol not found: _PySlice_Unpack
Referenced from: /Users/XYZ/PycharmProjects/pytorch-pretrained-BERT/venv/lib/python3.6/site-packages/torch/lib/libtorch_python.dylib
Expected in: flat namespace
Abort trap: 6
```
In case it helps, these are the packages in my `venv` after I perform the first 5 steps.
```
atomicwrites | 1.3.0 | 1.3.0
attrs | 18.2.0 | 18.2.0
boto3 | 1.9.98 | 1.9.98
botocore | 1.12.98 | 1.12.98
certifi | 2018.11.29 | 2018.11.29
chardet | 3.0.4 | 3.0.4
cymem | 2.0.2 | 2.0.2
cytoolz | 0.9.0.1 | 0.9.0.1
dill | 0.2.9 | 0.2.9
docutils | 0.14 | 0.14
en-core-web-sm | 2.0.0 | Β
ftfy | 4.4.3 | 5.5.1
html5lib | 1.0.1 | 1.0.1
idna | 2.8 | 2.8
jmespath | 0.9.3 | 0.9.3
more-itertools | 6.0.0 | 6.0.0
msgpack | 0.5.6 | 0.6.1
msgpack-numpy | 0.4.3.2 | 0.4.4.2
murmurhash | 1.0.2 | 1.0.2
numpy | 1.16.1 | 1.16.1
pip | 19.0.2 | 19.0.2
plac | 0.9.6 | 1.0.0
pluggy | 0.8.1 | 0.8.1
preshed | 2.0.1 | 2.0.1
py | 1.7.0 | 1.7.0
pytest | 4.3.0 | 4.3.0
python-dateutil | 2.8.0 | 2.8.0
pytorch-pretrained-bert | 0.6.1 | 0.6.1
regex | 2018.1.10 | 2019.02.20
requests | 2.21.0 | 2.21.0
s3transfer | 0.2.0 | 0.2.0
setuptools | 39.1.0 | 40.8.0
six | 1.12.0 | 1.12.0
spacy | 2.0.18 | 2.0.18
thinc | 6.12.1 | 7.0.1
toolz | 0.9.0 | 0.9.0
torch | 1.0.1.post2 | 1.0.1.post2
tqdm | 4.31.1 | 4.31.1
ujson | 1.35 | 1.35
urllib3 | 1.24.1 | 1.24.1
wcwidth | 0.1.7 | 0.1.7
webencodings | 0.5.1 | 0.5.1
wrapt | 1.10.11 | 1.11.1
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/299/timeline | completed | null | null |
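The sequence that made the tests pass in this thread can be reproduced with something like the following; the macOS wheel URL is the one quoted above, and a fresh Python 3.6 virtualenv is assumed:

```bash
python3.6 -m venv venv && source venv/bin/activate
pip install pytorch-pretrained-bert pytest spacy ftfy==4.4.3
python -m spacy download en

# Workaround from the thread: swap torch 1.0.1.post2 for a 0.4.x wheel
pip uninstall -y torch
pip install https://download.pytorch.org/whl/torch-0.4.0-cp36-cp36m-macosx_10_7_x86_64.whl

python -m pytest -sv tests/
```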
https://api.github.com/repos/huggingface/transformers/issues/298 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/298/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/298/comments | https://api.github.com/repos/huggingface/transformers/issues/298/events | https://github.com/huggingface/transformers/issues/298 | 412,161,440 | MDU6SXNzdWU0MTIxNjE0NDA= | 298 | Transformer-XL: Convert lm1b model to PyTorch | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hm, I guess I was using the wrong `checkpoint` file? When I used `/mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/model.ckpt-1191000` weights are loaded, but another error occurs:\r\n\r\n```bash\r\nLoading TF weight transformer/r_r_bias/Adam_1 with shape [24, 16, 80]\r\nLoading TF weight transformer/r_w_bias with shape [24, 16, 80]\r\nLoading TF weight transformer/r_w_bias/Adam with shape [24, 16, 80]\r\nLoading TF weight transformer/r_w_bias/Adam_1 with shape [24, 16, 80]\r\nTraceback (most recent call last):\r\n File \"convert_transfo_xl_checkpoint_to_pytorch.py\", line 116, in <module>\r\n args.transfo_xl_dataset_file)\r\n File \"convert_transfo_xl_checkpoint_to_pytorch.py\", line 81, in convert_transfo_xl_checkpoint_to_pytorch\r\n model = load_tf_weights_in_transfo_xl(model, config, tf_path)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_transfo_xl.py\", line 169, in load_tf_weights_in_transfo_xl\r\n assert pointer.shape == array.shape\r\nAssertionError: (torch.Size([3, 1024]), (3, 1280))\r\n```",
"Ok, the `TransfoXLConfig` for the `lm1b` model is a bit different. I tried:\r\n\r\n```python\r\nconfig = TransfoXLConfig(vocab_size_or_config_json_file=793472,\r\n cutoffs=[0, 60000, 100000, 640000, 793472],\r\n d_model=1280,\r\n d_embed=1280,\r\n n_head=16,\r\n d_head=80,\r\n d_inner=8192,\r\n div_val=4,\r\n pre_lnorm=False,\r\n n_layer=24,\r\n tgt_len=32,\r\n ext_len=0,\r\n mem_len=128,\r\n clamp_len=-1,\r\n same_length=True,\r\n proj_share_all_but_first=False,\r\n attn_type=0,\r\n sample_softmax=-1,\r\n adaptive=True,\r\n tie_weight=True,\r\n dropout=0.0,\r\n dropatt=0.0,\r\n untie_r=True,\r\n init=\"normal\",\r\n init_range=0.01,\r\n proj_init_std=0.01,\r\n init_std=0.02)\r\n```\r\n\r\nwhich seems not to be 100% correct. Where do I get the model json configuration from (so I can easily pass it to the `convert_transfo_xl_checkpoint_to_pytorch.py` script π€",
"Hi Stefan,\r\nYou have to create the configuration yourself indeed π\r\nI usually do it by looking at the training parameters of the Tensorflow code related to the model you are trying to load.",
"The voab `cutoffs` were wrong. I changed the configuration to:\r\n\r\n```python\r\nconfig = TransfoXLConfig(vocab_size_or_config_json_file=793472,\r\n cutoffs=[60000, 100000, 640000],\r\n d_model=1280,\r\n d_embed=1280,\r\n n_head=16,\r\n d_head=80,\r\n d_inner=8192,\r\n div_val=4,\r\n pre_lnorm=False,\r\n n_layer=24,\r\n tgt_len=32,\r\n ext_len=0,\r\n mem_len=128,\r\n clamp_len=-1,\r\n same_length=True,\r\n proj_share_all_but_first=False,\r\n attn_type=0,\r\n sample_softmax=-1,\r\n adaptive=True,\r\n tie_weight=True,\r\n dropout=0.0,\r\n dropatt=0.0,\r\n untie_r=True,\r\n init=\"normal\",\r\n init_range=0.01,\r\n proj_init_std=0.01,\r\n init_std=0.02,\r\n )\r\n```\r\n\r\nAnd then the `transformer/adaptive_softmax/cutoff_0/proj` key wasn't found in the `tf_weights` dict:\r\n\r\n```python\r\ntransformer/adaptive_softmax/cutoff_0/proj\r\nTraceback (most recent call last):\r\n File \"convert_transfo_xl_checkpoint_to_pytorch.py\", line 142, in <module>\r\n args.transfo_xl_dataset_file)\r\n File \"convert_transfo_xl_checkpoint_to_pytorch.py\", line 107, in convert_transfo_xl_checkpoint_to_pytorch\r\n model = load_tf_weights_in_transfo_xl(model, config, tf_path)\r\n File \"/mnt/pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling_transfo_xl.py\", line 150, in load_tf_weights_in_transfo_xl\r\n assert name in tf_weights\r\nAssertionError\r\n```\r\n",
"That's probably a question of weights or projection tying, try to set `tie_weight` or `proj_share_all_but_first` to `False` (the correct value should be indicated in Google/CMU hyper-parameters for lm1b).\r\n\r\n(I can convert this model later if you don't manage to but not before next week unfortunately)",
"Thanks for your help @thomwolf ! I'll try to find the correct configuration settings.\r\n\r\nWe are currently trying to integrate the Transformer-XL model into [flair](https://github.com/zalandoresearch/flair), and we would really like to use a larger (in terms of training size) model for downstream tasks like NER :)",
"Here's the last configuration I tried:\r\n\r\n```json\r\n{\r\n \"adaptive\": true,\r\n \"attn_type\": 0,\r\n \"clamp_len\": -1,\r\n \"cutoffs\": [\r\n 60000,\r\n 100000,\r\n 640000\r\n ],\r\n \"d_embed\": 1280,\r\n \"d_head\": 80,\r\n \"d_inner\": 8192,\r\n \"d_model\": 1280,\r\n \"div_val\": 4,\r\n \"dropatt\": 0.0,\r\n \"dropout\": 0.1,\r\n \"ext_len\": 0,\r\n \"init\": \"normal\",\r\n \"init_range\": 0.01,\r\n \"init_std\": 0.02,\r\n \"mem_len\": 32,\r\n \"n_head\": 16,\r\n \"n_layer\": 24,\r\n \"n_token\": 793472,\r\n \"pre_lnorm\": false,\r\n \"proj_init_std\": 0.01,\r\n \"same_length\": true,\r\n \"sample_softmax\": -1,\r\n \"tgt_len\": 32,\r\n \"tie_weight\": true,\r\n \"untie_r\": true,\r\n \"proj_share_all_but_first\": false,\r\n \"proj_same_dim\": false,\r\n \"tie_projs\": [\r\n true,\r\n false,\r\n true\r\n ]\r\n}\r\n````\r\n\r\nUnfortunately, an error is thrown. @thomwolf it would be awesome if you can take a look on this :)",
"Did you manage to convert this model @stefan-it?",
"Sadly, I couldn't managed to convert it (I tried several options)",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
" @stefan-it did you ever manage to convert this model?",
"Hi @irugina, unfortunately, I wasn't able to convert the model π"
] | 1,550 | 1,575 | 1,557 | COLLABORATOR | null | Hi,
I wanted to convert the TensorFlow checkpoint for the `lm1b` model to PyTorch with the `convert_transfo_xl_checkpoint_to_pytorch.py` script.
I downloaded the checkpoint with the [download.sh](https://github.com/kimiyoung/transformer-xl/blob/master/tf/sota/download.sh) script.
Then I called the convert script with:
```bash
$ python3 convert_transfo_xl_checkpoint_to_pytorch.py --pytorch_dump_folder_path converted --tf_checkpoint_path
/mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/checkpoint
```
Then the following error message is returned:
```bash
2019-02-19 22:46:54.693060: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open /mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
Traceback (most recent call last):
File "convert_transfo_xl_checkpoint_to_pytorch.py", line 116, in <module>
args.transfo_xl_dataset_file)
File "convert_transfo_xl_checkpoint_to_pytorch.py", line 81, in convert_transfo_xl_checkpoint_to_pytorch
model = load_tf_weights_in_transfo_xl(model, config, tf_path)
File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/modeling_transfo_xl.py", line 141, in load_tf_weights_in_transfo_xl
init_vars = tf.train.list_variables(tf_path)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py", line 95, in list_variables
reader = load_checkpoint(ckpt_dir_or_file)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/checkpoint_utils.py", line 64, in load_checkpoint
return pywrap_tensorflow.NewCheckpointReader(filename)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 382, in NewCheckpointReader
return CheckpointReader(compat.as_bytes(filepattern), status)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/errors_impl.py", line 548, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.DataLossError: Unable to open table file /mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model/checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
```
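One thing I'm not sure about: `--tf_checkpoint_path` points at the `checkpoint` bookkeeping file rather than an actual checkpoint prefix, which could explain the "bad magic number". A quick sanity check I can run (a sketch; the directory is from my setup):
```python
import tensorflow as tf

ckpt_dir = "/mnt/transformer-xl/tf/sota/pretrained_xl/tf_lm1b/model"
ckpt = tf.train.latest_checkpoint(ckpt_dir)  # resolves e.g. ".../model.ckpt-<step>"
print(ckpt)
print(tf.train.list_variables(ckpt)[:5])     # lists variables if the prefix is valid
```
If that lists variables, passing the resolved prefix to the conversion script should get past this particular error.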
I'm using the *0.6.1* version of `pytorch-pretrained-BERT` and the latest `tf-nightly-gpu` package that ships TensorFlow 1.13dev. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/298/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/297 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/297/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/297/comments | https://api.github.com/repos/huggingface/transformers/issues/297/events | https://github.com/huggingface/transformers/issues/297 | 412,147,861 | MDU6SXNzdWU0MTIxNDc4NjE= | 297 | Sudden catastrophic classification output during NER training | {
"login": "fabiocapsouza",
"id": 15973165,
"node_id": "MDQ6VXNlcjE1OTczMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/15973165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fabiocapsouza",
"html_url": "https://github.com/fabiocapsouza",
"followers_url": "https://api.github.com/users/fabiocapsouza/followers",
"following_url": "https://api.github.com/users/fabiocapsouza/following{/other_user}",
"gists_url": "https://api.github.com/users/fabiocapsouza/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fabiocapsouza/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fabiocapsouza/subscriptions",
"organizations_url": "https://api.github.com/users/fabiocapsouza/orgs",
"repos_url": "https://api.github.com/users/fabiocapsouza/repos",
"events_url": "https://api.github.com/users/fabiocapsouza/events{/privacy}",
"received_events_url": "https://api.github.com/users/fabiocapsouza/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I manage to solve this problem. There is an issue in the calculation of the total optimization steps in `run_squad.py` example that results in a negative learning rate because of the `warmup_linear` schedule. This happens because `t_total` is calculated based on `len(train_examples)` instead of `len(train_features)`. That may not be a problem for datasets with short sentences, but, for long sentences, one example may generate many entries in `train_features` due to the strategy of dividing an example in `DocSpan's`.",
"@fabiocapsouza I am trying to handle text classification but my dataset is also highly unbalanced. I am trying to find where I can adjust the class weights when training transformers. Which parameter you changed in your case?",
"@MendesSP , since the provided BERT model classes have the loss function hardcoded in the `forward`method, I had to write a subclass to override the `CrossEntropyLoss` definition passing a `weight` tensor."
] | 1,550 | 1,569 | 1,550 | CONTRIBUTOR | null | Hi,
I am fine-tuning a BERT model (based on `BertForTokenClassification`) on a NER task with 9 labels ("O" + BILU tags for 2 classes), and sometimes during training I run into this odd behavior: a network at 99% accuracy that is showing a converging trend suddenly shifts all of its predictions to a single class. This happens within the span of a single epoch.
Below are the confusion matrices and some other metrics one epoch before the event and after the event:
```
Epoch 7/10: 150.57s/it, val_acc=99.718% (53391/53542), val_acc_bilu=87.568% (162/185), val_rec=98.780%, val_prec=55.862%, val_f1=71.366%
Confusion matrix:
[[53229 2 66 25 2 25 8]
[ 0 7 0 7 0 0 0]
[ 0 0 14 0 0 0 0]
[ 0 0 0 67 0 0 1]
[ 1 0 0 3 11 0 1]
[ 1 0 1 0 0 14 0]
[ 0 0 0 7 1 0 49]]
Epoch 8/10: 150.64s/it, val_acc=0.030% (16/53542), val_acc_bilu=8.649% (16/185), val_rec=100.000%, val_prec=0.030%, val_f1=0.060%
Confusion matrix:
[[ 0 0 0 0 53357 0 0]
[ 0 0 0 0 14 0 0]
[ 0 0 0 0 14 0 0]
[ 0 0 0 0 68 0 0]
[ 0 0 0 0 16 0 0]
[ 0 0 0 0 16 0 0]
[ 0 0 0 0 57 0 0]]
```
I am using the default config for `bert-base-multilingual-cased` and a standard `CrossEntropyLoss`. The optimizer is an untouched `BertAdam` with learning rate 1e-5. The dataset is highly unbalanced (very few named entities, so >99% of the tokens are "O" tags), so I assign a weight of 0.01 to the "O" tag in the CE loss, roughly as sketched below.
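For reference, a minimal sketch of how the weighting is applied (the label order, shapes, and tensors below are stand-ins for illustration, not my actual pipeline):
```python
import torch
import torch.nn as nn

num_labels = 9  # "O" + BILU tags for 2 classes
# Down-weight the dominant "O" tag, assumed here to sit at label index 0.
class_weights = torch.tensor([0.01] + [1.0] * (num_labels - 1))
loss_fct = nn.CrossEntropyLoss(weight=class_weights)

# Dummy stand-ins for the model logits and gold labels.
logits = torch.randn(32, 128, num_labels)          # (batch, seq_len, num_labels)
labels = torch.randint(0, num_labels, (32, 128))   # (batch, seq_len)
loss = loss_fct(logits.view(-1, num_labels), labels.view(-1))
```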
Has anyone faced a similar issue?
Thanks in advance | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/297/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/296 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/296/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/296/comments | https://api.github.com/repos/huggingface/transformers/issues/296/events | https://github.com/huggingface/transformers/issues/296 | 412,063,102 | MDU6SXNzdWU0MTIwNjMxMDI= | 296 | How to change config parameters when loading the model with `from_pretrained` | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I have the same question. I need to change the `hidden_dropout_prob`. How is that possible?",
"Same question. Any best practice? ",
"Oh, I find this code works:\r\n```python\r\nhidden_droput_prob = 0.3\r\nconfig = BertConfig.from_pretrained(\"bert-base-uncased\", num_labels=num_labels, hidden_dropout_prob=hidden_dropout_prob)\r\nmodel = BertForMultiLabelClassification.from_pretrained(\"bert-base-uncased\", config=config)\r\n```\r\nAnd `print(model)` could see that drop_out changes:\r\n```sh\r\n (output): BertOutput(\r\n (dense): Linear(in_features=3072, out_features=768, bias=True)\r\n (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.3, inplace=False)\r\n```\r\n",
"Hi @Opdoop - this code overrides the config parameters of the pertained BERT model. Did I understand correctly? \r\n\r\nAlso, how can we ensure that the `num_labels` parameter is also updated? I don't see it in the output of `print(model)`.",
"@kaankork \r\n* For override:\r\nYes. Your understand is correct.\r\n* For `num_labels`:\r\nIt's been a long time. I didn't sure. But you should see `num_labels` by the last layer shape in `print(model)` .",
"Any update on this?",
"I need to pass some values to config as well, will save me a lot of time....",
"Just use the `update` method. \r\nFor example, if you want to change the number of hidden layers, simply use `config.update({'num_hidden_layers': 1})`."
] | 1,550 | 1,657 | 1,550 | CONTRIBUTOR | null | I have created a model by extending `PreTrainedBertModel`:
```python
import torch.nn as nn
from pytorch_pretrained_bert.modeling import PreTrainedBertModel, BertModel

class BertForMultiLabelClassification(PreTrainedBertModel):
def __init__(self, config, num_labels=2):
super(BertForMultiLabelClassification, self).__init__(config)
self.num_labels = num_labels
self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.classifier = nn.Linear(config.hidden_size, num_labels)
self.apply(self.init_bert_weights)
# some code here ...
```
I am creating an instance of this model:
```python
model = BertForMultiLabelClassification.from_pretrained(args.bert_model,
cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(
args.local_rank),
num_labels=num_labels)
```
what is an effective way to modify parameters of the default config when creating an instance of `BertForMultiLabelClassification` (say, setting a different value for `config.hidden_dropout_prob`)? Any thoughts would be appreciated.
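The only workaround I've found so far is to mutate the dropout modules after the model is instantiated; a sketch (I'm not sure this is the intended API):
```python
import torch.nn as nn

model = BertForMultiLabelClassification.from_pretrained("bert-base-uncased",
                                                        num_labels=num_labels)

# Workaround sketch: overwrite the dropout probability in place on every
# nn.Dropout module; nn.Dropout reads self.p at forward time, so this sticks.
for module in model.modules():
    if isinstance(module, nn.Dropout):
        module.p = 0.3  # hypothetical new value
```
It works, but it feels hacky and touches every dropout layer at once, which is why I'm asking about the intended way.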
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/296/reactions",
"total_count": 8,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/296/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/295 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/295/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/295/comments | https://api.github.com/repos/huggingface/transformers/issues/295/events | https://github.com/huggingface/transformers/pull/295 | 411,902,273 | MDExOlB1bGxSZXF1ZXN0MjU0MjM2MDA0 | 295 | fix broken link in readme | {
"login": "tnlin",
"id": 5557403,
"node_id": "MDQ6VXNlcjU1NTc0MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5557403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tnlin",
"html_url": "https://github.com/tnlin",
"followers_url": "https://api.github.com/users/tnlin/followers",
"following_url": "https://api.github.com/users/tnlin/following{/other_user}",
"gists_url": "https://api.github.com/users/tnlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tnlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tnlin/subscriptions",
"organizations_url": "https://api.github.com/users/tnlin/orgs",
"repos_url": "https://api.github.com/users/tnlin/repos",
"events_url": "https://api.github.com/users/tnlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/tnlin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/295/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/295",
"html_url": "https://github.com/huggingface/transformers/pull/295",
"diff_url": "https://github.com/huggingface/transformers/pull/295.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/295.patch",
"merged_at": 1550581229000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/294 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/294/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/294/comments | https://api.github.com/repos/huggingface/transformers/issues/294/events | https://github.com/huggingface/transformers/issues/294 | 411,855,459 | MDU6SXNzdWU0MTE4NTU0NTk= | 294 | Extract Features for GPT2 and Transformer-XL | {
"login": "danlou",
"id": 16802508,
"node_id": "MDQ6VXNlcjE2ODAyNTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/16802508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danlou",
"html_url": "https://github.com/danlou",
"followers_url": "https://api.github.com/users/danlou/followers",
"following_url": "https://api.github.com/users/danlou/following{/other_user}",
"gists_url": "https://api.github.com/users/danlou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danlou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danlou/subscriptions",
"organizations_url": "https://api.github.com/users/danlou/orgs",
"repos_url": "https://api.github.com/users/danlou/repos",
"events_url": "https://api.github.com/users/danlou/events{/privacy}",
"received_events_url": "https://api.github.com/users/danlou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Dan,\r\nYou can extract all the hidden-states of Transformer-XL using the snippet indicated in the readme [here](https://github.com/huggingface/pytorch-pretrained-BERT#12-transfoxlmodel).\r\nFor the GPT-2 it's not possible right now.\r\nI can add it in the next release (or you can submit a PR).",
"@thomwolf Can we extract final hidden layer representations from GPT-2 1.5 billion models now?"
] | 1,550 | 1,584 | 1,550 | NONE | null | Hi everyone,
I'm interested in extracting token-level embeddings from the pre-trained GPT-2 and Transformer-XL models, and noticed that `extract_features.py` seems to be specific to BERT.
Can you let us know if you have any plans to provide a similar implementation for models other than BERT?
Alternatively, could you possibly provide some hints on how to extract the token-level embeddings using the code you already made available with the models?
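For Transformer-XL I've gotten as far as the snippet below (adapted from the README usage example, so treat it as a sketch), but I'm not sure whether the final hidden states are the right token-level embeddings to use:
```python
import torch
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLModel

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")
model.eval()

tokens = tokenizer.tokenize("Who was Jim Henson ?")
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    hidden_states, mems = model(ids)  # hidden_states: (batch, seq_len, d_model)
```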
Many thanks, great work! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/294/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/293 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/293/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/293/comments | https://api.github.com/repos/huggingface/transformers/issues/293/events | https://github.com/huggingface/transformers/pull/293 | 411,633,490 | MDExOlB1bGxSZXF1ZXN0MjU0MDM0MTM1 | 293 | Minor README typos corrected | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/293/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/293",
"html_url": "https://github.com/huggingface/transformers/pull/293",
"diff_url": "https://github.com/huggingface/transformers/pull/293.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/293.patch",
"merged_at": 1550563202000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/292 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/292/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/292/comments | https://api.github.com/repos/huggingface/transformers/issues/292/events | https://github.com/huggingface/transformers/pull/292 | 411,581,192 | MDExOlB1bGxSZXF1ZXN0MjUzOTk1MTQw | 292 | Fix typo in `GPT2Model` code sample | {
"login": "sam-writer",
"id": 47401552,
"node_id": "MDQ6VXNlcjQ3NDAxNTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/47401552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sam-writer",
"html_url": "https://github.com/sam-writer",
"followers_url": "https://api.github.com/users/sam-writer/followers",
"following_url": "https://api.github.com/users/sam-writer/following{/other_user}",
"gists_url": "https://api.github.com/users/sam-writer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sam-writer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sam-writer/subscriptions",
"organizations_url": "https://api.github.com/users/sam-writer/orgs",
"repos_url": "https://api.github.com/users/sam-writer/repos",
"events_url": "https://api.github.com/users/sam-writer/events{/privacy}",
"received_events_url": "https://api.github.com/users/sam-writer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | Typo prevented code from running | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/292/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/292",
"html_url": "https://github.com/huggingface/transformers/pull/292",
"diff_url": "https://github.com/huggingface/transformers/pull/292.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/292.patch",
"merged_at": 1550520459000
} |
https://api.github.com/repos/huggingface/transformers/issues/291 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/291/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/291/comments | https://api.github.com/repos/huggingface/transformers/issues/291/events | https://github.com/huggingface/transformers/issues/291 | 411,558,891 | MDU6SXNzdWU0MTE1NTg4OTE= | 291 | Too much info @ stdout | {
"login": "mahnerak",
"id": 1367529,
"node_id": "MDQ6VXNlcjEzNjc1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahnerak",
"html_url": "https://github.com/mahnerak",
"followers_url": "https://api.github.com/users/mahnerak/followers",
"following_url": "https://api.github.com/users/mahnerak/following{/other_user}",
"gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions",
"organizations_url": "https://api.github.com/users/mahnerak/orgs",
"repos_url": "https://api.github.com/users/mahnerak/repos",
"events_url": "https://api.github.com/users/mahnerak/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahnerak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh that's right, this one should be a logging.info event like the other ones.",
"Fixed"
] | 1,550 | 1,551 | 1,551 | NONE | null | As a library, it is preferable to have no unnecessary `print`s in the repo. Using `pytorch-pretrained-BERT` currently makes it impossible to use `stdout` as the main output mechanism for my code.
For example, it prints "Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex." directly to `stdout`.
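Since the library already uses the `logging` module elsewhere, routing such messages through a logger would keep `stdout` clean. A minimal sketch of what I mean (Python's fallback logging handler writes to `stderr`):
```python
import logging

logger = logging.getLogger(__name__)

# Unconfigured loggers fall back to a last-resort handler that writes to
# stderr, so stdout stays free for the program's own output.
logger.warning("Better speed can be achieved with apex installed from "
               "https://www.github.com/nvidia/apex.")
```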
In short, I guess emitting this kind of messages to `stderr` (for instance through a logger, as sketched above) rather than `stdout` would be a better idea. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/291/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/290 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/290/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/290/comments | https://api.github.com/repos/huggingface/transformers/issues/290/events | https://github.com/huggingface/transformers/pull/290 | 411,431,440 | MDExOlB1bGxSZXF1ZXN0MjUzODgxMTA3 | 290 | Typo/formatting fixes in README | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Changes are... even more minor then. I'll close this and open another (hope that's OK)."
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/290/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/290",
"html_url": "https://github.com/huggingface/transformers/pull/290",
"diff_url": "https://github.com/huggingface/transformers/pull/290.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/290.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/289 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/289/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/289/comments | https://api.github.com/repos/huggingface/transformers/issues/289/events | https://github.com/huggingface/transformers/issues/289 | 411,430,245 | MDU6SXNzdWU0MTE0MzAyNDU= | 289 | HugginFace or HuggingFace? | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks, I'll fix that in a future release."
] | 1,550 | 1,551 | 1,551 | CONTRIBUTOR | null | Thought to flag, also given the terrific work on this repo (and others), that the company name in the code here seems to be systematically spelt wrong (?)
https://github.com/huggingface/pytorch-pretrained-BERT/search?q=hugginface&unscoped_q=hugginface | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/289/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 4,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/289/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/288 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/288/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/288/comments | https://api.github.com/repos/huggingface/transformers/issues/288/events | https://github.com/huggingface/transformers/pull/288 | 411,417,662 | MDExOlB1bGxSZXF1ZXN0MjUzODcwNTEz | 288 | forgot to add regex to requirements.txt :( | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,550 | 1,566 | 1,550 | MEMBER | null | Updating requirements to add `regex` for GPT-2 tokenizer.
A test on the OpenAI GPT-2 tokenizer module would have caught that.
But (byte-level) BPE tokenization tests are such a pain to make properly.
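Roughly the kind of smoke test to add (a sketch; the method names assume the new GPT-2 tokenizer API):
```python
from pytorch_pretrained_bert import GPT2Tokenizer

def test_gpt2_byte_level_bpe_roundtrip():
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    text = "Hello world, this is a byte-level BPE test."
    ids = tokenizer.encode(text)
    assert tokenizer.decode(ids) == text
```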
Let's add one in the next release, after the ACL deadline. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/288/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/288",
"html_url": "https://github.com/huggingface/transformers/pull/288",
"diff_url": "https://github.com/huggingface/transformers/pull/288.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/288.patch",
"merged_at": 1550487612000
} |
https://api.github.com/repos/huggingface/transformers/issues/287 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/287/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/287/comments | https://api.github.com/repos/huggingface/transformers/issues/287/events | https://github.com/huggingface/transformers/pull/287 | 411,398,147 | MDExOlB1bGxSZXF1ZXN0MjUzODU1MzI5 | 287 | Gpt2 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,550 | 1,550 | 1,550 | MEMBER | null | Adding GPT-2... | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/287/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/287",
"html_url": "https://github.com/huggingface/transformers/pull/287",
"diff_url": "https://github.com/huggingface/transformers/pull/287.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/287.patch",
"merged_at": 1550486286000
} |
https://api.github.com/repos/huggingface/transformers/issues/286 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/286/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/286/comments | https://api.github.com/repos/huggingface/transformers/issues/286/events | https://github.com/huggingface/transformers/pull/286 | 411,109,226 | MDExOlB1bGxSZXF1ZXN0MjUzNjY1NjMx | 286 | Update activation function docstring | {
"login": "hendrycks",
"id": 11670606,
"node_id": "MDQ6VXNlcjExNjcwNjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/11670606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hendrycks",
"html_url": "https://github.com/hendrycks",
"followers_url": "https://api.github.com/users/hendrycks/followers",
"following_url": "https://api.github.com/users/hendrycks/following{/other_user}",
"gists_url": "https://api.github.com/users/hendrycks/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hendrycks/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hendrycks/subscriptions",
"organizations_url": "https://api.github.com/users/hendrycks/orgs",
"repos_url": "https://api.github.com/users/hendrycks/repos",
"events_url": "https://api.github.com/users/hendrycks/events{/privacy}",
"received_events_url": "https://api.github.com/users/hendrycks/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks!"
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/286/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/286",
"html_url": "https://github.com/huggingface/transformers/pull/286",
"diff_url": "https://github.com/huggingface/transformers/pull/286.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/286.patch",
"merged_at": 1550413847000
} |
|
https://api.github.com/repos/huggingface/transformers/issues/285 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/285/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/285/comments | https://api.github.com/repos/huggingface/transformers/issues/285/events | https://github.com/huggingface/transformers/issues/285 | 411,074,179 | MDU6SXNzdWU0MTEwNzQxNzk= | 285 | Anyone tried this model to write a next sentence? | {
"login": "qnkhuat",
"id": 25661381,
"node_id": "MDQ6VXNlcjI1NjYxMzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/25661381?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qnkhuat",
"html_url": "https://github.com/qnkhuat",
"followers_url": "https://api.github.com/users/qnkhuat/followers",
"following_url": "https://api.github.com/users/qnkhuat/following{/other_user}",
"gists_url": "https://api.github.com/users/qnkhuat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qnkhuat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qnkhuat/subscriptions",
"organizations_url": "https://api.github.com/users/qnkhuat/orgs",
"repos_url": "https://api.github.com/users/qnkhuat/repos",
"events_url": "https://api.github.com/users/qnkhuat/events{/privacy}",
"received_events_url": "https://api.github.com/users/qnkhuat/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Closing for now.",
"Why ? @thomwolf ",
"I'm trying to clean up the issues to get a better view of what needs to be fixed.\r\nBut you are right opening/closing issue is too binary. Let's add labels instead.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,550 | 1,557 | 1,557 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/285/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/284 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/284/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/284/comments | https://api.github.com/repos/huggingface/transformers/issues/284/events | https://github.com/huggingface/transformers/issues/284 | 410,782,598 | MDU6SXNzdWU0MTA3ODI1OTg= | 284 | Error in Apex's FusedLayerNorm | {
"login": "Hyperparticle",
"id": 8497170,
"node_id": "MDQ6VXNlcjg0OTcxNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8497170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hyperparticle",
"html_url": "https://github.com/Hyperparticle",
"followers_url": "https://api.github.com/users/Hyperparticle/followers",
"following_url": "https://api.github.com/users/Hyperparticle/following{/other_user}",
"gists_url": "https://api.github.com/users/Hyperparticle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hyperparticle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hyperparticle/subscriptions",
"organizations_url": "https://api.github.com/users/Hyperparticle/orgs",
"repos_url": "https://api.github.com/users/Hyperparticle/repos",
"events_url": "https://api.github.com/users/Hyperparticle/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hyperparticle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This was an error in `apex`, due to mismatched compiled libraries. Fix can be found [here](https://github.com/NVIDIA/apex/issues/156#issuecomment-465301976).\r\n\r\n> Try a full `pip uninstall apex`, then `cd apex_repo_dir; rm-rf build; python setup.py install --cuda_ext --cpp_ext` and see if the segfault persists."
] | 1,550 | 1,550 | 1,550 | NONE | null | After installing `apex` with the CUDA extensions and running BERT, I get the following error in `FusedLayerNormAffineFunction`, [apex/normalization/fused_layer_norm.py](https://github.com/NVIDIA/apex/blob/master/apex/normalization/fused_layer_norm.py#L16) (line 21).
```
RuntimeError: a Tensor with 2482176 elements cannot be converted to Scalar (item at /pytorch/aten/src/ATen/native/Scalar.cpp:9)
```
Here are the shapes of my tensors:
```
input_ - [32, 101, 768] (this is the embeddings tensor in BertEmbeddings)
bias_ - [768]
weight_ - [768]
self.normalized_shape - [768]
```
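A minimal repro of the failing call, in case it helps (a sketch assuming an apex build with the CUDA extensions):
```python
import torch
from apex.normalization import FusedLayerNorm

ln = FusedLayerNorm(768).cuda()
x = torch.randn(32, 101, 768, device="cuda")  # same shape as the embeddings tensor above
out = ln(x)  # raises the RuntimeError for me
```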
I'm not sure if it's a problem with `pytorch-pretrained-BERT` or `apex`. Any idea?
Full stacktrace below.
```
File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 710, in forward
embedding_output = self.embeddings(input_ids, token_type_ids)
File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 261, in forward
embeddings = self.LayerNorm(embeddings)
File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py", line 149, in forward
input, self.weight, self.bias)
File "/home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/normalization/fused_layer_norm.py", line 21, in forward
input_, self.normalized_shape, weight_, bias_, self.eps)
RuntimeError: a Tensor with 2482176 elements cannot be converted to Scalar (item at /pytorch/aten/src/ATen/native/Scalar.cpp:9)
frame #0: std::function<std::string ()>::operator()() const + 0x11 (0x7f1aa5da3021 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x2a (0x7f1aa5da28ea in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #2: at::native::item(at::Tensor const&) + 0x12c3 (0x7f1aa690d5b3 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #3: at::TypeDefault::item(at::Tensor const&) const + 0x55 (0x7f1aa6b1c905 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #4: torch::autograd::VariableType::eye_out(at::Tensor&, long, long) const + 0x184 (0x7f1aa4faeec4 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libtorch.so.1)
frame #5: <unknown function> + 0x89ca (0x7f1a82e739ca in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #6: layer_norm_affine(at::Tensor, c10::ArrayRef<long>, at::Tensor, at::Tensor, double) + 0x185 (0x7f1a82e762a5 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #7: <unknown function> + 0x18d44 (0x7f1a82e83d44 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #8: <unknown function> + 0x16495 (0x7f1a82e81495 in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/apex-0.1-py3.6-linux-x86_64.egg/fused_layer_norm_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #9: _PyCFunction_FastCallDict + 0x154 (0x55a8f9925744 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #10: <unknown function> + 0x198610 (0x55a8f99ac610 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #11: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #12: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so)
frame #13: _PyFunction_FastCallDict + 0x11b (0x55a8f99a6bab in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #14: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #15: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #16: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #17: THPFunction_do_forward(THPFunction*, _object*) + 0x15c (0x7f1ae02e21ec in /home/hyper/Documents/anaconda3/envs/allennlp/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #18: PyCFunction_Call + 0x5f (0x55a8f992863f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #19: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #20: <unknown function> + 0x16ba91 (0x55a8f997fa91 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #21: _PyObject_FastCallDict + 0x8b (0x55a8f992592b in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #22: <unknown function> + 0x19857e (0x55a8f99ac57e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #23: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #24: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so)
frame #25: _PyFunction_FastCallDict + 0x11b (0x55a8f99a6bab in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #26: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #27: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #28: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #29: _PyEval_EvalFrameDefault + 0x19ec (0x55a8f99d2a6c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #30: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so)
frame #31: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #32: _PyFunction_FastCallDict + 0x1bc (0x55a8f99a6c4c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #33: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #34: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #35: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #36: <unknown function> + 0x16ba91 (0x55a8f997fa91 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #37: _PyObject_FastCallDict + 0x8b (0x55a8f992592b in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #38: <unknown function> + 0x19857e (0x55a8f99ac57e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #39: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #40: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so)
frame #41: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #42: _PyFunction_FastCallDict + 0x3da (0x55a8f99a6e6a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #43: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #44: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #45: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #46: _PyEval_EvalFrameDefault + 0x19ec (0x55a8f99d2a6c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #47: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so)
frame #48: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #49: _PyFunction_FastCallDict + 0x1bc (0x55a8f99a6c4c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #50: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #51: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #52: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #53: <unknown function> + 0x16ba91 (0x55a8f997fa91 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #54: _PyObject_FastCallDict + 0x8b (0x55a8f992592b in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #55: <unknown function> + 0x19857e (0x55a8f99ac57e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #56: _PyEval_EvalFrameDefault + 0x30a (0x55a8f99d138a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #57: <unknown function> + 0x71e1 (0x7f1af51ee1e1 in /home/hyper/.PyCharm2018.1/system/cythonExtensions/_pydevd_frame_eval_ext/pydevd_frame_evaluator.cpython-36m-x86_64-linux-gnu.so)
frame #58: <unknown function> + 0x1918e4 (0x55a8f99a58e4 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #59: _PyFunction_FastCallDict + 0x3da (0x55a8f99a6e6a in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #60: _PyObject_FastCallDict + 0x26f (0x55a8f9925b0f in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #61: _PyObject_Call_Prepend + 0x63 (0x55a8f992a6a3 in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #62: PyObject_Call + 0x3e (0x55a8f992554e in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
frame #63: _PyEval_EvalFrameDefault + 0x19ec (0x55a8f99d2a6c in /home/hyper/Documents/anaconda3/envs/allennlp/bin/python)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/284/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/283 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/283/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/283/comments | https://api.github.com/repos/huggingface/transformers/issues/283/events | https://github.com/huggingface/transformers/issues/283 | 410,723,439 | MDU6SXNzdWU0MTA3MjM0Mzk= | 283 | unicode | {
"login": "elensergwork",
"id": 29634858,
"node_id": "MDQ6VXNlcjI5NjM0ODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/29634858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elensergwork",
"html_url": "https://github.com/elensergwork",
"followers_url": "https://api.github.com/users/elensergwork/followers",
"following_url": "https://api.github.com/users/elensergwork/following{/other_user}",
"gists_url": "https://api.github.com/users/elensergwork/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elensergwork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elensergwork/subscriptions",
"organizations_url": "https://api.github.com/users/elensergwork/orgs",
"repos_url": "https://api.github.com/users/elensergwork/repos",
"events_url": "https://api.github.com/users/elensergwork/events{/privacy}",
"received_events_url": "https://api.github.com/users/elensergwork/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, the examples are not adapted for Python 2, only the library.\r\nI don't plan to adapt or maintain them but feel free to submit a PR!",
"env: python2.7\r\nline 662: writer.write(json.dumps(all_predictions, indent=4) + \"\\n\")\r\nchange as :writer.write(json.dumps(all_predictions, indent=4).decode('utf-8') + \"\\n\")"
] | 1,550 | 1,560 | 1,550 | NONE | null | The general `run_squad.py` example doesn't appear to work properly on Python 2.7 because of str-vs-unicode issues when dumping the predictions JSON during the eval.
python2.7 run_squad.py \
--bert_model bert-base-uncased \
--do_train \
--do_predict \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /pytorch-pretrained-BERT/tmp/debug_squad/
02/15/2019 04:55:17 - INFO - __main__ - Writing predictions to: /pytorch-pretrained-BERT/tmp/debug_squad/predictions.json
02/15/2019 04:55:17 - INFO - __main__ - Writing nbest to: /pytorch-pretrained-BERT/tmp/debug_squad/nbest_predictions.json
Traceback (most recent call last):
File "run_squad.py", line 1077, in <module>
main()
File "run_squad.py", line 1073, in main
args.version_2_with_negative, args.null_score_diff_threshold)
File "run_squad.py", line 619, in write_predictions
writer.write(json.dumps(all_predictions, indent=4) + "\n")
TypeError: write() argument 1 must be unicode, not str
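For what it's worth, here is a sketch of a fix that should work on both Python 2 and 3 (the variable names are taken from the traceback; the values below are stand-ins):
```python
import io
import json

all_predictions = {"example-id": u"some answer"}  # stand-in for the real dict
output_prediction_file = "predictions.json"       # stand-in path

with io.open(output_prediction_file, "w", encoding="utf-8") as writer:
    data = json.dumps(all_predictions, indent=4)
    if isinstance(data, bytes):  # on Python 2, json.dumps returns a byte str
        data = data.decode("utf-8")
    writer.write(data + u"\n")
```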
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/283/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/283/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/282 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/282/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/282/comments | https://api.github.com/repos/huggingface/transformers/issues/282/events | https://github.com/huggingface/transformers/pull/282 | 410,648,826 | MDExOlB1bGxSZXF1ZXN0MjUzMzM5NDI4 | 282 | Fix some bug about SQuAD code | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, thanks @wlhgtc!"
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | Fix issue in #207

This error occurs when `nbest` contains only one item and that item has no `text`, so the existing code that appends an "empty" prediction is never reached.
I added another condition to handle this case; a rough sketch is below.
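A self-contained sketch of the guard (the real change is in the linked lines; `_NbestPrediction` is the namedtuple from `run_squad.py`):
```python
import collections

_NbestPrediction = collections.namedtuple(
    "NbestPrediction", ["text", "start_logit", "end_logit"])

# Stand-in: a single candidate with empty text, the case that used to slip through.
nbest = [_NbestPrediction(text="", start_logit=0.0, end_logit=0.0)]

# Guard: also insert the "empty" prediction when the only candidate has no text.
if not nbest or (len(nbest) == 1 and not nbest[0].text):
    nbest.insert(0, _NbestPrediction(text="empty", start_logit=0.0, end_logit=0.0))
```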
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py#L570-L590 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/282/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/282",
"html_url": "https://github.com/huggingface/transformers/pull/282",
"diff_url": "https://github.com/huggingface/transformers/pull/282.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/282.patch",
"merged_at": 1550221611000
} |
https://api.github.com/repos/huggingface/transformers/issues/281 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/281/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/281/comments | https://api.github.com/repos/huggingface/transformers/issues/281/events | https://github.com/huggingface/transformers/issues/281 | 410,646,108 | MDU6SXNzdWU0MTA2NDYxMDg= | 281 | Conversion of gpt-2 small model | {
"login": "lahwran",
"id": 550498,
"node_id": "MDQ6VXNlcjU1MDQ5OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/550498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lahwran",
"html_url": "https://github.com/lahwran",
"followers_url": "https://api.github.com/users/lahwran/followers",
"following_url": "https://api.github.com/users/lahwran/following{/other_user}",
"gists_url": "https://api.github.com/users/lahwran/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lahwran/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lahwran/subscriptions",
"organizations_url": "https://api.github.com/users/lahwran/orgs",
"repos_url": "https://api.github.com/users/lahwran/repos",
"events_url": "https://api.github.com/users/lahwran/events{/privacy}",
"received_events_url": "https://api.github.com/users/lahwran/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'd like to help out on this. I will have a look and try to understand the earlier bridges in this repo. Let me know if you see anywhere a newcomer can be helpful with.",
"Sure, would be happy to welcome a PR.\r\nYou can start from `modeling_openai.py` and `tokenization_openai.py`'s codes.\r\nIt's pretty much the same architecture (read OpenAI's paper first!)\r\nYou should mostly reimplement the BPE tokenization to work byte-level and move the layer norms modules to the input rather than the output of the layers.",
"Ok, GPT-2 should be in the coming 0.6.0 release (see #287)",
"Ok it's on pip: https://github.com/huggingface/pytorch-pretrained-BERT/releases/tag/v0.6.0\r\nPlease read the updated README for details. All should be there (model and examples).\r\nHave a nice week y'all.",
"@thomwolf do we have pytorch implementation of GPT-2 small? ",
"Yes, just read the README"
] | 1,550 | 1,552 | 1,550 | NONE | null | Hey! This seems like something a lot of folks will want. I'd like to be able to load GPT-2 117M and fine-tune it. What's necessary to convert it? I looked at the tensorflow code a little and it looks vaguely related to Transformer-XL, but I haven't looked at the paper or related work yet. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/281/reactions",
"total_count": 13,
"+1": 8,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/281/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/280 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/280/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/280/comments | https://api.github.com/repos/huggingface/transformers/issues/280/events | https://github.com/huggingface/transformers/issues/280 | 410,591,310 | MDU6SXNzdWU0MTA1OTEzMTA= | 280 | Have you eval the inference speed of transformer-xl? | {
"login": "guotong1988",
"id": 4702353,
"node_id": "MDQ6VXNlcjQ3MDIzNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4702353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guotong1988",
"html_url": "https://github.com/guotong1988",
"followers_url": "https://api.github.com/users/guotong1988/followers",
"following_url": "https://api.github.com/users/guotong1988/following{/other_user}",
"gists_url": "https://api.github.com/users/guotong1988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guotong1988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guotong1988/subscriptions",
"organizations_url": "https://api.github.com/users/guotong1988/orgs",
"repos_url": "https://api.github.com/users/guotong1988/repos",
"events_url": "https://api.github.com/users/guotong1988/events{/privacy}",
"received_events_url": "https://api.github.com/users/guotong1988/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,550 | 1,551 | 1,551 | CONTRIBUTOR | null | Thank you very much! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/280/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/279 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/279/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/279/comments | https://api.github.com/repos/huggingface/transformers/issues/279/events | https://github.com/huggingface/transformers/issues/279 | 410,143,066 | MDU6SXNzdWU0MTAxNDMwNjY= | 279 | DataParallel imbalanced memory usage | {
"login": "hongkahjun",
"id": 29894605,
"node_id": "MDQ6VXNlcjI5ODk0NjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/29894605?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hongkahjun",
"html_url": "https://github.com/hongkahjun",
"followers_url": "https://api.github.com/users/hongkahjun/followers",
"following_url": "https://api.github.com/users/hongkahjun/following{/other_user}",
"gists_url": "https://api.github.com/users/hongkahjun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hongkahjun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hongkahjun/subscriptions",
"organizations_url": "https://api.github.com/users/hongkahjun/orgs",
"repos_url": "https://api.github.com/users/hongkahjun/repos",
"events_url": "https://api.github.com/users/hongkahjun/events{/privacy}",
"received_events_url": "https://api.github.com/users/hongkahjun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Managed to get volatile GPU to work properly but memory allocation is sitll imbalanced\r\n\r\n+-----------------------------------------------------------------------------+\r\n| NVIDIA-SMI 410.78 Driver Version: 410.78 CUDA Version: 10.0 |\r\n|-------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n|===============================+======================+======================|\r\n| 0 Tesla P40 Off | 00000275:00:00.0 Off | 0 |\r\n| N/A 43C P0 61W / 250W | 11151MiB / 22919MiB | 99% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 1 Tesla P40 Off | 00003984:00:00.0 Off | 0 |\r\n| N/A 41C P0 60W / 250W | 5979MiB / 22919MiB | 99% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 2 Tesla P40 Off | 00005B97:00:00.0 Off | 0 |\r\n| N/A 38C P0 63W / 250W | 5979MiB / 22919MiB | 100% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n| 3 Tesla P40 Off | 0000EA90:00:00.0 Off | 0 |\r\n| N/A 42C P0 61W / 250W | 5979MiB / 22919MiB | 99% Default |\r\n+-------------------------------+----------------------+----------------------+\r\n\r\n+-----------------------------------------------------------------------------+\r\n| Processes: GPU Memory |\r\n| GPU PID Type Process name Usage |\r\n|=============================================================================|\r\n| 0 5927 C python 11141MiB |\r\n| 1 5927 C python 5969MiB |\r\n| 2 5927 C python 5969MiB |\r\n| 3 5927 C python 5969MiB |\r\n+-----------------------------------------------------------------------------+",
"Yes, there is no mechanism to balance memory in the examples. In NVIDIA's tests, it didn't help."
] | 1,550 | 1,551 | 1,551 | NONE | null | Similar to this issue: https://discuss.pytorch.org/t/dataparallel-imbalanced-memory-usage/22551/12, when I run run_lm_finetuning.py using 4 GPUs on Microsoft Azure, the first GPU has 4000MB of memory usage while the other 3 are at 700MB. The Volatile GPU-Util for the first GPU is also at 100% while the rest are at 0%.
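For context, a hedged sketch of the workaround usually suggested for this (names are illustrative, not from this repo): computing the loss inside `forward()` so `DataParallel` gathers one scalar per GPU instead of the full logits tensor, which is what concentrates memory on the first GPU.

```python
import torch.nn as nn

class ModelWithLoss(nn.Module):
    # Illustrative wrapper: returns the loss from forward() itself.
    def __init__(self, model):
        super(ModelWithLoss, self).__init__()
        self.model = model
        self.loss_fct = nn.CrossEntropyLoss()

    def forward(self, input_ids, labels):
        logits = self.model(input_ids)
        # Per-replica loss; DataParallel only has to gather scalars on GPU 0.
        return self.loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
```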
It seems that the solution might involve incorporating the loss calculation in the forward pass (as in the sketch above), but I do not know how to implement it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/279/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/278 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/278/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/278/comments | https://api.github.com/repos/huggingface/transformers/issues/278/events | https://github.com/huggingface/transformers/issues/278 | 410,074,977 | MDU6SXNzdWU0MTAwNzQ5Nzc= | 278 | PAD symbols change the output | {
"login": "juditacs",
"id": 1611053,
"node_id": "MDQ6VXNlcjE2MTEwNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1611053?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/juditacs",
"html_url": "https://github.com/juditacs",
"followers_url": "https://api.github.com/users/juditacs/followers",
"following_url": "https://api.github.com/users/juditacs/following{/other_user}",
"gists_url": "https://api.github.com/users/juditacs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/juditacs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juditacs/subscriptions",
"organizations_url": "https://api.github.com/users/juditacs/orgs",
"repos_url": "https://api.github.com/users/juditacs/repos",
"events_url": "https://api.github.com/users/juditacs/events{/privacy}",
"received_events_url": "https://api.github.com/users/juditacs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Judit:\r\n- Regarding the padding: you should send an `attention_mask` with the input if the input is smaller than the tensor you are sending in (see the description on `BertModel` in the README).\r\n- Regarding the seed: don't forget to put your model in eval mode (`model.eval()`) to disable the dropout layers.",
"@thomwolf \r\n\r\nDespite the `attention_mask` the values are a slightly different.\r\n\r\nIt is normal that `[PAD]` vectors have different values?\r\n\r\n```\r\nfrom pytorch_transformers import BertModel\r\nfrom rest.run_glue import *\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=False)\r\nmodel = BertModel.from_pretrained('bert-base-uncased')\r\nmodel.eval()\r\n\r\ntorch.manual_seed(0)\r\nsent = \"this is a complicated sentence [SEP]\"\r\ntokens = ['[CLS]'] + tokenizer.tokenize(sent)\r\nids = tokenizer.convert_tokens_to_ids(tokens)\r\nt = torch.LongTensor([ids])\r\n\r\nwith torch.no_grad():\r\n out = model(t)[0]\r\n\r\ntorch.manual_seed(0)\r\nsent = \"this is a complicated sentence [SEP]\"\r\ntokens = ['[CLS]'] + tokenizer.tokenize(sent)\r\ntokens.extend(['[PAD]'] * 3)\r\nids = torch.tensor(tokenizer.convert_tokens_to_ids(tokens)).unsqueeze(0)\r\nmask = torch.zeros((1, ids.shape[1], ids.shape[1]), dtype=torch.float)\r\nmask[:, :, 0:-3] = 1.0\r\n\r\nwith torch.no_grad():\r\n out2 = model(ids, attention_mask = mask[:, 0])[0]\r\n\r\nprint('------------')\r\nfor i in range(out.shape[1]):\r\n print(i, out[0][0, i].item())\r\n\r\nprint('------------')\r\nfor i in range(out2.shape[1]):\r\n torch.manual_seed(0)\r\n print(i, out2[0][0, i].item())\r\n```\r\nhere is the output\r\n\r\n```\r\n------------\r\n0 -0.10266201943159103\r\n1 0.11214534193277359\r\n2 -0.1575649380683899\r\n3 -0.3163739740848541\r\n4 -0.4168904423713684\r\n5 -0.4069269001483917\r\n6 0.28849801421165466\r\n------------\r\n0 -0.10266169905662537\r\n1 0.1121453121304512\r\n2 -0.15756472945213318\r\n3 -0.3163738548755646\r\n4 -0.41689014434814453\r\n5 -0.40692687034606934\r\n6 0.288497656583786\r\n7 0.28312715888023376\r\n8 0.08457585424184799\r\n9 -0.3077544569969177\r\n```\r\n\r\n`[PAD]`'s are different, is that normal? \r\n\r\n**7 0.28312715888023376\r\n8 0.08457585424184799\r\n9 -0.3077544569969177**",
"I am having same problem and couldn't find a reason or fix yet.",
"Due to Position Embeddings every token results in different vectors.\r\nYou might want to google \"How the Embedding Layers in BERT Were Implemented\"",
"> Due to Position Embeddings every token results in different vectors.\r\n\r\nCould you be more specific what is the source of this numerical instability? Perhaps refer to exact code? I am still not exactly sure why output changes slightly when using attention mask, when I use differently padded inputs. There should be no self-attention over padded inputs. Self-attention scores are set to large negative number before softmax:\r\n`attention_scores = attention_scores + attention_mask`\r\nCould it be that sometimes -10_000 might not be enough to get 0 from softmax? I have recorded differences at most in the order of 2e-6.\r\n\r\nOr is it because of arithmetic errors? According to https://en.wikipedia.org/wiki/Machine_epsilon, upped bound for the relative error in 32bit format is somewhere at 1.19e-07, which is still an order away. Could that be because of the error propagation through many FP32 operations?"
] | 1,550 | 1,594 | 1,550 | NONE | null | Adding `[PAD]` symbols to an input sentence changes the output of the model. I put together a small example here:
https://gist.github.com/juditacs/8be068d5f9063ad68e3098a473b497bd
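A minimal hedged sketch of the pattern suggested in the replies (mask the `[PAD]` positions and put the model in eval mode; the tokens here are illustrative):

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()  # disables dropout, itself a source of run-to-run differences

ids = tokenizer.convert_tokens_to_ids(['[CLS]', 'hello', 'world', '[SEP]', '[PAD]', '[PAD]'])
tokens_tensor = torch.tensor([ids])
attention_mask = torch.tensor([[1, 1, 1, 1, 0, 0]])  # zero out the [PAD] positions
with torch.no_grad():
    encoded_layers, pooled_output = model(tokens_tensor, attention_mask=attention_mask)
```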
I also noticed that the seed state affects the output as well. Resetting it in every run ensures that the output is always the same. Is this because of layernorm? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/278/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/277 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/277/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/277/comments | https://api.github.com/repos/huggingface/transformers/issues/277/events | https://github.com/huggingface/transformers/issues/277 | 409,870,543 | MDU6SXNzdWU0MDk4NzA1NDM= | 277 | 80min training time to fine-tune BERT-base on the SQuAD dataset instead of 24min? | {
"login": "gqoew",
"id": 32342701,
"node_id": "MDQ6VXNlcjMyMzQyNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/32342701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gqoew",
"html_url": "https://github.com/gqoew",
"followers_url": "https://api.github.com/users/gqoew/followers",
"following_url": "https://api.github.com/users/gqoew/following{/other_user}",
"gists_url": "https://api.github.com/users/gqoew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gqoew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gqoew/subscriptions",
"organizations_url": "https://api.github.com/users/gqoew/orgs",
"repos_url": "https://api.github.com/users/gqoew/repos",
"events_url": "https://api.github.com/users/gqoew/events{/privacy}",
"received_events_url": "https://api.github.com/users/gqoew/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You should use 16bit training (`--fp16` argument). You can use the dynamic loss scaling or tune the loss scale yourself if the results are not the best.",
"@thomwolf Thanks! I enabled 16bit training and it took about 20min/epoch. Is that what you experienced?",
"Sounds good.",
"@thomwolf \r\nMay I know what is the expected EM & F1 score if users train for 2-3 epochs? I got 43 and 48 respectively.",
"You can have a look at the readme examples but it should be a lot higher, around 88-90.\r\nMaybe your batch size is too small, look at the readme for more information."
] | 1,550 | 1,561 | 1,551 | NONE | null | I just fine-tuned BERT-base on the SQuAD dataset with an AWS EC2 `p3.2xlarge` Deep Learning AMI with a single Tesla V100 16GB:
I used the config in your README:
```
export SQUAD_DIR=/path/to/SQUAD
python run_squad.py \
--bert_model bert-base-uncased \
--do_train \
--do_predict \
--do_lower_case \
--train_file $SQUAD_DIR/train-v1.1.json \
--predict_file $SQUAD_DIR/dev-v1.1.json \
--train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2.0 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
```
It took 80min. According to your README:
> This example code fine-tunes BERT on the SQuAD dataset. It runs in 24 min (with BERT-base) or 68 min (with BERT-large) on a single tesla V100 16GB.
How to explain this difference? Is there any way to accelerate the training to 24min as well? Thanks
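For reference, the `--fp16` run suggested in the replies would look like this (hedged: 16-bit training assumes NVIDIA apex is installed):

```
python run_squad.py \
  --bert_model bert-base-uncased \
  --do_train \
  --do_predict \
  --do_lower_case \
  --train_file $SQUAD_DIR/train-v1.1.json \
  --predict_file $SQUAD_DIR/dev-v1.1.json \
  --train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /tmp/debug_squad/ \
  --fp16
```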
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/277/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/276 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/276/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/276/comments | https://api.github.com/repos/huggingface/transformers/issues/276/events | https://github.com/huggingface/transformers/issues/276 | 409,861,122 | MDU6SXNzdWU0MDk4NjExMjI= | 276 | Argument do_lower_case is repeated in run_lm_finetuning.py | {
"login": "dileep1996",
"id": 19313195,
"node_id": "MDQ6VXNlcjE5MzEzMTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/19313195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dileep1996",
"html_url": "https://github.com/dileep1996",
"followers_url": "https://api.github.com/users/dileep1996/followers",
"following_url": "https://api.github.com/users/dileep1996/following{/other_user}",
"gists_url": "https://api.github.com/users/dileep1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dileep1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dileep1996/subscriptions",
"organizations_url": "https://api.github.com/users/dileep1996/orgs",
"repos_url": "https://api.github.com/users/dileep1996/repos",
"events_url": "https://api.github.com/users/dileep1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/dileep1996/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @dileep1996, this has just been fixed in master (#275)!"
] | 1,550 | 1,550 | 1,550 | NONE | null | Hi, I am trying to fine-tune the LM and am facing the following issue.
**argparse.ArgumentError: argument --do_lower_case: conflicting option string: --do_lower_case** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/276/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/275 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/275/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/275/comments | https://api.github.com/repos/huggingface/transformers/issues/275/events | https://github.com/huggingface/transformers/pull/275 | 409,832,484 | MDExOlB1bGxSZXF1ZXN0MjUyNzE1NDQ3 | 275 | --do_lower_case is duplicated in parser args | {
"login": "davidefiocco",
"id": 4547987,
"node_id": "MDQ6VXNlcjQ1NDc5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidefiocco",
"html_url": "https://github.com/davidefiocco",
"followers_url": "https://api.github.com/users/davidefiocco/followers",
"following_url": "https://api.github.com/users/davidefiocco/following{/other_user}",
"gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions",
"organizations_url": "https://api.github.com/users/davidefiocco/orgs",
"repos_url": "https://api.github.com/users/davidefiocco/repos",
"events_url": "https://api.github.com/users/davidefiocco/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidefiocco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @davidefiocco!"
] | 1,550 | 1,550 | 1,550 | CONTRIBUTOR | null | I'm therefore deleting one repetition (please review!) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/275/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/275",
"html_url": "https://github.com/huggingface/transformers/pull/275",
"diff_url": "https://github.com/huggingface/transformers/pull/275.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/275.patch",
"merged_at": 1550071942000
} |
https://api.github.com/repos/huggingface/transformers/issues/274 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/274/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/274/comments | https://api.github.com/repos/huggingface/transformers/issues/274/events | https://github.com/huggingface/transformers/issues/274 | 409,715,950 | MDU6SXNzdWU0MDk3MTU5NTA= | 274 | Help: how to get index/symbol from last_hidden, on text8? | {
"login": "moonblue333",
"id": 3936639,
"node_id": "MDQ6VXNlcjM5MzY2Mzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3936639?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moonblue333",
"html_url": "https://github.com/moonblue333",
"followers_url": "https://api.github.com/users/moonblue333/followers",
"following_url": "https://api.github.com/users/moonblue333/following{/other_user}",
"gists_url": "https://api.github.com/users/moonblue333/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moonblue333/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moonblue333/subscriptions",
"organizations_url": "https://api.github.com/users/moonblue333/orgs",
"repos_url": "https://api.github.com/users/moonblue333/repos",
"events_url": "https://api.github.com/users/moonblue333/events{/privacy}",
"received_events_url": "https://api.github.com/users/moonblue333/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\nThere is no pretrained character-level model for text8 right now.\r\nOnly a word-level model trained on wikitext 103."
] | 1,550 | 1,551 | 1,551 | NONE | null | I am trying this on the text8 dataset and want to print the next token. In the source code, the model's forward() outputs the loss, but I want to get the logits and the softmax result, and finally the next token in the vocab.
How do I get the index/symbol of the next token from last_hidden, on text8?
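A hedged sketch of one way to do this (note the reply above: there is no pretrained character-level text8 model, so this uses the wikitext-103 checkpoint; `TransfoXLLMHeadModel` is assumed to return token log-probabilities when no target is given):

```python
import torch
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLLMHeadModel

tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
model.eval()

ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Who was Jim Henson ?"))
tokens_tensor = torch.tensor([ids])
with torch.no_grad():
    log_probs, mems = model(tokens_tensor)  # log-probabilities over the vocabulary
next_id = torch.argmax(log_probs[0, -1]).item()  # most likely next token
print(tokenizer.convert_ids_to_tokens([next_id]))
```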
Thanks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/274/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/273 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/273/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/273/comments | https://api.github.com/repos/huggingface/transformers/issues/273/events | https://github.com/huggingface/transformers/pull/273 | 409,701,518 | MDExOlB1bGxSZXF1ZXN0MjUyNjE0NzI4 | 273 | Update to fifth release | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,550 | 1,566 | 1,550 | MEMBER | null | Mostly a bug fix update for loading the `TransfoXLModel` from s3:
- this fixes a bug in the loading of the pretrained `TransfoXLModel` from the s3 dump (which is a converted `TransfoXLLMHeadModel`) where the weights were not loaded.
- I also added a fallback of `OpenAIGPTTokenizer` on BERT's `BasicTokenizer` when SpaCy and ftfy are not installed. Using BERT's `BasicTokenizer` instead of SpaCy should be fine in most cases as long as you have a relatively clean input (SpaCy+ftfy were included to exactly reproduce the paper's pre-processing steps on the Toronto Book Corpus) and this also let us use the `never_split` option to avoid splitting special tokens like `[CLS], [SEP]...` which is easier than adding the tokens after tokenization.
- I also updated the README on the tokenizers options and methods which was lagging behind a bit. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/273/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/273",
"html_url": "https://github.com/huggingface/transformers/pull/273",
"diff_url": "https://github.com/huggingface/transformers/pull/273.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/273.patch",
"merged_at": 1550049549000
} |
https://api.github.com/repos/huggingface/transformers/issues/272 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/272/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/272/comments | https://api.github.com/repos/huggingface/transformers/issues/272/events | https://github.com/huggingface/transformers/issues/272 | 409,598,865 | MDU6SXNzdWU0MDk1OTg4NjU= | 272 | Facing issue in Run Fine tune LM | {
"login": "tuhinjubcse",
"id": 3104771,
"node_id": "MDQ6VXNlcjMxMDQ3NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3104771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuhinjubcse",
"html_url": "https://github.com/tuhinjubcse",
"followers_url": "https://api.github.com/users/tuhinjubcse/followers",
"following_url": "https://api.github.com/users/tuhinjubcse/following{/other_user}",
"gists_url": "https://api.github.com/users/tuhinjubcse/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuhinjubcse/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuhinjubcse/subscriptions",
"organizations_url": "https://api.github.com/users/tuhinjubcse/orgs",
"repos_url": "https://api.github.com/users/tuhinjubcse/repos",
"events_url": "https://api.github.com/users/tuhinjubcse/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuhinjubcse/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, you need documents with multiple lines because only sentences from the same doc are used as positive examples for the nextSentence prediction. ",
"Seems like the expected behavior. Feel free to open a PR to extend the example if you want @tuhinjubcse."
] | 1,550 | 1,551 | 1,551 | NONE | null | So my LM sample.txt is such that each doc has only one line
So in `BERTDataSet`, `__len__` is returning a negative value.
I tried changing it to `self.num_docs - 1`:

```python
def __len__(self):
    print(self.corpus_lines, self.num_docs)
    return self.corpus_lines - self.num_docs - 1
```
I am also getting errors at multiple steps. Is the code written with the assumption that each document will have multiple lines in it?
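For reference, a corpus layout that would satisfy that assumption might look like this (hypothetical `sample.txt`: several sentences per document, one per line, documents separated by a blank line):

```
This is the first sentence of document one.
Here is a second sentence in the same document.

Document two starts after the blank line.
It also contains more than one line.
```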
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/272/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/271 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/271/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/271/comments | https://api.github.com/repos/huggingface/transformers/issues/271/events | https://github.com/huggingface/transformers/issues/271 | 409,585,974 | MDU6SXNzdWU0MDk1ODU5NzQ= | 271 | Transformer-XL: wrong encoding in the vocab | {
"login": "akhti",
"id": 7470747,
"node_id": "MDQ6VXNlcjc0NzA3NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7470747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akhti",
"html_url": "https://github.com/akhti",
"followers_url": "https://api.github.com/users/akhti/followers",
"following_url": "https://api.github.com/users/akhti/following{/other_user}",
"gists_url": "https://api.github.com/users/akhti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akhti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akhti/subscriptions",
"organizations_url": "https://api.github.com/users/akhti/orgs",
"repos_url": "https://api.github.com/users/akhti/repos",
"events_url": "https://api.github.com/users/akhti/events{/privacy}",
"received_events_url": "https://api.github.com/users/akhti/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Yeah, the re-encoding seems to fix the bug:\r\n```\r\nIn [6]: \"'EnquΓΒͺtes\".encode('latin1').decode('utf8')\r\nOut[6]: \"'EnquΓͺtes\"\r\n```",
"which version of python are you using?",
"It's python3.6. Does the snippet above gives different result on other version?\r\n\r\nJFYI: I'm using the script below to create a fixed vocab and save in the current folder.\r\n```(python)\r\nimport collections\r\nimport urllib.request\r\n\r\nfrom pytorch_pretrained_bert import tokenization_transfo_xl\r\nimport torch\r\n\r\n\r\ndef fix(x):\r\n return x.encode('latin1').decode('utf8')\r\n\r\n\r\ndef main():\r\n basedir = '.'\r\n good_vocab_path = basedir + '/' + tokenization_transfo_xl.VOCAB_NAME\r\n vocab_url = tokenization_transfo_xl.PRETRAINED_VOCAB_ARCHIVE_MAP['transfo-xl-wt103']\r\n urllib.request.urlretrieve(vocab_url, basedir + '/vocab.buggy.bin')\r\n vocab = torch.load(basedir + '/vocab.buggy.bin')\r\n\r\n vocab['counter'] = collections.Counter({fix(k): v for k, v in vocab['counter'].items()})\r\n vocab['sym2idx'] = {fix(k): v for k, v in vocab['sym2idx'].items()}\r\n vocab['idx2sym'] = [fix(k) for k in vocab['idx2sym']]\r\n torch.save(vocab, good_vocab_path)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"Maybe indeed. Do you want to submit a PR to fix this?",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,550 | 1,557 | 1,557 | NONE | null | Seems that something odd happened during vocab serialization as many symbols with non-latin symbols are broken.
E.g.:
```
In [1]: import pytorch_pretrained_bert
In [2]: tokenizer = pytorch_pretrained_bert.TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
In [4]: print(tokenizer.idx2sym[224178])
'EnquÃªtes
```
The correct token should be "'Enquêtes". And there are around 10k tokens like this.
Could it be 'encoding="latin1"' here?
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_transfo_xl_checkpoint_to_pytorch.py#L54 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/271/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/270 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/270/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/270/comments | https://api.github.com/repos/huggingface/transformers/issues/270/events | https://github.com/huggingface/transformers/issues/270 | 409,516,530 | MDU6SXNzdWU0MDk1MTY1MzA= | 270 | Transformer-XL: hidden states are nan | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This is only Transformer-XL related. For GPT the output is:\r\n\r\n```python\r\ntensor([[[ 0.1963, 0.0367, -0.2051, ..., 0.7062, -0.2786, 0.1352],\r\n [-0.4705, 0.1581, 0.0452, ..., 0.7809, -0.2519, 0.4257],\r\n [-0.2602, -0.7126, -0.7966, ..., 0.6364, -0.1560, -0.6084],\r\n ...,\r\n [-0.3665, 1.2743, -2.4027, ..., -1.7271, -1.7892, 0.7689],\r\n [-1.3017, 2.7999, -2.8868, ..., -1.3412, 0.2787, -0.0605],\r\n [ 0.2648, 0.3508, 0.2894, ..., -0.7471, 0.1855, -0.0492]]])\r\n```",
"Same situation, on GPU.\r\n\r\n",
"Indeed, there was a bug in the loading of the `TransfoXLModel` from the S3 dump (which is a converted `TransfoXLLMHeadModel`) so the weights were not loaded.\r\n\r\nYou can see that the weights are not loaded if you activate the logger before loading the model:\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```\r\n\r\nI've fixed it in release 0.5.1.\r\nI've also fixed another issue you (@stefan-it) mentions in https://github.com/zalandoresearch/flair/issues/68 which is the dependency of `OpenAIGPTTokenizer` on SpaCy and ftfy by adding a fallback on BERT's `BasicTokenizer` (should be fine for normal usage, SpaCy+ftfy were included to exactly reproduce the paper's pre-processing steps).",
"Publishing 0.5.1 as soon as all the tests are checked.",
"Ok 0.5.1 is published: https://github.com/huggingface/pytorch-pretrained-BERT/releases/tag/v0.5.1"
] | 1,550 | 1,550 | 1,550 | COLLABORATOR | null | Hi,
I followed the code in the Transformer-XL section:
```python
import torch
from pytorch_pretrained_bert import TransfoXLTokenizer, TransfoXLModel, TransfoXLLMHeadModel
# Load pre-trained model tokenizer (vocabulary from wikitext 103)
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
# Tokenized input
text_1 = "Who was Jim Henson ?"
text_2 = "Jim Henson was a puppeteer"
tokenized_text_1 = tokenizer.tokenize(text_1)
tokenized_text_2 = tokenizer.tokenize(text_2)
# Convert token to vocabulary indices
indexed_tokens_1 = tokenizer.convert_tokens_to_ids(tokenized_text_1)
indexed_tokens_2 = tokenizer.convert_tokens_to_ids(tokenized_text_2)
# Convert inputs to PyTorch tensors
tokens_tensor_1 = torch.tensor([indexed_tokens_1])
tokens_tensor_2 = torch.tensor([indexed_tokens_2])
model = TransfoXLModel.from_pretrained('transfo-xl-wt103')
model.eval()
# If you have a GPU, put everything on cuda
tokens_tensor_1 = tokens_tensor_1
tokens_tensor_2 = tokens_tensor_2
with torch.no_grad():
# Predict hidden states features for each layer
hidden_states_1, mems_1 = model(tokens_tensor_1)
# We can re-use the memory cells in a subsequent call to attend a longer context
hidden_states_2, mems_2 = model(tokens_tensor_2, mems=mems_1)
print(hidden_states_1)
print(hidden_states_2)
```
(One modification: I'm running this on CPU). The hidden states of both sentences are:
```bash
tensor([[[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan],
[nan, nan, nan, ..., nan, nan, nan]]])
```
Is this expected? I wanted to get the embeddings of the two sentences.
Tested with PyTorch 1.0.1 and Python 3.7. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/270/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/269 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/269/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/269/comments | https://api.github.com/repos/huggingface/transformers/issues/269/events | https://github.com/huggingface/transformers/issues/269 | 409,385,626 | MDU6SXNzdWU0MDkzODU2MjY= | 269 | Get hidden states from all layers of Transformer-XL? | {
"login": "hugochan",
"id": 5065261,
"node_id": "MDQ6VXNlcjUwNjUyNjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/5065261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugochan",
"html_url": "https://github.com/hugochan",
"followers_url": "https://api.github.com/users/hugochan/followers",
"following_url": "https://api.github.com/users/hugochan/following{/other_user}",
"gists_url": "https://api.github.com/users/hugochan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugochan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugochan/subscriptions",
"organizations_url": "https://api.github.com/users/hugochan/orgs",
"repos_url": "https://api.github.com/users/hugochan/repos",
"events_url": "https://api.github.com/users/hugochan/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugochan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @hugochan, actually that what's in the `mems` of the Transformer-XL are (maybe you can read again the paper).\r\n\r\nOne thing to be careful about is that the `mems` have transposed first dimensions and are longer (see the readme). Here is how to extract the hidden states from the model output:\r\n```python\r\nhidden_states, mems = model(tokens_tensor)\r\nseq_length = hidden_states.size(1)\r\nlower_hidden_states = list(t[-seq_length:, ...].transpose(0, 1) for t in mems)\r\nall_hidden_states = lower_hidden_states + [hidden_states]\r\n```",
"Hi @thomwolf , thank you for your answer! Just one quick question. It seems that `mems` already contains a list of num_layer hidden states, what is the difference between `lower_hidden_states[-1]` and `hidden_states` in your code? Thank you!",
"Actually `mems` contains all the hidden states PLUS the output of the embeddings (`lower_hidden_states[0]`) so `lower_hidden_states[-1]` is the output of the hidden state of the layer below the last layer and `hidden_states` is the output of the last layer (before the softmax).\r\n\r\nI will add a note on that in the readme."
] | 1,549 | 1,550 | 1,550 | NONE | null | Hi,
Thank you for supporting the pretrained Transformer-XL model! I was wondering if it makes sense to get hidden states from all layers of Transformer-XL as the output, just as can be done for BERT. It seems this is not supported currently. Practically, I found this strategy worked well for BERT and gave better results. Not sure if it is a good idea for Transformer-XL. Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/269/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/268 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/268/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/268/comments | https://api.github.com/repos/huggingface/transformers/issues/268/events | https://github.com/huggingface/transformers/pull/268 | 409,265,270 | MDExOlB1bGxSZXF1ZXN0MjUyMjg5MzEy | 268 | fixed a minor bug in README.md | {
"login": "niuliang42",
"id": 1748165,
"node_id": "MDQ6VXNlcjE3NDgxNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1748165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/niuliang42",
"html_url": "https://github.com/niuliang42",
"followers_url": "https://api.github.com/users/niuliang42/followers",
"following_url": "https://api.github.com/users/niuliang42/following{/other_user}",
"gists_url": "https://api.github.com/users/niuliang42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/niuliang42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/niuliang42/subscriptions",
"organizations_url": "https://api.github.com/users/niuliang42/orgs",
"repos_url": "https://api.github.com/users/niuliang42/repos",
"events_url": "https://api.github.com/users/niuliang42/events{/privacy}",
"received_events_url": "https://api.github.com/users/niuliang42/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @wangxiaodiu "
] | 1,549 | 1,550 | 1,550 | CONTRIBUTOR | null | Assertion failed if one followed the instructions in README.md->Usage->BERT.
https://github.com/huggingface/pytorch-pretrained-BERT/issues/266#issuecomment-462730151 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/268/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/268",
"html_url": "https://github.com/huggingface/transformers/pull/268",
"diff_url": "https://github.com/huggingface/transformers/pull/268.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/268.patch",
"merged_at": 1550049566000
} |
https://api.github.com/repos/huggingface/transformers/issues/267 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/267/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/267/comments | https://api.github.com/repos/huggingface/transformers/issues/267/events | https://github.com/huggingface/transformers/issues/267 | 409,194,189 | MDU6SXNzdWU0MDkxOTQxODk= | 267 | Missing files for Transformer-XL examples | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Oh yes, that was a typo, there is only one example for Transformer-XL and it's the `run_transfo_xl.py` file detailed [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/README.md#openai-gpt-and-transformer-xl-running-the-examples). Read the API details in the readme for more information on the input/outputs of the two Transformer-XL models.\r\nI've updated the readme, thanks."
] | 1,549 | 1,549 | 1,549 | COLLABORATOR | null | Hi,
thanks so much for the new *0.5.0* release. I wanted to train a `TransfoXLModel` model, as described in the `README` [here](https://github.com/huggingface/pytorch-pretrained-BERT/blame/master/README.md#L132).
Unfortunately, the files `transfo_xl_train.py` and `transfo_xl_eval.py` are not located in the `examples` directory.
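The reply above points to `run_transfo_xl.py` as the shipped example; a hedged invocation (the `--work_dir` flag is assumed from that release's README):

```
python run_transfo_xl.py --work_dir ../log
```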
Could you please add them to the repository? Thanks :heart: | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/267/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/266 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/266/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/266/comments | https://api.github.com/repos/huggingface/transformers/issues/266/events | https://github.com/huggingface/transformers/issues/266 | 409,141,453 | MDU6SXNzdWU0MDkxNDE0NTM= | 266 | Tokenization Incorrect | {
"login": "dhirajmadan1",
"id": 4920075,
"node_id": "MDQ6VXNlcjQ5MjAwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4920075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhirajmadan1",
"html_url": "https://github.com/dhirajmadan1",
"followers_url": "https://api.github.com/users/dhirajmadan1/followers",
"following_url": "https://api.github.com/users/dhirajmadan1/following{/other_user}",
"gists_url": "https://api.github.com/users/dhirajmadan1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhirajmadan1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhirajmadan1/subscriptions",
"organizations_url": "https://api.github.com/users/dhirajmadan1/orgs",
"repos_url": "https://api.github.com/users/dhirajmadan1/repos",
"events_url": "https://api.github.com/users/dhirajmadan1/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhirajmadan1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Though we are not facing the same issueβ¦β¦\r\n\r\nI followed the instruction from the readme, the `tokenized_text` is expected by assertion to be:\r\n `['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']`. \r\n\r\nHowever, the actual `tokenized_text` is:\r\n`['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[MASK]', 'jim', 'henson', 'was', 'a', 'puppet', '##eer', '[SEP]']`.\r\n\r\nI believe this is because of `masked_index = 6` in the example code from the readme. If you let it to `masked_index = 8`, everything is perfect.\r\n\r\nI opened a PR to fix this minor bug.",
"I think the tokenizer issue has been resolved in the latest version (0.5.0). ",
"Yes!",
"I just encountered the same issue as @dhirajmadan1 with `transformers==2.2.1`. Is this expected somehow?\r\n\r\nI am following the quickstart guide: https://huggingface.co/transformers/quickstart.html\r\n\r\n```\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n# Run an example text through this:\r\ntext = \"[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]\"\r\ntokenized_text = tokenizer.tokenize(text)\r\n\r\nmasked_index = 8\r\ntokenized_text[masked_index] = '[MASK]'\r\npredicted_tokenized_sentence = ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']\r\n \r\n```",
"What is your issue @yenicelik? Do you mind opening a new issue with your problem?",
"ah, my apologies: https://github.com/huggingface/transformers/issues/2047\r\n\r\napparently a PR is on the way!"
] | 1,549 | 1,575 | 1,550 | NONE | null | The tokenizer is not working correctly for me; e.g. [CLS] is getting broken into '[', 'cl', '##s', ']':
```
In [1]: import torch
   ...: from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM
   ...:
   ...: # Load pre-trained model tokenizer (vocabulary)
   ...: tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.

In [2]: text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
   ...: tokenized_text = tokenizer.tokenize(text)

In [3]: tokenized_text
Out[3]:
['[',
 'cl',
 '##s',
 ']',
 'who',
 'was',
 'jim',
 'henson',
 '?',
 '[',
 'sep',
 ']',
 'jim',
 'henson',
 'was',
 'a',
 'puppet',
 '##eer',
 '[',
 'sep',
 ']']
```
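As noted in the replies above, release 0.5.0 keeps special tokens intact (the 0.5.0 release notes elsewhere in this log mention the `never_split` option for exactly this). A hedged sketch of the expected behavior after upgrading:

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# Assumption: in 0.5.0+ the special tokens are in the tokenizer's never_split
# set, so they survive tokenization as single tokens.
print(tokenizer.tokenize("[CLS] Who was Jim Henson ? [SEP]"))
# expected: ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]']
```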
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/266/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/265 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/265/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/265/comments | https://api.github.com/repos/huggingface/transformers/issues/265/events | https://github.com/huggingface/transformers/issues/265 | 408,771,087 | MDU6SXNzdWU0MDg3NzEwODc= | 265 | Variance Sources | {
"login": "carolinlawrence",
"id": 5450626,
"node_id": "MDQ6VXNlcjU0NTA2MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5450626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carolinlawrence",
"html_url": "https://github.com/carolinlawrence",
"followers_url": "https://api.github.com/users/carolinlawrence/followers",
"following_url": "https://api.github.com/users/carolinlawrence/following{/other_user}",
"gists_url": "https://api.github.com/users/carolinlawrence/gists{/gist_id}",
"starred_url": "https://api.github.com/users/carolinlawrence/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carolinlawrence/subscriptions",
"organizations_url": "https://api.github.com/users/carolinlawrence/orgs",
"repos_url": "https://api.github.com/users/carolinlawrence/repos",
"events_url": "https://api.github.com/users/carolinlawrence/events{/privacy}",
"received_events_url": "https://api.github.com/users/carolinlawrence/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Carolin,\r\n\r\nDepending on the model you are using, not all the weights are initialized from the pre-trained models. Check the details in the [overview](https://github.com/huggingface/pytorch-pretrained-BERT#overview) section of the readme to see if it's the case for you.\r\n\r\nApart from weights initialization and dataset shuffling other typical source of variances are the dropout layers.\r\n\r\nBert fine-tuning has been reported to be a high-variance process indeed, in particular on small datasets.",
"Hi Thomas,\r\n\r\nthanks for the quick reply!\r\n\r\nI'm using `BertForMaskedLM`, so the weights should be set. But yes, I didn't think of dropout, thanks for pointing that out!\r\n"
] | 1,549 | 1,549 | 1,549 | NONE | null | Hi,
when I change the `--seed` argument, I get a high variance between different runs on my dataset. So I was wondering where the sources of variance might come from. I see that the seed is set (e.g. in `run_squad.py`) via:
```python
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
```
But how can I find out where randomization is actually used?
I found the `RandomSampler` and replaced it with a `SequentialSampler`, but the variance remains high.
I know that `modeling.py` randomly initializes the weights, but these are overwritten by the fixed weights when loading a pre-trained BERT model, e.g. `bert-base-uncased`, correct?
Can anyone point me in any other direction where my source of variance might come from?
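A hedged checklist that may help isolate the remaining sources (dropout is flagged in the replies above; the cuDNN switches are general PyTorch practice, and `model` stands for whichever BERT module is being run):

```python
import torch

torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable autotuned, nondeterministic kernel selection
model.eval()                               # dropout layers are a major variance source outside eval mode
```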
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/265/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/264 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/264/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/264/comments | https://api.github.com/repos/huggingface/transformers/issues/264/events | https://github.com/huggingface/transformers/issues/264 | 408,729,916 | MDU6SXNzdWU0MDg3Mjk5MTY= | 264 | RuntimeError: cuda runtime error while running run_classifier.py with 'bert-large-uncased' bert model | {
"login": "laibamehnaz",
"id": 36405283,
"node_id": "MDQ6VXNlcjM2NDA1Mjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/36405283?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laibamehnaz",
"html_url": "https://github.com/laibamehnaz",
"followers_url": "https://api.github.com/users/laibamehnaz/followers",
"following_url": "https://api.github.com/users/laibamehnaz/following{/other_user}",
"gists_url": "https://api.github.com/users/laibamehnaz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laibamehnaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laibamehnaz/subscriptions",
"organizations_url": "https://api.github.com/users/laibamehnaz/orgs",
"repos_url": "https://api.github.com/users/laibamehnaz/repos",
"events_url": "https://api.github.com/users/laibamehnaz/events{/privacy}",
"received_events_url": "https://api.github.com/users/laibamehnaz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"reduce batch size?",
"It is 32 as of now . What do you think I should reduce it to ?",
"Start very low and increase while looking at `nvidia-smi` or a similar GPU memory visualization tool.",
"Closing this for now, feel free to re-open if you have other issues."
] | 1,549 | 1,551 | 1,551 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/264/timeline | completed | null | null |
|
https://api.github.com/repos/huggingface/transformers/issues/263 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/263/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/263/comments | https://api.github.com/repos/huggingface/transformers/issues/263/events | https://github.com/huggingface/transformers/issues/263 | 408,343,730 | MDU6SXNzdWU0MDgzNDM3MzA= | 263 | potential bug in extract_features.py | {
"login": "jayleicn",
"id": 15768405,
"node_id": "MDQ6VXNlcjE1NzY4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/15768405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayleicn",
"html_url": "https://github.com/jayleicn",
"followers_url": "https://api.github.com/users/jayleicn/followers",
"following_url": "https://api.github.com/users/jayleicn/following{/other_user}",
"gists_url": "https://api.github.com/users/jayleicn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayleicn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayleicn/subscriptions",
"organizations_url": "https://api.github.com/users/jayleicn/orgs",
"repos_url": "https://api.github.com/users/jayleicn/repos",
"events_url": "https://api.github.com/users/jayleicn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayleicn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Jie,\r\n`extract_feature.py` is an example script. If you want to adapt it for sentences-pair, we would be happy to welcome a PR :)"
] | 1,549 | 1,549 | 1,549 | NONE | null | Hi,
`token_type_ids` is not set for this line:
`all_encoder_layers, _ = model(input_ids, token_type_ids=None, attention_mask=input_mask)`
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/extract_features.py#L267. This does not affect single-sequence feature extraction, but for a sequence pair the model will process the pair as a single segment and add the `A` embedding to both sentences, whereas it should add the `A` and `B` embeddings respectively. Seems like a bug.
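A minimal sketch of the change I have in mind (`segment_ids` is a placeholder name; the exact tensor depends on how the features are collected in the script):

```python
# sketch only: pass the segment ids so tokens of the second sentence
# get the B embedding instead of A
all_encoder_layers, _ = model(input_ids,
                              token_type_ids=segment_ids,
                              attention_mask=input_mask)
```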
Best,
Jie | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/263/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/262 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/262/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/262/comments | https://api.github.com/repos/huggingface/transformers/issues/262/events | https://github.com/huggingface/transformers/issues/262 | 407,849,628 | MDU6SXNzdWU0MDc4NDk2Mjg= | 262 | speed becomes slow | {
"login": "taesikna",
"id": 25067105,
"node_id": "MDQ6VXNlcjI1MDY3MTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/25067105?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/taesikna",
"html_url": "https://github.com/taesikna",
"followers_url": "https://api.github.com/users/taesikna/followers",
"following_url": "https://api.github.com/users/taesikna/following{/other_user}",
"gists_url": "https://api.github.com/users/taesikna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/taesikna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/taesikna/subscriptions",
"organizations_url": "https://api.github.com/users/taesikna/orgs",
"repos_url": "https://api.github.com/users/taesikna/repos",
"events_url": "https://api.github.com/users/taesikna/events{/privacy}",
"received_events_url": "https://api.github.com/users/taesikna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1108649070,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Need%20more%20information",
"name": "Need more information",
"color": "d876e3",
"default": false,
"description": "Further information is requested"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"I know this is closed, but I'm running into a similar issue. For the first ~100 batches, the script runs with an OK speed (1s/it, batch size 64, 512 tokens, 8x GTX 1080Ti) (specific for `DistilBertForSequenceClassification` in my case).\r\nAfter that, the speed drops significantly, to about 10s/it, with the GPUs mostly sitting idle. (0% on `gpustat`.\r\n\r\nAny idea on what could be causing this?",
"Update: Looks like I was accumulating the gradient for too long.",
"> Update: Looks like I was accumulating the gradient for too long.\r\n\r\n@ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem?\r\nAnd how did you understand that the problem was really in this?",
"I have the same problem. I noticed that the training speed slows down as GPU temperature goes up... When the temperature goes down (if I wait after terminating the process), the speed becomes okay again.\r\n\r\nThis issue happens only when I use `Trainer`. When I don't use it (i.e. use PyTorch utilities directly), the training speed is stable and the temperature doesn't go up.\r\n\r\n@taesikna Did you fix your issue?",
"> > Update: Looks like I was accumulating the gradient for too long.\r\n> \r\n> @ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem?\r\n> And how did you understand that the problem was really in this?\r\n\r\nI think the initial setting was 5 or something. I dropped to 1 and it was fine then. ",
"> > > Update: Looks like I was accumulating the gradient for too long.\n> > \n> > @ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem?\n> > And how did you understand that the problem was really in this?\n> \n> I think the initial setting was 5 or something. I dropped to 1 and it was fine then. \n\nAs I see now the default value is 1, but I still observe the slow down. @ArthurCamara Do you have any idea on that?",
"> > > > Update: Looks like I was accumulating the gradient for too long.\r\n> > > \r\n> > > \r\n> > > @ArthurCamara So, how did you fix the problem in your case? Did you change the `gradient_accumulation_steps` parameter of `Trainer`? What were the initial value and the value which helped to resolve the problem?\r\n> > > And how did you understand that the problem was really in this?\r\n> > \r\n> > \r\n> > I think the initial setting was 5 or something. I dropped to 1 and it was fine then.\r\n> \r\n> As I see now the default value is 1, but I still observe the slow down. @ArthurCamara Do you have any idea on that?\r\n\r\nI was not using the Trainer, but my own training loop. dropping the accumulation steps to 1 helped because it was overwhelming the GPUs memory and that makes the GPUs sit idly. If the GPUs on `nvidia-smi` are idle, but their memory is full, it's probably something related to memory usage. Otherwise, no idea. ",
"\r\n\r\n> facing this same issue too, on 2080. will not use Trainer.\r\n\r\n```\r\nen ignored: tokens, ner_tags, id.\r\n[INFO|trainer.py:1156] 2021-06-05 16:46:31,375 >> ***** Running training *****\r\n[INFO|trainer.py:1157] 2021-06-05 16:46:31,386 >> Num examples = 2021\r\n[INFO|trainer.py:1158] 2021-06-05 16:46:31,397 >> Num Epochs = 1\r\n[INFO|trainer.py:1159] 2021-06-05 16:46:31,407 >> Instantaneous batch size per device = 10\r\n[INFO|trainer.py:1160] 2021-06-05 16:46:31,418 >> Total train batch size (w. parallel, distributed & accumulation) = 10\r\n[INFO|trainer.py:1161] 2021-06-05 16:46:31,428 >> Gradient Accumulation steps = 1\r\n[INFO|trainer.py:1162] 2021-06-05 16:46:31,439 >> Total optimization steps = 203\r\n 1%|ββ | 2/203 [00:12<16:46, 5.01s/it\r\n 1%|ββ | 3/203 [00:20<19:53, 5.97s/it\r\n 2%|βββ | 4/203 [00:37<30:55, 9.33s/i\r\n 2%|ββββ | 5/203 [00:50<34:44, 10.53s/\r\n 3%|βββββ | 6/203 [01:01<34:07, 10.39s\r\n 3%|βββββ | 7/203 [01:02<25:16, 7.74s\r\n 4%|ββββββ | 8/203 [01:16<31:19, 9.64\r\n 4%|βββββββ | 9/203 [01:30<34:47, 10.7\r\n 5%|βββββββ | 10/203 [01:43<37:30, 11.6\r\n 5%|ββββββββ | 11/203 [01:56<38:22, 11.\r\n 6%|βββββββββ | 12/203 [02:09<38:37, 12\r\n 6%|βββββββββ | 13/203 [02:23<40:23, 12\r\n 7%|ββββββββββ | 14/203 [02:33<37:51, 1\r\n 7%|βββββββββββ | 15/203 [02:44<36:31,\r\n 8%|βββββββββββ | 16/203 [02:55<35:39,\r\n 8%|ββββββββββββ | 17/203 [03:07<36:25,\r\n 9%|βββββββββββββ | 18/203 [03:20<37:19\r\n 9%|βββββββββββββ | 19/203 [03:34<38:25\r\n 10%|ββββββββββββββ | 20/203 [03:47<38:4\r\n 10%|βββββββββββββββ | 21/203 [04:00<38:\r\n 11%|ββββββββββββββββ | 22/203 [04:13<39\r\n 11%|ββββββββββββββββ | 23/203 [04:27<39\r\n:42, 13.24s/it]\r\n```"
] | 1,549 | 1,622 | 1,557 | NONE | null | Hi,
I'm trying to fine-tune the bert-base-uncased model for SQuAD v1.1 on Microsoft Azure,
and the training speed keeps dropping as training continues.
[logs from first epoch]
Iteration: 3%|β | 452/14774 [01:46<57:06, 4.18it/s][A
Iteration: 3%|β | 453/14774 [01:46<57:07, 4.18it/s][A
Iteration: 3%|β | 454/14774 [01:47<57:02, 4.18it/s][A
Iteration: 3%|β | 455/14774 [01:47<57:14, 4.17it/s][A
Iteration: 3%|β | 456/14774 [01:47<57:12, 4.17it/s][A
Iteration: 3%|β | 457/14774 [01:47<57:23, 4.16it/s][A
Iteration: 3%|β | 458/14774 [01:48<57:26, 4.15it/s][A
[logs from 2nd epoch]
Iteration: 29%|βββ | 4313/14774 [3:51:45<10:33:14, 3.63s/it][A
Iteration: 29%|βββ | 4314/14774 [3:51:49<10:31:50, 3.62s/it][A
Iteration: 29%|βββ | 4315/14774 [3:51:52<10:31:40, 3.62s/it][A
Iteration: 29%|βββ | 4316/14774 [3:51:56<10:28:11, 3.60s/it][A
Iteration: 29%|βββ | 4317/14774 [3:51:59<10:29:19, 3.61s/it][A
Iteration: 29%|βββ | 4318/14774 [3:52:03<10:27:00, 3.60s/it][A
I saw that you were also using Microsoft Azure, and I wonder if you could help me figure out what is wrong with my setup.
[Azure Cluster configuration]
VM size : STANDARD_NC6S_V3 (single Tesla V100)
Operating system : Canonical UbuntuServer 16.04-LTS (latest)
Auto scale : true
Target number of nodes : 1 (Min: 0, Max: 50)
[json file used to submit the job]
"containerSettings": {
"imageSourceRegistry": {
"image": "pytorch/pytorch:latest"
}
},
"jobPreparation": {
"commandLine": "conda install python==3.7 && pip install requests boto3 tqdm"
},
I have used the same settings as in the repo, except train_batch_size=6.
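In case it helps with diagnosis, this is the kind of check I can run on the node while training (a sketch; it assumes `nvidia-smi` is available inside the container):

```python
import subprocess

def gpu_stats():
    # query utilization, temperature and SM clock to spot throttling
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,temperature.gpu,clocks.sm",
         "--format=csv,noheader"])
    return out.decode().strip()

print(gpu_stats())  # e.g. "98 %, 83, 1380 MHz"
```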
Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/262/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/262/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/261 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/261/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/261/comments | https://api.github.com/repos/huggingface/transformers/issues/261/events | https://github.com/huggingface/transformers/pull/261 | 407,599,866 | MDExOlB1bGxSZXF1ZXN0MjUxMDM3Mjc3 | 261 | removing unused argument eval_batch_size from LM finetuning #256 | {
"login": "tholor",
"id": 1563902,
"node_id": "MDQ6VXNlcjE1NjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tholor",
"html_url": "https://github.com/tholor",
"followers_url": "https://api.github.com/users/tholor/followers",
"following_url": "https://api.github.com/users/tholor/following{/other_user}",
"gists_url": "https://api.github.com/users/tholor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tholor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tholor/subscriptions",
"organizations_url": "https://api.github.com/users/tholor/orgs",
"repos_url": "https://api.github.com/users/tholor/repos",
"events_url": "https://api.github.com/users/tholor/events{/privacy}",
"received_events_url": "https://api.github.com/users/tholor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice!"
] | 1,549 | 1,549 | 1,549 | CONTRIBUTOR | null | Removing unused eval_batch_size argument for simplification. As requested in #256. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/261/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/261",
"html_url": "https://github.com/huggingface/transformers/pull/261",
"diff_url": "https://github.com/huggingface/transformers/pull/261.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/261.patch",
"merged_at": 1549618564000
} |
https://api.github.com/repos/huggingface/transformers/issues/260 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/260/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/260/comments | https://api.github.com/repos/huggingface/transformers/issues/260/events | https://github.com/huggingface/transformers/issues/260 | 407,499,189 | MDU6SXNzdWU0MDc0OTkxODk= | 260 | pretrained model(s) in onnx format | {
"login": "WilliamTambellini",
"id": 109458,
"node_id": "MDQ6VXNlcjEwOTQ1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/109458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WilliamTambellini",
"html_url": "https://github.com/WilliamTambellini",
"followers_url": "https://api.github.com/users/WilliamTambellini/followers",
"following_url": "https://api.github.com/users/WilliamTambellini/following{/other_user}",
"gists_url": "https://api.github.com/users/WilliamTambellini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WilliamTambellini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WilliamTambellini/subscriptions",
"organizations_url": "https://api.github.com/users/WilliamTambellini/orgs",
"repos_url": "https://api.github.com/users/WilliamTambellini/repos",
"events_url": "https://api.github.com/users/WilliamTambellini/events{/privacy}",
"received_events_url": "https://api.github.com/users/WilliamTambellini/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @WilliamTambellini, have you tried to follow the standard ONNX procedure for converting a PyTorch model?\r\nThe model in this repo are just regular PyTorch models.",
"Hello Thomas, I ve not yet tried, just seen : \r\nhttps://github.com/onnx/models/issues/130\r\nhttps://stackoverflow.com/questions/54220042/how-do-you-generate-an-onnx-representation-of-a-pytorch-bert-pretrained-neural-n\r\nWill try, tks. ",
"Hi, when I try to export a TokenClassification model to a ONNX model, I encounter `RuntimeError: ONNX export failed: Couldn't export operator aten::erf`, does that mean some part of BERT model layers not supported by ONNX?\r\nI think that problem comes from the definition of GELU function, which is `x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))`. Should I try to use other way to calculate this function or wait for ONNX to support this opertator? ",
"@geekboood update your pytorch version to latest and the problem will most likely go away.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"for anyone who is looking for the answer:\r\ntorch=1.1.0 \r\npython=3.6\r\n\r\n`torch.onnx.export(model, (input_ids, segment_ids, input_mask), \"bert.onnx\", verbose=False)`\r\n\r\nworks well for me",
"> for anyone who is looking for the answer:\r\n> torch=1.1.0\r\n> python=3.6\r\n> \r\n> `torch.onnx.export(model, (input_ids, segment_ids, input_mask), \"bert.onnx\", verbose=False)`\r\n> \r\n> works well for me\r\n\r\nHi, thanks for the answer. Do you get good results when using the exported model for inference in another framework? I exported a BertForQuestionAnswering model to ONNX without errors, but I'm getting wrong predictions when using onnxruntime or a second export to TF Serving and I can't figure out why!",
"Not sure if this is still an issue for you but in the BertForSequenceClassification model the parameters are in a different order\r\n\r\n`torch.onnx.export(model, (input_ids, input_mask, segment_ids), \"bert.onnx\", verbose=False)`\r\n\r\nworks as intended",
"@chessgecko wow you're right, thanks! working now",
"cc @mfuntowicz :)"
] | 1,549 | 1,570 | 1,564 | CONTRIBUTOR | null | Hi, would you assist in exporting/converting at least one model into the ONNX format?
https://onnx.ai
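Something along these lines is what I have in mind (a sketch only; the input shapes are arbitrary placeholders, and the tuple has to match the argument order of the model's `forward`):

```python
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

# dummy inputs used only for tracing; batch size 1, sequence length 128
input_ids = torch.zeros(1, 128, dtype=torch.long)
token_type_ids = torch.zeros(1, 128, dtype=torch.long)
attention_mask = torch.ones(1, 128, dtype=torch.long)

torch.onnx.export(model, (input_ids, token_type_ids, attention_mask),
                  "bert.onnx", verbose=False)
```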
Kind regards | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/260/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/259 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/259/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/259/comments | https://api.github.com/repos/huggingface/transformers/issues/259/events | https://github.com/huggingface/transformers/issues/259 | 407,475,195 | MDU6SXNzdWU0MDc0NzUxOTU= | 259 | please add option to load fine-tuned file to CPU if trained on GPU | {
"login": "bsugerman",
"id": 22836928,
"node_id": "MDQ6VXNlcjIyODM2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/22836928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bsugerman",
"html_url": "https://github.com/bsugerman",
"followers_url": "https://api.github.com/users/bsugerman/followers",
"following_url": "https://api.github.com/users/bsugerman/following{/other_user}",
"gists_url": "https://api.github.com/users/bsugerman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bsugerman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bsugerman/subscriptions",
"organizations_url": "https://api.github.com/users/bsugerman/orgs",
"repos_url": "https://api.github.com/users/bsugerman/repos",
"events_url": "https://api.github.com/users/bsugerman/events{/privacy}",
"received_events_url": "https://api.github.com/users/bsugerman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Indeed, this will be in the next release, thanks!",
"is there is a change according the CPU issue ?",
"Should be fixed now. Do you still have an error?",
"i am loading the model and i dont know how to load on CPU , gives me \"model.to\" to is not defined. Can you tell me how to send the model to run on CPU if trained on GPU.",
"Would love some explanation on how to do this as well!",
"I have the same question. how to load a model trained on GPU to CPU?",
"I am watching this as well.",
"I have the same question. Is there any option for using CPU-only?",
"watching this as well",
"same"
] | 1,549 | 1,652 | 1,549 | NONE | null | I fine-tuned the pytorch_model.bin on a GPU machine (Google Cloud) but need to use it on my home computer (no GPU). When I tried to open it using `model = BertForMaskedLM.from_pretrained(bert_version)` I got the following error:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available()
is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu'
to map your storages to the CPU.
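For now I can work around it like this (a sketch; it assumes the installed version's `from_pretrained()` accepts a `state_dict` argument):

```python
import torch
from pytorch_pretrained_bert import BertForMaskedLM

bert_version = "bert-base-uncased"  # same value as used for fine-tuning

# load the GPU-saved checkpoint onto CPU explicitly, then hand it over
state_dict = torch.load("pytorch_model.bin", map_location="cpu")
model = BertForMaskedLM.from_pretrained(bert_version, state_dict=state_dict)
```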
Perhaps you can add an option to `from_pretrained()` such as `cpu=True` which would then call
`torch.load(weights_path, map_location='cpu')` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/259/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/258 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/258/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/258/comments | https://api.github.com/repos/huggingface/transformers/issues/258/events | https://github.com/huggingface/transformers/pull/258 | 407,373,796 | MDExOlB1bGxSZXF1ZXN0MjUwODYzMTI1 | 258 | Fix the undefined variable in squad example | {
"login": "BoeingX",
"id": 12154983,
"node_id": "MDQ6VXNlcjEyMTU0OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/12154983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BoeingX",
"html_url": "https://github.com/BoeingX",
"followers_url": "https://api.github.com/users/BoeingX/followers",
"following_url": "https://api.github.com/users/BoeingX/following{/other_user}",
"gists_url": "https://api.github.com/users/BoeingX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BoeingX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BoeingX/subscriptions",
"organizations_url": "https://api.github.com/users/BoeingX/orgs",
"repos_url": "https://api.github.com/users/BoeingX/repos",
"events_url": "https://api.github.com/users/BoeingX/events{/privacy}",
"received_events_url": "https://api.github.com/users/BoeingX/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @BoeingX !"
] | 1,549 | 1,549 | 1,549 | CONTRIBUTOR | null | `train_dataset` is undefined | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/258/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/258",
"html_url": "https://github.com/huggingface/transformers/pull/258",
"diff_url": "https://github.com/huggingface/transformers/pull/258.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/258.patch",
"merged_at": 1549481599000
} |
https://api.github.com/repos/huggingface/transformers/issues/257 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/257/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/257/comments | https://api.github.com/repos/huggingface/transformers/issues/257/events | https://github.com/huggingface/transformers/issues/257 | 407,218,110 | MDU6SXNzdWU0MDcyMTgxMTA= | 257 | Minor redundancy in model definition? | {
"login": "mttk",
"id": 3007947,
"node_id": "MDQ6VXNlcjMwMDc5NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3007947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mttk",
"html_url": "https://github.com/mttk",
"followers_url": "https://api.github.com/users/mttk/followers",
"following_url": "https://api.github.com/users/mttk/following{/other_user}",
"gists_url": "https://api.github.com/users/mttk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mttk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mttk/subscriptions",
"organizations_url": "https://api.github.com/users/mttk/orgs",
"repos_url": "https://api.github.com/users/mttk/repos",
"events_url": "https://api.github.com/users/mttk/events{/privacy}",
"received_events_url": "https://api.github.com/users/mttk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes, feel free to submit a PR. Otherwise, I'll fix it in the next release."
] | 1,549 | 1,551 | 1,551 | NONE | null | this is a _major_ nitpick but it was a bit confusing at first:
https://github.com/huggingface/pytorch-pretrained-BERT/blob/822915142b2f201c0b01acd7cffe1b05994d2d82/pytorch_pretrained_bert/modeling.py#L206-L212
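For context, the referenced lines compute roughly the following (paraphrased from the link above, not a verbatim copy):

```python
# inside BertSelfAttention.__init__
if config.hidden_size % config.num_attention_heads != 0:
    raise ValueError(
        "The hidden size (%d) is not a multiple of the number of attention "
        "heads (%d)" % (config.hidden_size, config.num_attention_heads))
self.num_attention_heads = config.num_attention_heads
self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
# L212: always equals config.hidden_size given the check above
self.all_head_size = self.num_attention_heads * self.attention_head_size
```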
L212 can simply be replaced by `self.all_head_size = config.hidden_size`, since you already error out if the result of the division isn't a whole number. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/257/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/257/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/256 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/256/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/256/comments | https://api.github.com/repos/huggingface/transformers/issues/256/events | https://github.com/huggingface/transformers/issues/256 | 407,051,972 | MDU6SXNzdWU0MDcwNTE5NzI= | 256 | does run_lm_finetuning.py actually use --eval_batch_size? | {
"login": "bsugerman",
"id": 22836928,
"node_id": "MDQ6VXNlcjIyODM2OTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/22836928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bsugerman",
"html_url": "https://github.com/bsugerman",
"followers_url": "https://api.github.com/users/bsugerman/followers",
"following_url": "https://api.github.com/users/bsugerman/following{/other_user}",
"gists_url": "https://api.github.com/users/bsugerman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bsugerman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bsugerman/subscriptions",
"organizations_url": "https://api.github.com/users/bsugerman/orgs",
"repos_url": "https://api.github.com/users/bsugerman/repos",
"events_url": "https://api.github.com/users/bsugerman/events{/privacy}",
"received_events_url": "https://api.github.com/users/bsugerman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"No, there's no evaluation step in the example script yet. What I can recommend is using your downstream task for evaluation of the pretrained BERT. Alternatively, you could of course also add some evaluation of the LM / nextSentence loss on a validation set.",
"Perhaps for clarity then, that parameter should be taken out of the script? ",
"Sure, makes sense. I created a PR. Thanks for pointing this out @bsugerman .",
"Fixed in master now."
] | 1,549 | 1,551 | 1,551 | NONE | null | I'm looking through this code (thanks so much for writing it, btw) and I'm not seeing whether it actually uses eval_batch_size at all. If it doesn't, is it still performing an evaluation step to assess goodness of fit? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/256/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/255 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/255/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/255/comments | https://api.github.com/repos/huggingface/transformers/issues/255/events | https://github.com/huggingface/transformers/issues/255 | 407,024,928 | MDU6SXNzdWU0MDcwMjQ5Mjg= | 255 | Error while using Apex | {
"login": "chenyangh",
"id": 8120212,
"node_id": "MDQ6VXNlcjgxMjAyMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8120212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chenyangh",
"html_url": "https://github.com/chenyangh",
"followers_url": "https://api.github.com/users/chenyangh/followers",
"following_url": "https://api.github.com/users/chenyangh/following{/other_user}",
"gists_url": "https://api.github.com/users/chenyangh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chenyangh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chenyangh/subscriptions",
"organizations_url": "https://api.github.com/users/chenyangh/orgs",
"repos_url": "https://api.github.com/users/chenyangh/repos",
"events_url": "https://api.github.com/users/chenyangh/events{/privacy}",
"received_events_url": "https://api.github.com/users/chenyangh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @chenyangh,\r\nYou need to install `apex` with the C++ and CUDA extensions:\r\n```bash\r\ngit clone https://github.com/NVIDIA/apex.git\r\ncd apex\r\npython setup.py install --cuda_ext --cpp_ext\r\n```",
"@thomwolf \r\nThanks!",
"@thomwolf After doing what you wrote, I got this error.\r\n\r\ntorch.__version__ = 1.0.1.post2\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 60, in <module>\r\n raise RuntimeError(\"--cuda_ext was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.\")\r\nRuntimeError: --cuda_ext was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.\r\n\r\nWhat else should I do for this?",
"You should refer to apex installation instructions.\r\nApex has slightly changed since my comment so best is to go read NVIDIA's README and installation instructions here: https://github.com/NVIDIA/apex",
"@kbulutozler You can change pytorch docker image version to pytorch/pytorch:1.3-cuda10.1-cudnn7-devel"
] | 1,549 | 1,576 | 1,549 | NONE | null | Hi, I am trying to do mixed precision training, but I have encountered a problem that seems to be related to the LayerNorm implementation of Apex. I get the following error message while running the example (and the same error in my other code).
```
Traceback (most recent call last):
File "run_lm_finetuning.py", line 648, in <module>
main()
File "run_lm_finetuning.py", line 529, in main
model = BertForPreTraining.from_pretrained(args.bert_model)
File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 506, in from_pretrained
model = cls(config, *inputs, **kwargs)
File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 689, in __init__
self.bert = BertModel(config)
File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 600, in __init__
self.embeddings = BertEmbeddings(config)
File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 183, in __init__
self.LayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12)
File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/site-packages/apex-0.1-py3.6.egg/apex/normalization/fused_layer_norm.py", line 126, in __init__
File "/home/chenyang/anaconda3/envs/pytorch10/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'fused_layer_norm_cuda'
```
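The failing import can be reproduced on its own (a quick check; as far as I understand, `fused_layer_norm_cuda` is a compiled extension that only exists when apex is built with its C++/CUDA extensions):

```python
try:
    import fused_layer_norm_cuda  # noqa: F401
    print("apex compiled extensions are available")
except ImportError:
    print("apex was installed without its compiled extensions")
```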
I am wondering if it is related to the version of Apex, so may I know which Apex checkpoint you used? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/255/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/254 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/254/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/254/comments | https://api.github.com/repos/huggingface/transformers/issues/254/events | https://github.com/huggingface/transformers/pull/254 | 407,014,649 | MDExOlB1bGxSZXF1ZXN0MjUwNTg0MTgx | 254 | Python 2 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,549 | 1,549 | 1,549 | MEMBER | null | Make the package compatible with Python 2.7+ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/254/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/254",
"html_url": "https://github.com/huggingface/transformers/pull/254",
"diff_url": "https://github.com/huggingface/transformers/pull/254.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/254.patch",
"merged_at": 1549891167000
} |
https://api.github.com/repos/huggingface/transformers/issues/253 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/253/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/253/comments | https://api.github.com/repos/huggingface/transformers/issues/253/events | https://github.com/huggingface/transformers/pull/253 | 406,979,960 | MDExOlB1bGxSZXF1ZXN0MjUwNTU2Njk0 | 253 | Merge pull request #1 from huggingface/master | {
"login": "sashank06",
"id": 8636933,
"node_id": "MDQ6VXNlcjg2MzY5MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8636933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashank06",
"html_url": "https://github.com/sashank06",
"followers_url": "https://api.github.com/users/sashank06/followers",
"following_url": "https://api.github.com/users/sashank06/following{/other_user}",
"gists_url": "https://api.github.com/users/sashank06/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashank06/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashank06/subscriptions",
"organizations_url": "https://api.github.com/users/sashank06/orgs",
"repos_url": "https://api.github.com/users/sashank06/repos",
"events_url": "https://api.github.com/users/sashank06/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashank06/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,549 | 1,549 | 1,549 | NONE | null | updating the repo | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/253/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/253",
"html_url": "https://github.com/huggingface/transformers/pull/253",
"diff_url": "https://github.com/huggingface/transformers/pull/253.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/253.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/252 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/252/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/252/comments | https://api.github.com/repos/huggingface/transformers/issues/252/events | https://github.com/huggingface/transformers/issues/252 | 406,919,939 | MDU6SXNzdWU0MDY5MTk5Mzk= | 252 | BERT tuning all parameters? | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"In the examples scripts, it tunes the whole model.\r\nBut BERT models classes are just regular PyTorch `nn.Modules` so you can also freeze layer like you would do in any PyTorch module."
] | 1,549 | 1,549 | 1,549 | CONTRIBUTOR | null | Just a clarification question:
when tuning BERT parameters (say, for SQuAD), does it tune only the parameters of the final layer, or the whole BERT model?
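For instance, if only the head were trained, I would expect something like this to be needed (a minimal sketch, assuming the usual `model.bert` submodule):

```python
import torch
from pytorch_pretrained_bert import BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

# freeze the BERT encoder so only the task-specific head is updated
for param in model.bert.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=3e-5)
```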
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/252/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/251 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/251/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/251/comments | https://api.github.com/repos/huggingface/transformers/issues/251/events | https://github.com/huggingface/transformers/pull/251 | 406,419,974 | MDExOlB1bGxSZXF1ZXN0MjUwMTIyMDA2 | 251 | Only keep the active part mof the loss for token classification | {
"login": "Iwontbecreative",
"id": 494951,
"node_id": "MDQ6VXNlcjQ5NDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iwontbecreative",
"html_url": "https://github.com/Iwontbecreative",
"followers_url": "https://api.github.com/users/Iwontbecreative/followers",
"following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}",
"gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions",
"organizations_url": "https://api.github.com/users/Iwontbecreative/orgs",
"repos_url": "https://api.github.com/users/Iwontbecreative/repos",
"events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iwontbecreative/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks Thibault!"
] | 1,549 | 1,549 | 1,549 | CONTRIBUTOR | null | If the attention mask is not None, then we want to restrict our loss to the items that are not padding (here assumed to be those with attention_mask = 1). This is important when doing e.g. NER. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/251/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/251/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/251",
"html_url": "https://github.com/huggingface/transformers/pull/251",
"diff_url": "https://github.com/huggingface/transformers/pull/251.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/251.patch",
"merged_at": 1549380826000
} |
https://api.github.com/repos/huggingface/transformers/issues/250 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/250/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/250/comments | https://api.github.com/repos/huggingface/transformers/issues/250/events | https://github.com/huggingface/transformers/pull/250 | 406,086,511 | MDExOlB1bGxSZXF1ZXN0MjQ5ODc2Mjk2 | 250 | Fix squad answer start and end position | {
"login": "cooelf",
"id": 7037265,
"node_id": "MDQ6VXNlcjcwMzcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7037265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cooelf",
"html_url": "https://github.com/cooelf",
"followers_url": "https://api.github.com/users/cooelf/followers",
"following_url": "https://api.github.com/users/cooelf/following{/other_user}",
"gists_url": "https://api.github.com/users/cooelf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cooelf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cooelf/subscriptions",
"organizations_url": "https://api.github.com/users/cooelf/orgs",
"repos_url": "https://api.github.com/users/cooelf/repos",
"events_url": "https://api.github.com/users/cooelf/events{/privacy}",
"received_events_url": "https://api.github.com/users/cooelf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @cooelf, there was a PR merging `run_squad` and `run_squad2` that also fixed this issue."
] | 1,549 | 1,549 | 1,549 | CONTRIBUTOR | null | The previous version might miss some overly long answer start and end indices (which should be set to 0), and sometimes the start/end positions would fall outside the model inputs.
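Concretely, this is the check I have in mind (a sketch; the names follow the example script, and it mirrors the clipping done in the original TensorFlow implementation):

```python
# decide out-of-span using the subword-level answer positions
out_of_span = not (tok_start_position >= doc_start and
                   tok_end_position <= doc_end)
if out_of_span:
    start_position = 0
    end_position = 0
else:
    doc_offset = len(query_tokens) + 2  # [CLS] + query + [SEP]
    start_position = tok_start_position - doc_start + doc_offset
    end_position = tok_end_position - doc_start + doc_offset
```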
Doc_start and doc_end are based on tokenized subword sequences, while example.start_position and example.end_position are word-level indices in the original text, which are usually shorter than the subword sequences. Why not use tok_start_position and tok_end_position for the comparison? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/250/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/250/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/250",
"html_url": "https://github.com/huggingface/transformers/pull/250",
"diff_url": "https://github.com/huggingface/transformers/pull/250.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/250.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/249 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/249/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/249/comments | https://api.github.com/repos/huggingface/transformers/issues/249/events | https://github.com/huggingface/transformers/issues/249 | 406,057,158 | MDU6SXNzdWU0MDYwNTcxNTg= | 249 | Fine tuning Bert for Question answering | {
"login": "MathewAlexander",
"id": 36654272,
"node_id": "MDQ6VXNlcjM2NjU0Mjcy",
"avatar_url": "https://avatars.githubusercontent.com/u/36654272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MathewAlexander",
"html_url": "https://github.com/MathewAlexander",
"followers_url": "https://api.github.com/users/MathewAlexander/followers",
"following_url": "https://api.github.com/users/MathewAlexander/following{/other_user}",
"gists_url": "https://api.github.com/users/MathewAlexander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MathewAlexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MathewAlexander/subscriptions",
"organizations_url": "https://api.github.com/users/MathewAlexander/orgs",
"repos_url": "https://api.github.com/users/MathewAlexander/repos",
"events_url": "https://api.github.com/users/MathewAlexander/events{/privacy}",
"received_events_url": "https://api.github.com/users/MathewAlexander/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The weights of the BERT base model are getting updated while finetuning the network.",
"Indeed"
] | 1,549 | 1,549 | 1,549 | NONE | null | I just wanted to ask whether the weights of the BERT base model get updated while fine-tuning BERT for question answering. I see that the BERT for QA is a model with a linear layer on top of the pre-trained BERT model. I am trying to reproduce the same model in Keras. Could anyone tell me if I should freeze the layers in the BERT base model or not?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/249/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/248 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/248/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/248/comments | https://api.github.com/repos/huggingface/transformers/issues/248/events | https://github.com/huggingface/transformers/pull/248 | 405,819,449 | MDExOlB1bGxSZXF1ZXN0MjQ5Njk0NjIx | 248 | fix prediction on run-squad.py example | {
"login": "JoeDumoulin",
"id": 2422288,
"node_id": "MDQ6VXNlcjI0MjIyODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2422288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoeDumoulin",
"html_url": "https://github.com/JoeDumoulin",
"followers_url": "https://api.github.com/users/JoeDumoulin/followers",
"following_url": "https://api.github.com/users/JoeDumoulin/following{/other_user}",
"gists_url": "https://api.github.com/users/JoeDumoulin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoeDumoulin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoeDumoulin/subscriptions",
"organizations_url": "https://api.github.com/users/JoeDumoulin/orgs",
"repos_url": "https://api.github.com/users/JoeDumoulin/repos",
"events_url": "https://api.github.com/users/JoeDumoulin/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoeDumoulin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @JoeDumoulin "
] | 1,549 | 1,549 | 1,549 | CONTRIBUTOR | null | run_squad.py exits with an error when running do_predict without training. The error is due to the model_state_dict not existing when --do_predict is selected.
Traceback (most recent call last):
File "run_squad.py", line 980, in <module>
main()
File "run_squad.py", line 923, in main
model_state_dict = torch.load(output_model_file)
File "/home/joe/pytorchnlp/lib/python3.5/site-packages/torch/serialization.py", line 365, in load
f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '../../BERT_work/Squad/pytorch_model.bin' | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/248/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/248",
"html_url": "https://github.com/huggingface/transformers/pull/248",
"diff_url": "https://github.com/huggingface/transformers/pull/248.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/248.patch",
"merged_at": 1549378853000
} |
https://api.github.com/repos/huggingface/transformers/issues/247 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/247/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/247/comments | https://api.github.com/repos/huggingface/transformers/issues/247/events | https://github.com/huggingface/transformers/issues/247 | 405,757,654 | MDU6SXNzdWU0MDU3NTc2NTQ= | 247 | Multilabel classification and diverging loss | {
"login": "nicolas-mng",
"id": 37110816,
"node_id": "MDQ6VXNlcjM3MTEwODE2",
"avatar_url": "https://avatars.githubusercontent.com/u/37110816?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicolas-mng",
"html_url": "https://github.com/nicolas-mng",
"followers_url": "https://api.github.com/users/nicolas-mng/followers",
"following_url": "https://api.github.com/users/nicolas-mng/following{/other_user}",
"gists_url": "https://api.github.com/users/nicolas-mng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicolas-mng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicolas-mng/subscriptions",
"organizations_url": "https://api.github.com/users/nicolas-mng/orgs",
"repos_url": "https://api.github.com/users/nicolas-mng/repos",
"events_url": "https://api.github.com/users/nicolas-mng/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicolas-mng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, I am working on something similar. I feel like the original code might be incorrect. They seem to directly take the output of the model as 'loss' without applying any criteria. But I might be totally wrong. ",
"Hey! :)\r\n\r\nReally? What do you mean by criteria?\r\n\r\nI tried to artificially change my dataset so that the target outputs are\r\n[1, 0, 0,..., 0] for every sample. I wanted to see whether the model was\r\nable to learn this dummy case. However, it fails and it predicts the exact\r\nopposite, i.e. [0, 1, 1,.... 1]. That's why I think I must be doing\r\nsomething wrong somewhere.\r\n",
"Hi @nicolas-mng @zhipeng-fan Did you guys manage to get a multilabel problem to work ? Could you please share a gist ?",
"Hi, you should have a look here: https://medium.com/huggingface/multi-label-text-classification-using-bert-the-mighty-transformer-69714fa3fb3d ;) "
] | 1,549 | 1,559 | 1,549 | NONE | null | Hi,
I'm not sure I'm posting this in the right spot, but I am trying to use your excellent implementation to do some multi-label classification on text. I basically adapted the run_classifier.py code to a Jupyter notebook and changed the BERT sequence classifier model a little so it can handle multi-label classification. However, my loss tends to diverge and my outputs are either all ones or all zeros.
The label distribution in my training dataset is:
`array([ 65, 564, 108, 17, 40, 26, 306, 195, 25, 345, 54, 80, 214]) `
i.e. label 1 is used 65 times, label 2 is used 564 times, etc. Each sample has between 1 and 4 labels.
I am using the Adam optimizer with BCEWithLogitsLoss and I am unable to figure out where the problem comes from. Should I add some weights to my loss function? Am I using it the right way? Is my model wrong somewhere? I attach a notebook of my test to this post; maybe someone has encountered the same problem before and could help me?
[NOTEBOOK](https://nbviewer.jupyter.org/github/nicolas-mingione/nlp/blob/master/test_bert.ipynb)
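The kind of weighting I am wondering about would look like this (a sketch; `n_samples` is a placeholder for my training-set size, and it assumes a torch version whose `BCEWithLogitsLoss` supports `pos_weight`):

```python
import torch

# per-label positive counts from the training set (the array above)
counts = torch.tensor([65., 564., 108., 17., 40., 26., 306.,
                       195., 25., 345., 54., 80., 214.])
n_samples = 1000.0  # placeholder: total number of training samples

# negative/positive ratio per label to counter class imbalance
pos_weight = (n_samples - counts) / counts
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
```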
Thanks ! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/247/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/246 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/246/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/246/comments | https://api.github.com/repos/huggingface/transformers/issues/246/events | https://github.com/huggingface/transformers/pull/246 | 405,581,136 | MDExOlB1bGxSZXF1ZXN0MjQ5NTA5Njg5 | 246 | Accurate SQuAD answer start and end position | {
"login": "cooelf",
"id": 7037265,
"node_id": "MDQ6VXNlcjcwMzcyNjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7037265?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cooelf",
"html_url": "https://github.com/cooelf",
"followers_url": "https://api.github.com/users/cooelf/followers",
"following_url": "https://api.github.com/users/cooelf/following{/other_user}",
"gists_url": "https://api.github.com/users/cooelf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cooelf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cooelf/subscriptions",
"organizations_url": "https://api.github.com/users/cooelf/orgs",
"repos_url": "https://api.github.com/users/cooelf/repos",
"events_url": "https://api.github.com/users/cooelf/events{/privacy}",
"received_events_url": "https://api.github.com/users/cooelf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,549 | 1,549 | 1,549 | CONTRIBUTOR | null | The previous version might miss some overly long answer start and end indices (which should be set to 0), and sometimes the start/end positions would fall outside the model inputs.
Doc_start and doc_end are based on tokenized subword sequences, while example.start_position and example.end_position are word-level indices in the original text, which are usually shorter than the subword sequences. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/246/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/246",
"html_url": "https://github.com/huggingface/transformers/pull/246",
"diff_url": "https://github.com/huggingface/transformers/pull/246.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/246.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/245 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/245/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/245/comments | https://api.github.com/repos/huggingface/transformers/issues/245/events | https://github.com/huggingface/transformers/issues/245 | 405,396,619 | MDU6SXNzdWU0MDUzOTY2MTk= | 245 | can you do a new release + pypi | {
"login": "joelgrus",
"id": 1308313,
"node_id": "MDQ6VXNlcjEzMDgzMTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1308313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joelgrus",
"html_url": "https://github.com/joelgrus",
"followers_url": "https://api.github.com/users/joelgrus/followers",
"following_url": "https://api.github.com/users/joelgrus/following{/other_user}",
"gists_url": "https://api.github.com/users/joelgrus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joelgrus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joelgrus/subscriptions",
"organizations_url": "https://api.github.com/users/joelgrus/orgs",
"repos_url": "https://api.github.com/users/joelgrus/repos",
"events_url": "https://api.github.com/users/joelgrus/events{/privacy}",
"received_events_url": "https://api.github.com/users/joelgrus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi Joel, yes the new release (0.5.0) is pretty much ready (remaining work on branches `fifth-release` and `transfo-xl` to finish testing the newly added pre-trained OpenAI GPT and Transformer-XL).\r\n\r\nLikely next week.",
"Awesome, thanks!\n\nOn Thu, Jan 31, 2019, 11:06 AM Thomas Wolf <[email protected]> wrote:\n\n> Hi Joel, yes the new release (0.5.0) is pretty much ready (remaining work\n> on branches fifth-release and transfo-xl to finish testing the newly\n> added pre-trained OpenAI GPT and Transformer-XL).\n>\n> Likely next week.\n>\n> β\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/245#issuecomment-459506373>,\n> or mute the thread\n> <https://github.com/notifications/unsubscribe-auth/ABP2mYQxXrEKfGvJhESKqKIQzI2dvcolks5vI1rUgaJpZM4ac94Y>\n> .\n>\n",
"Ok @joelgrus, the new release is out: https://github.com/huggingface/pytorch-pretrained-BERT/releases/tag/v0.5.0"
] | 1,548 | 1,549 | 1,549 | CONTRIBUTOR | null | We've been getting some requests to incorporate newer features into allennlp that are only on master (e.g. `never_split`).
thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/245/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/244 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/244/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/244/comments | https://api.github.com/repos/huggingface/transformers/issues/244/events | https://github.com/huggingface/transformers/pull/244 | 405,184,953 | MDExOlB1bGxSZXF1ZXN0MjQ5MTk4NTU1 | 244 | Avoid confusion of inplace LM masking | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great, thanks @tholor!"
] | 1,548 | 1,549 | 1,549 | NONE | null | Fix the confusion caused by LM masking happening in place. Discussed in https://github.com/huggingface/pytorch-pretrained-BERT/issues/243 and https://github.com/huggingface/pytorch-pretrained-BERT/issues/226 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/244/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/244",
"html_url": "https://github.com/huggingface/transformers/pull/244",
"diff_url": "https://github.com/huggingface/transformers/pull/244.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/244.patch",
"merged_at": 1549019870000
} |
https://api.github.com/repos/huggingface/transformers/issues/243 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/243/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/243/comments | https://api.github.com/repos/huggingface/transformers/issues/243/events | https://github.com/huggingface/transformers/issues/243 | 405,156,658 | MDU6SXNzdWU0MDUxNTY2NTg= | 243 | seems there is a bug in fine tuning language model | {
"login": "imhuim982",
"id": 9867069,
"node_id": "MDQ6VXNlcjk4NjcwNjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9867069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imhuim982",
"html_url": "https://github.com/imhuim982",
"followers_url": "https://api.github.com/users/imhuim982/followers",
"following_url": "https://api.github.com/users/imhuim982/following{/other_user}",
"gists_url": "https://api.github.com/users/imhuim982/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imhuim982/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imhuim982/subscriptions",
"organizations_url": "https://api.github.com/users/imhuim982/orgs",
"repos_url": "https://api.github.com/users/imhuim982/repos",
"events_url": "https://api.github.com/users/imhuim982/events{/privacy}",
"received_events_url": "https://api.github.com/users/imhuim982/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I believe this is not a bug you are referring to, but indeed some confusing part of the code that we should probably change to avoid future confusion. `tokens_a` get masked **inplace** by the method `random_word`. `tokens_a` and `t1_random` refer indeed to the same objects. You can see that the input got masked also by checking the logs of the first examples:\r\n```\r\n Iteration: 0%| | 0/196 [00:00<?, ?it/s]01/31/2019 11:31:35 - INFO - __main__ - *** Example ***\r\n01/31/2019 11:31:35 - INFO - __main__ - guid: 0\r\n01/31/2019 11:31:35 - INFO - __main__ - tokens: [CLS] [MASK] to 95 % of the [MASK] ##y [MASK] ' s [MASK] [SEP] le ##que ##ux ( 2005 : [MASK] ) . [SEP]\r\n01/31/2019 11:31:35 - INFO - __main__ - input_ids: 101 103 1106 4573 110 1104 1103 103 1183 103 112 188 103 102 5837 3530 5025 113 1478 131 103 114 119 102\r\n01/31/2019 11:31:35 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\r\n01/31/2019 11:31:35 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1\r\n01/31/2019 11:31:35 - INFO - __main__ - LM label: [-1, 1146, -1, -1, -1, -1, -1, 6831, -1, 1236, -1, -1, 14296, -1, -1, -1, -1, -1, -1, -1, 125, -1, -1, -1] \r\n01/31/2019 11:31:35 - INFO - __main__ - Is next sentence label: 0 \r\n01/31/2019 11:31:37 - INFO - __main__ - *** Example ***\r\n01/31/2019 11:31:37 - INFO - __main__ - guid: 1\r\n01/31/2019 11:31:37 - INFO - __main__ - tokens: [CLS] a car [MASK] [MASK] 200 mill ##ig ##rams . [SEP] the rain had only ceased with the gray [MASK] of morning [MASK] [SEP]\r\n01/31/2019 11:31:37 - INFO - __main__ - input_ids: 101 170 1610 103 103 2363 6159 6512 24818 119 102 1103 4458 1125 1178 6445 1114 1103 5021 103 1104 2106 103 102\r\n01/31/2019 11:31:37 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\r\n01/31/2019 11:31:37 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1\r\n01/31/2019 11:31:37 - INFO - __main__ - LM label: [-1, -1, -1, 2980, 1110, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 24177, -1, -1, 1120, -1] \r\n01/31/2019 11:31:37 - INFO - __main__ - Is next sentence label: 1 \r\n```\r\n\r\n Please see discussion in https://github.com/huggingface/pytorch-pretrained-BERT/issues/226",
"ah, ok, i see, thx~",
"Added a PR to simplify this.",
"Btw, would inplace mask pollute training data in 'on_memory' way?",
"You mean if the original training sentence stored in `train_dataset.all_docs` get somehow modified (= masked)?!\r\n=> No, this object is not touched by the LM masking / padding / cutting "
] | 1,548 | 1,548 | 1,548 | NONE | null | For the masked language model, the input should be the masked tokens. But in examples/run_lm_finetuning.py, the input is not masked.
In the method convert_example_to_features, is it supposed to use the masked output as the token input?
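(As the first reply explains, `random_word` mutates `tokens_a` in place, so `t1_random` and `tokens_a` are the same list object; a minimal sketch of that aliasing:)
```python
tokens_a = ["jim", "henson"]
t1_random = tokens_a         # no copy: both names point to the same list
t1_random[0] = "[MASK]"
print(tokens_a)              # ['[MASK]', 'henson'], so tokens_a was masked in place
```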
```python
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:  # is tokens_a supposed to be t1_random?
    tokens.append(token)
    segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
assert len(tokens_b) > 0
for token in tokens_b:
    tokens.append(token)
    segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/243/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/242 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/242/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/242/comments | https://api.github.com/repos/huggingface/transformers/issues/242/events | https://github.com/huggingface/transformers/pull/242 | 404,944,006 | MDExOlB1bGxSZXF1ZXN0MjQ5MDEyODU5 | 242 | Fix argparse type error | {
"login": "ksurya",
"id": 932927,
"node_id": "MDQ6VXNlcjkzMjkyNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/932927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksurya",
"html_url": "https://github.com/ksurya",
"followers_url": "https://api.github.com/users/ksurya/followers",
"following_url": "https://api.github.com/users/ksurya/following{/other_user}",
"gists_url": "https://api.github.com/users/ksurya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ksurya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ksurya/subscriptions",
"organizations_url": "https://api.github.com/users/ksurya/orgs",
"repos_url": "https://api.github.com/users/ksurya/repos",
"events_url": "https://api.github.com/users/ksurya/events{/privacy}",
"received_events_url": "https://api.github.com/users/ksurya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks Surya!"
] | 1,548 | 1,549 | 1,549 | CONTRIBUTOR | null | Resolved the following error when executing `run_squad2.py --help`:
```TypeError: %o format: an integer is required, not dict``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/242/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/242",
"html_url": "https://github.com/huggingface/transformers/pull/242",
"diff_url": "https://github.com/huggingface/transformers/pull/242.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/242.patch",
"merged_at": 1549019696000
} |
https://api.github.com/repos/huggingface/transformers/issues/241 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/241/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/241/comments | https://api.github.com/repos/huggingface/transformers/issues/241/events | https://github.com/huggingface/transformers/issues/241 | 404,905,842 | MDU6SXNzdWU0MDQ5MDU4NDI= | 241 | Tokenization doesn't seem to match BERT paper | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"WordPiece tokenization depends on the particular BERT model: In general, one model, say, bert-based-cased will produce a different tokenization than another, say, bert-large-uncased.\r\n\r\nIf you try all models, one or more might produce the tokenization shown in the example in the paper.\r\nIt might also happen that none of them does, in which case the example was probably produced with an unpublished model.\r\n\r\nA bug would be if the same model leads to different tokenizations in the pytorch and tensorflow version.",
"Hmm, I see. I didn't know that, thanks for pointing it out!\r\n\r\nFor what it's worth, `bert-base-multilingual-cased` is the only model (from those currently listed in the readme of this repo) that produces the tokenization shown in the example in the paper."
] | 1,548 | 1,548 | 1,548 | CONTRIBUTOR | null | In the [BERT paper](https://arxiv.org/abs/1810.04805) section 4.3 ("Named Entity Recognition") there is an example of some tokenized text:
```python
['Jim', 'Hen', '##son', 'was', 'a', 'puppet', '##eer']
```
However, when I take that sentence and try to tokenize it myself with `BertTokenizer` from this repo:
```python
from pytorch_pretrained_bert import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
text = "Jim Henson was a puppeteer"
tokenizer.tokenize(text) == ['Jim', 'He', '##nson', 'was', 'a', 'puppet', '##eer']
```
The same thing happens if I pre-tokenize and just use `BertTokenizer.wordpiece_tokenizer.tokenize()`:
```python
from itertools import chain
from pytorch_pretrained_bert import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-cased', do_lower_case=False)
text = "Jim Henson was a puppeteer".split()
list(chain.from_iterable([tokenizer.wordpiece_tokenizer.tokenize(token) for token in text])) == ['Jim', 'He', '##nson', 'was', 'a', 'puppet', '##eer']
```
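(As the replies note, WordPiece output depends on the pretrained vocabulary; a quick way to check which released model reproduces the paper's example, with the model list assumed from the README:)
```python
for name in ["bert-base-cased", "bert-large-cased", "bert-base-multilingual-cased"]:
    tok = BertTokenizer.from_pretrained(name, do_lower_case=False)
    print(name, tok.tokenize("Jim Henson was a puppeteer"))
# Per the replies, only bert-base-multilingual-cased yields ['Hen', '##son'] here.
```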
Is there something I am misunderstanding / doing wrong or is this an actual bug? The BERT paper and this repo tokenize `"Henson"` as `['Hen', '##son']` and `['He', '##nson']` respectively. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/241/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/240 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/240/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/240/comments | https://api.github.com/repos/huggingface/transformers/issues/240/events | https://github.com/huggingface/transformers/pull/240 | 404,901,084 | MDExOlB1bGxSZXF1ZXN0MjQ4OTc5MjIw | 240 | Minor update in README | {
"login": "girishponkiya",
"id": 2093282,
"node_id": "MDQ6VXNlcjIwOTMyODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2093282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/girishponkiya",
"html_url": "https://github.com/girishponkiya",
"followers_url": "https://api.github.com/users/girishponkiya/followers",
"following_url": "https://api.github.com/users/girishponkiya/following{/other_user}",
"gists_url": "https://api.github.com/users/girishponkiya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/girishponkiya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/girishponkiya/subscriptions",
"organizations_url": "https://api.github.com/users/girishponkiya/orgs",
"repos_url": "https://api.github.com/users/girishponkiya/repos",
"events_url": "https://api.github.com/users/girishponkiya/events{/privacy}",
"received_events_url": "https://api.github.com/users/girishponkiya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks Girishkumar!"
] | 1,548 | 1,549 | 1,549 | CONTRIBUTOR | null | Updated links to classes in `modeling.py` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/240/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/240",
"html_url": "https://github.com/huggingface/transformers/pull/240",
"diff_url": "https://github.com/huggingface/transformers/pull/240.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/240.patch",
"merged_at": 1549019645000
} |
https://api.github.com/repos/huggingface/transformers/issues/239 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/239/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/239/comments | https://api.github.com/repos/huggingface/transformers/issues/239/events | https://github.com/huggingface/transformers/issues/239 | 404,850,329 | MDU6SXNzdWU0MDQ4NTAzMjk= | 239 | cannot load BERTAdam when restoring from BioBert | {
"login": "mikerossgithub",
"id": 20446922,
"node_id": "MDQ6VXNlcjIwNDQ2OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/20446922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikerossgithub",
"html_url": "https://github.com/mikerossgithub",
"followers_url": "https://api.github.com/users/mikerossgithub/followers",
"following_url": "https://api.github.com/users/mikerossgithub/following{/other_user}",
"gists_url": "https://api.github.com/users/mikerossgithub/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikerossgithub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikerossgithub/subscriptions",
"organizations_url": "https://api.github.com/users/mikerossgithub/orgs",
"repos_url": "https://api.github.com/users/mikerossgithub/repos",
"events_url": "https://api.github.com/users/mikerossgithub/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikerossgithub/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I see. This is because they didn't use the same names for the adam optimizer variables than the Google team. I'll see if I can find a simple way around this for future cases.\r\n\r\nIn the mean time, you can install `pytorch-pretrained-bert` from the master (`git clone ...` and `pip install -e .`) and add the names of these variables (`BERTAdam` to the black-list line 53 in the conversion script: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py#L53",
"Hmm, loading bioberts parameters works for me. Mabye as a future feature we could have the option to load biobert parameters as an option in the package?\r\nI ran it like this: \r\n\r\n\r\n```\r\nconvert_tf_checkpoint_to_pytorch(\"AI/data/biobert/biobert_model.ckpt.index\",\r\n \"AI/data/biobert/bert_config.json\",\"AI/data/biobert/pytorch_model.bin\")\r\n```\r\n\r\nit also loads afterwards.",
"This can help. https://github.com/MeRajat/SolvingAlmostAnythingWithBert/blob/ner_medical/convert_to_pytorch_wt.ipynb ",
"After I convert the tensorflow checkpoint to pytorch model by excluding some variables as mentioned by @MeRajat , I get the following warnings when I tried to load the model. \r\n\r\n> 02/21/2019 17:33:06 - INFO - pytorch_pretrained_bert.modeling - Weights of BertForQuestionAnswering not initialized from pretrained model: ['qa_outputs.bias', 'qa_outputs.weight']\r\n02/21/2019 17:33:06 - INFO - pytorch_pretrained_bert.modeling - Weights from pretrained model not used in BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias']\r\n",
"This is normal. Closing the issue now."
] | 1,548 | 1,551 | 1,551 | NONE | null | I am trying to convert the recently released BioBert checkpoint: https://github.com/naver/biobert-pretrained
The conversion script loads the checkpoint, but appears to balk at BERTAdam when building the PyTorch model.
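(Per the replies, the fix is to skip optimizer variables during conversion; a minimal sketch of that filter, with the exact skip list being an assumption, followed by the failing output:)
```python
# Inside the TF-to-PyTorch variable loop: optimizer slots are not model weights
SKIP = ("adam_v", "adam_m", "BERTAdam")
if any(s in name for s in SKIP):
    continue
```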
```
...
Building PyTorch model from configuration: {
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 28996
}
Initialize PyTorch weight ['bert', 'embeddings', 'LayerNorm', 'beta']
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/venvs/dev3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/__main__.py", line 19, in <module>
convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)
File "/venvs/dev3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py", line 69, in convert_tf_checkpoint_to_pytorch
pointer = getattr(pointer, l[0])
AttributeError: 'Parameter' object has no attribute 'BERTAdam'
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/239/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/239/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/238 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/238/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/238/comments | https://api.github.com/repos/huggingface/transformers/issues/238/events | https://github.com/huggingface/transformers/issues/238 | 404,624,962 | MDU6SXNzdWU0MDQ2MjQ5NjI= | 238 | padded positions are ignored when embedding position ids | {
"login": "guxd",
"id": 6091014,
"node_id": "MDQ6VXNlcjYwOTEwMTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/6091014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guxd",
"html_url": "https://github.com/guxd",
"followers_url": "https://api.github.com/users/guxd/followers",
"following_url": "https://api.github.com/users/guxd/following{/other_user}",
"gists_url": "https://api.github.com/users/guxd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guxd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guxd/subscriptions",
"organizations_url": "https://api.github.com/users/guxd/orgs",
"repos_url": "https://api.github.com/users/guxd/repos",
"events_url": "https://api.github.com/users/guxd/events{/privacy}",
"received_events_url": "https://api.github.com/users/guxd/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1260952223,
"node_id": "MDU6TGFiZWwxMjYwOTUyMjIz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion",
"name": "Discussion",
"color": "22870e",
"default": false,
"description": "Discussion on a topic (keep it focused or open a new issue though)"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n"
] | 1,548 | 1,557 | 1,557 | NONE | null | When embedding position ids, all positions are considered, including padded ones.
```
seq_length = input_ids.size(1)
position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device)
```
This is different from most transformer implementations.
Should it instead be:
```
position_ids = np.array([
    [pos_i + 1 if w_i != PAD else 0
     for pos_i, w_i in enumerate(seq)]
    for seq in batch_seq])
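# (A torch equivalent of this padding-aware scheme, with PAD assumed to be
# the padding token id, would be:
#     mask = (input_ids != PAD).long()
#     position_ids = torch.cumsum(mask, dim=1) * mask
# Note that BERT's attention mask already prevents real tokens from attending
# to padded positions, so their position embeddings do not leak into the
# outputs that matter.)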
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/238/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/237 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/237/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/237/comments | https://api.github.com/repos/huggingface/transformers/issues/237/events | https://github.com/huggingface/transformers/issues/237 | 404,604,613 | MDU6SXNzdWU0MDQ2MDQ2MTM= | 237 | How can I change vocab size for pretrained model? | {
"login": "hahmyg",
"id": 3884429,
"node_id": "MDQ6VXNlcjM4ODQ0Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3884429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hahmyg",
"html_url": "https://github.com/hahmyg",
"followers_url": "https://api.github.com/users/hahmyg/followers",
"following_url": "https://api.github.com/users/hahmyg/following{/other_user}",
"gists_url": "https://api.github.com/users/hahmyg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hahmyg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahmyg/subscriptions",
"organizations_url": "https://api.github.com/users/hahmyg/orgs",
"repos_url": "https://api.github.com/users/hahmyg/repos",
"events_url": "https://api.github.com/users/hahmyg/events{/privacy}",
"received_events_url": "https://api.github.com/users/hahmyg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nIf you want to modify the vocabulary, you should refer to this part of the original repo `README` https://github.com/google-research/bert#learning-a-new-wordpiece-vocabulary",
"If you don't want a complete new vocabulary (which would require training from scratch), but extend the pretrained one with a couple of domain specific tokens, this comment from Jacob Devlin might help: \r\n\r\n> [...] if you want to add more vocab you can either:\r\n(a) Just replace the \"[unusedX]\" tokens with your vocabulary. Since these were not used they are effectively randomly initialized.\r\n(b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls.\r\n\r\n(https://github.com/google-research/bert/issues/9)\r\n\r\nI am currently experimenting with approach a). Since there are 993 unused tokens this might already help for the most important tokens in your domain.",
"@tholor and @rodgzilla answers are the way to go.\r\nClosing this issue since there no activity.\r\nFeel free to re-open if needed.",
"> If you don't want a complete new vocabulary (which would require training from scratch), but extend the pretrained one with a couple of domain specific tokens, this comment from Jacob Devlin might help:\r\n> \r\n> > [...] if you want to add more vocab you can either:\r\n> > (a) Just replace the \"[unusedX]\" tokens with your vocabulary. Since these were not used they are effectively randomly initialized.\r\n> > (b) Append it to the end of the vocab, and write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls.\r\n> \r\n> ([google-research/bert#9](https://github.com/google-research/bert/issues/9))\r\n> \r\n> I am currently experimenting with approach a). Since there are 993 unused tokens this might already help for the most important tokens in your domain.\r\n\r\n@tholor I have exactly the same situation as you had. I'm wondering If you can tell me how your experiment with approach (a) went. Did it improve the accuracy. I really appreciate if you can share your conclusion.",
"> @tholor and @rodgzilla answers are the way to go.\r\n> Closing this issue since there no activity.\r\n> Feel free to re-open if needed.\r\n\r\nHi @thomwolf , for implementing models like VideoBERT we need to append thousands of entries to the word embedding lookup table. How could we do so in Pytorch/any such examples using the library?",
"@tholor Can you guide me on how you are counting 993 unused tokens? I see only first 100 places of unused tokens?",
"For those finding this on the web, I found the following answer helpful: https://github.com/huggingface/transformers/issues/1413#issuecomment-538083512"
] | 1,548 | 1,689 | 1,549 | NONE | null | Is there a way to change (expand) the vocab size of a pretrained model?
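(For reference, the replies converge on resizing the token embedding before feeding new ids; a minimal sketch, with attribute names assumed for this library's BertModel and `num_new_tokens` a placeholder:)
```python
import torch

old = model.embeddings.word_embeddings
new = torch.nn.Embedding(old.num_embeddings + num_new_tokens, old.embedding_dim)
new.weight.data.normal_(mean=0.0, std=0.02)  # 0.02 matches BERT's initializer_range
new.weight.data[:old.num_embeddings] = old.weight.data
model.embeddings.word_embeddings = new
```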
When I input a new token id to the model, it returns:
```
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   1108         with torch.no_grad():
   1109             torch.embedding_renorm_(weight, input, max_norm, norm_type)
-> 1110     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   1111
   1112

RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/237/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/236 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/236/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/236/comments | https://api.github.com/repos/huggingface/transformers/issues/236/events | https://github.com/huggingface/transformers/issues/236 | 404,360,087 | MDU6SXNzdWU0MDQzNjAwODc= | 236 | Preprocessing necessary for lengthier text | {
"login": "StalVars",
"id": 6938028,
"node_id": "MDQ6VXNlcjY5MzgwMjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6938028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StalVars",
"html_url": "https://github.com/StalVars",
"followers_url": "https://api.github.com/users/StalVars/followers",
"following_url": "https://api.github.com/users/StalVars/following{/other_user}",
"gists_url": "https://api.github.com/users/StalVars/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StalVars/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StalVars/subscriptions",
"organizations_url": "https://api.github.com/users/StalVars/orgs",
"repos_url": "https://api.github.com/users/StalVars/repos",
"events_url": "https://api.github.com/users/StalVars/events{/privacy}",
"received_events_url": "https://api.github.com/users/StalVars/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok. I see you included this. (max_sent_length, max_query_length)\r\nI will debug my error.You probably can close this issue. "
] | 1,548 | 1,548 | 1,548 | NONE | null | Hi, I tried to train a SQuAD model on a different dataset where I have lengthier questions/contexts. It gave a memory error:
```
CUDA out of memory. Tried to allocate 4.50 MiB (GPU 5; 11.78 GiB total capacity;
```
This error seems to happen in PyTorch when there are lengthier data points (PyTorch reports how much it tried to allocate, as opposed to the usual CUDA out-of-memory error).
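(The TensorFlow BERT code avoids this by truncating over-long inputs up front; a sketch of that trimming, with names assumed from the SQuAD example and the reply above:)
```python
# Trim the question to a fixed budget before building model inputs
query_tokens = tokenizer.tokenize(example.question_text)
if len(query_tokens) > max_query_length:
    query_tokens = query_tokens[:max_query_length]
```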
The TensorFlow code for BERT doesn't give this error precisely because of that trimming. I think you should include it in this package as well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/236/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/235 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/235/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/235/comments | https://api.github.com/repos/huggingface/transformers/issues/235/events | https://github.com/huggingface/transformers/issues/235 | 404,298,845 | MDU6SXNzdWU0MDQyOTg4NDU= | 235 | Training BERT behind a proxy server | {
"login": "PeliconA",
"id": 43340947,
"node_id": "MDQ6VXNlcjQzMzQwOTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/43340947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PeliconA",
"html_url": "https://github.com/PeliconA",
"followers_url": "https://api.github.com/users/PeliconA/followers",
"following_url": "https://api.github.com/users/PeliconA/following{/other_user}",
"gists_url": "https://api.github.com/users/PeliconA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PeliconA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PeliconA/subscriptions",
"organizations_url": "https://api.github.com/users/PeliconA/orgs",
"repos_url": "https://api.github.com/users/PeliconA/repos",
"events_url": "https://api.github.com/users/PeliconA/events{/privacy}",
"received_events_url": "https://api.github.com/users/PeliconA/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can download the Tensorflow weights from Google's BERT repo and convert them as detailed in the readme of the present repo.",
"Hi tomwolf,\r\n\r\nI am new to XLNet and have the same issue as above. \r\n\r\nCould you direct me to the readme of the issue? I am not able to find it.\r\n\r\nI modified my code to resolve the issue to \r\n\r\n**model_file_address = '/myfolder/XLNetWork/xlnet-base-cased-config.json'**\r\n\r\nBut I get the below error on the line : \r\n**model = XLNetForSequenceClassification.from_pretrained(model_file_address,num_labels=len(tag2idx))**\r\n\r\nError:\r\n**UnpicklingError: invalid load key, '{'.**\r\n\r\nI am pretty much stucked.\r\n\r\nYour help will be appreciated.\r\n\r\nThanks,\r\n\r\nSaul\r\n",
"@SaulML Did you solve it? I got the same error, i.e. `UnpicklingError: invalid load key, '{'.`, when I tried to load pretrained bert using `model = BertForSequenceClassification.from_pretrained(\"/PathToBert/uncased_L-12_H-768_A-12/bert_config.json\", num_labels=2)`, thanks in advance!",
"You can now supply a `proxies` argument to `from_pretrained` when you are using proxies.\r\nCheck the doc and docstrings.",
"Got the same error as @SaulML and @iamxpy. Has anyone solved it?",
"Got the same unpickling error. Was it solved?",
"I think you need to just have the path as the directory rather than the config file."
] | 1,548 | 1,620 | 1,549 | NONE | null | When I try to run BERT training, I get the following error during the vocabulary download:
```
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /models.huggingface.co/bert/bert-large-uncased-vocab.txt (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fbb55377828>: Failed to establish a new connection: [Errno 110] Connection timed out'))
```
I am running the script behind a proxy server, which I suspect is the cause of this error. Is there any way to remedy this?
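(As noted in the replies, a later release added a `proxies` argument to `from_pretrained`; a sketch with placeholder proxy URLs:)
```python
proxies = {"http": "http://10.10.1.10:3128", "https": "http://10.10.1.10:1080"}
model = BertModel.from_pretrained("bert-large-uncased", proxies=proxies)
```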
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/235/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/234 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/234/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/234/comments | https://api.github.com/repos/huggingface/transformers/issues/234/events | https://github.com/huggingface/transformers/issues/234 | 404,294,481 | MDU6SXNzdWU0MDQyOTQ0ODE= | 234 | Fine tuning for evaluation | {
"login": "Alexadar",
"id": 14125937,
"node_id": "MDQ6VXNlcjE0MTI1OTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/14125937?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alexadar",
"html_url": "https://github.com/Alexadar",
"followers_url": "https://api.github.com/users/Alexadar/followers",
"following_url": "https://api.github.com/users/Alexadar/following{/other_user}",
"gists_url": "https://api.github.com/users/Alexadar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alexadar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alexadar/subscriptions",
"organizations_url": "https://api.github.com/users/Alexadar/orgs",
"repos_url": "https://api.github.com/users/Alexadar/repos",
"events_url": "https://api.github.com/users/Alexadar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alexadar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"1. For evaluation I would advise the maximum batch size that your GPU allows. You will be able to use more efficiently this way.\r\n\r\n2. I think you will be better off by using a single thread.",
"Thanks! How can i figure out optimal batch size? I want to try tesla k80",
"You increase it gradually and when the program crashes, it is too big ^^.",
"Thanks!",
"Guys, sorry i reopen this issue, but it might be helpful and on topic of evaluation\r\nI want to load batch of data into model for evaluation. Batch have size of 16 sentences of different length\r\nCode:\r\n```\r\ntokens_tensor = torch.tensor(indexed_tokens)\r\nsegments_tensors = torch.tensor(segments_ids)\r\npredictions = model(tokens_tensor, segments_tensors)\r\n```\r\nindexed_tokens are array of size 16 of arrays of inputs.\r\nI got error\r\nValueError: expected sequence of length 121 at dim 1 (got 23)\r\n\r\nwhen i create tensor from a single element\r\ntokens_tensor = torch.tensor([indexed_tokens[0]])\r\nit works\r\n\r\nWhat im doing wrong? \r\nThanks!",
"Could you create of minimal program that reproduces your problem (with the code you are using to generate `indexed_tokens`)?",
"1. Tensor Input array should have same length for all rows. My sentences had various length. That's why pytorch raise exception\r\n2. If you add zeros to the end of input arrays, to make all rows equal, evaluation will be slower than one per sentence. Batching not improving speed.",
"Hi @Alexadar, you have to batch your examples and pad them indeed. No other way I'm afraid.",
"Sorry, i missed your post request for example. \r\nYes, padding is only way to batch. It is slower than process sentencess one by one, i tested on GPU. "
] | 1,548 | 1,551 | 1,551 | NONE | null | Hi!
1) Please help me figure out what the optimal batch size would be, performance-wise, for evaluating the nextSentencePrediction model. Is it the same as the one used during pre-training (128)?
2) If I am building a high-performance evaluation backend on CUDA, would it be a good idea to use several threads with a BERT model in each, or is it better to use one thread with proper batching? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/234/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/233 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/233/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/233/comments | https://api.github.com/repos/huggingface/transformers/issues/233/events | https://github.com/huggingface/transformers/issues/233 | 403,810,079 | MDU6SXNzdWU0MDM4MTAwNzk= | 233 | What is get_lr() meaning in the optimizer.py | {
"login": "kugwzk",
"id": 15382517,
"node_id": "MDQ6VXNlcjE1MzgyNTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/15382517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kugwzk",
"html_url": "https://github.com/kugwzk",
"followers_url": "https://api.github.com/users/kugwzk/followers",
"following_url": "https://api.github.com/users/kugwzk/following{/other_user}",
"gists_url": "https://api.github.com/users/kugwzk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kugwzk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kugwzk/subscriptions",
"organizations_url": "https://api.github.com/users/kugwzk/orgs",
"repos_url": "https://api.github.com/users/kugwzk/repos",
"events_url": "https://api.github.com/users/kugwzk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kugwzk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can use it to get the current learning rate of the `BertAdam` optimizer (which vary according to the schedules discussed in #195)."
] | 1,548 | 1,549 | 1,549 | NONE | null | I use a model based on BertModel, and when I use BertAdam the learning rate isn't changed. When I call `get_lr()`, the result is `[0]`. I also see that the length of the state isn't 0, so why do I get that? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/233/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/231 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/231/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/231/comments | https://api.github.com/repos/huggingface/transformers/issues/231/events | https://github.com/huggingface/transformers/issues/231 | 403,574,123 | MDU6SXNzdWU0MDM1NzQxMjM= | 231 | Why is the output bias computed separately? | {
"login": "yamrzou",
"id": 40591511,
"node_id": "MDQ6VXNlcjQwNTkxNTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/40591511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yamrzou",
"html_url": "https://github.com/yamrzou",
"followers_url": "https://api.github.com/users/yamrzou/followers",
"following_url": "https://api.github.com/users/yamrzou/following{/other_user}",
"gists_url": "https://api.github.com/users/yamrzou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yamrzou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yamrzou/subscriptions",
"organizations_url": "https://api.github.com/users/yamrzou/orgs",
"repos_url": "https://api.github.com/users/yamrzou/repos",
"events_url": "https://api.github.com/users/yamrzou/events{/privacy}",
"received_events_url": "https://api.github.com/users/yamrzou/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The code section you linked follows the original TensorFlow code: https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/run_pretraining.py#L257",
"Exactly."
] | 1,548 | 1,548 | 1,548 | NONE | null | Hi!
Sorry if this is a dumb question, but I don't understand why the bias is [added separately](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L379) to the decoder weights instead of using `self.decoder = nn.Linear(num_features, num_tokens, bias=True)`. Isn't it equivalent? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/231/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/230 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/230/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/230/comments | https://api.github.com/repos/huggingface/transformers/issues/230/events | https://github.com/huggingface/transformers/issues/230 | 403,494,487 | MDU6SXNzdWU0MDM0OTQ0ODc= | 230 | Cleaning `~/.pytorch_pretrained_bert` | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This folder contains the pretrained model weights as they have been trained by google and the vocabulary files for the tokenizer.\r\n\r\nI would not remove it unless you are really tight on disk space, in this case I guess you could only keep the `.json` files with the vocabulary and load your finetuned model.",
"Yes it contains the weights, configuration and vocabulary files. You can remove it if you want. In that case the weights will be downloaded again the next time you initiate a BertModel."
] | 1,548 | 1,548 | 1,548 | CONTRIBUTOR | null | What is inside `~/.pytorch_pretrained_bert`? Is it just the downloaded pre-trained model weights? Is it safe to remove this directory? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/230/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/229 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/229/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/229/comments | https://api.github.com/repos/huggingface/transformers/issues/229/events | https://github.com/huggingface/transformers/issues/229 | 403,430,719 | MDU6SXNzdWU0MDM0MzA3MTk= | 229 | Is BERT suitable for seq2seq tasks, such as machine translation? | {
"login": "ootts",
"id": 24546823,
"node_id": "MDQ6VXNlcjI0NTQ2ODIz",
"avatar_url": "https://avatars.githubusercontent.com/u/24546823?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ootts",
"html_url": "https://github.com/ootts",
"followers_url": "https://api.github.com/users/ootts/followers",
"following_url": "https://api.github.com/users/ootts/following{/other_user}",
"gists_url": "https://api.github.com/users/ootts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ootts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ootts/subscriptions",
"organizations_url": "https://api.github.com/users/ootts/orgs",
"repos_url": "https://api.github.com/users/ootts/repos",
"events_url": "https://api.github.com/users/ootts/events{/privacy}",
"received_events_url": "https://api.github.com/users/ootts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It is, check the nice recent work of Guillaume Lample and Alexis Conneau: https://arxiv.org/abs/1901.07291"
] | 1,548 | 1,548 | 1,548 | NONE | null | If true, is there an example? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/229/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/228 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/228/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/228/comments | https://api.github.com/repos/huggingface/transformers/issues/228/events | https://github.com/huggingface/transformers/issues/228 | 403,423,004 | MDU6SXNzdWU0MDM0MjMwMDQ= | 228 | Freezing base transformer weights | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi!\r\n\r\nYou can modify the trainable attributes as described in #95.",
"Thanks!"
] | 1,548 | 1,548 | 1,548 | CONTRIBUTOR | null | As I understand, say if I'm doing a classification task, then the transformer weights, along with the top classification layer weights, are both trainable (i.e. `requires_grad=True`), correct? If so, is there a way to freeze the transformer weights, but only train the top layer? Is that a good idea in general when I have a small dataset? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/228/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/227 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/227/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/227/comments | https://api.github.com/repos/huggingface/transformers/issues/227/events | https://github.com/huggingface/transformers/issues/227 | 403,186,108 | MDU6SXNzdWU0MDMxODYxMDg= | 227 | RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index' | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You failed to move the tensors to GPU. \r\nReplace your code with this:\r\n```\r\ninput_ids_tensor = input_ids_tensor.to(self.device)\r\nsegment_ids_tensor = segment_ids_tensor.to(self.device)\r\ninput_mask_tensor = input_mask_tensor.to(self.device)\r\n```",
"Ah I didn't realize they don't work in-place (unlike the syntax for model files `model.to(device)`). ",
"thank you!",
"Great thanks!",
"thanksοΌ",
"use model.to(device) as well"
] | 1,548 | 1,594 | 1,548 | CONTRIBUTOR | null | Here is the complete error message:
```
Traceback (most recent call last):
File "app/set_expantion_eval.py", line 118, in <module>
map_n=flags.map_n)
File "app/set_expantion_eval.py", line 62, in Eval
expansionWithScores = BE.set_expansion_tensorized(seeds, ["1"])
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/app/bert_expansion.py", line 109, in set_expansion_tensorized
gold_repr_list.append(self.extract_representation(" ".join(seed), x, dim))
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/app/bert_expansion.py", line 317, in extract_representation
output_all_encoded_layers=output_all_encoded_layers)
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 626, in forward
embedding_output = self.embeddings(input_ids, token_type_ids)
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 193, in forward
words_embeddings = self.word_embeddings(input_ids)
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/mnt/castor/seas_home/d/danielkh/ideaProjects/bert-analogies/env3.6/lib/python3.6/site-packages/torch/nn/functional.py", line 1454, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of backend CUDA but got backend CPU for argument #3 'index'
```
Here is a summary of what I do in my code:
```python
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.model = BertModel.from_pretrained(bert_model) # loading the model
self.model.to(self.device) # without this there is no error, but it runs in CPU (instead of GPU).
self.model.eval() # declaring to the system that we're only doing 'forward' calculations
# creating the input tensors here
...
# move the tensors to the target device
input_ids_tensor.to(self.device)
segment_ids_tensor.to(self.device)
input_mask_tensor.to(self.device)
output_all_encoded_layers.to(self.device)
encoded_layers, _ = self.model(input_ids_tensor, segment_ids_tensor, input_mask_tensor, output_all_encoded_layers=output_all_encoded_layers)
```
When I don't have `model.to(device)` the code works fine, but I think it then only uses the CPU. When I add it, it fails with the above error.
I did a little investigation and printed the inputs to `.model(.)` to see if they are properly copied to `device`:
```
print("\n * input_ids_tensor \n ")
print(input_ids_tensor)
print(input_ids_tensor.device)
print("\n * segment_ids_tensor \n ")
print(segment_ids_tensor)
print(segment_ids_tensor.device)
print("\n * input_mask_tensor \n ")
print(input_mask_tensor)
print(input_mask_tensor.device)
print("\n * self.device \n ")
print(self.device)
```
which outputs:
```
* input_ids_tensor
tensor([[ 101, 5334, 2148, 1035, 3792, 3146, 102, 5334, 102, 0, 0],
[ 101, 5334, 2148, 1035, 3792, 3146, 102, 2148, 1035, 3792, 102],
[ 101, 5334, 2148, 1035, 3792, 3146, 102, 3146, 102, 0, 0]])
cpu
* segment_ids_tensor
tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0]])
cpu
* input_mask_tensor
tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]])
cpu
* self.device
cuda:0
```
As it can be seen, the tensors are still `cpu`, even after running `.to(device)`.
Any thoughts on where things are going wrong?
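For context, `Tensor.to` is out-of-place, unlike `nn.Module.to`, which modifies the module itself; a minimal illustration of the difference (it assumes a CUDA device is available):
```python
import torch

t = torch.zeros(3)   # created on CPU
t.to("cuda")         # returns a moved copy; t itself is unchanged
print(t.device)      # cpu
t = t.to("cuda")     # reassigning is required to actually move the tensor
print(t.device)      # cuda:0
```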
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/227/reactions",
"total_count": 27,
"+1": 25,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 2,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/227/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/226 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/226/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/226/comments | https://api.github.com/repos/huggingface/transformers/issues/226/events | https://github.com/huggingface/transformers/issues/226 | 403,125,784 | MDU6SXNzdWU0MDMxMjU3ODQ= | 226 | Logical error in the run_lm_finetuning? | {
"login": "snakers4",
"id": 12515440,
"node_id": "MDQ6VXNlcjEyNTE1NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/12515440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/snakers4",
"html_url": "https://github.com/snakers4",
"followers_url": "https://api.github.com/users/snakers4/followers",
"following_url": "https://api.github.com/users/snakers4/following{/other_user}",
"gists_url": "https://api.github.com/users/snakers4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/snakers4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/snakers4/subscriptions",
"organizations_url": "https://api.github.com/users/snakers4/orgs",
"repos_url": "https://api.github.com/users/snakers4/repos",
"events_url": "https://api.github.com/users/snakers4/events{/privacy}",
"received_events_url": "https://api.github.com/users/snakers4/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, @snakers4,\r\n\r\nI think this part is correct. \r\nThe input comes from `tokens` and `input_ids` in line 371, some of which are _already_ masked/altered, and the LM targets are `lm_label_ids`, which contain the original tokens. \r\nNote that `random_word`, called in line 331 and 332, masks the words in `tokens_a` and `tokens_b` _in-place_; `t1_random` and `tokens_a` refer the same object actually. \r\n\r\nIf you are trying to pre-train a model from scratch and having slow convergence issue, see discussions in #202. ",
"> The input comes from tokens and input_ids in line 371, some of which are already masked/altered, and the LM targets are lm_label_ids, which contain the original tokens.\r\n\r\nAh, you are right, I see it [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L354-L371), sorry. I totally missed the in-place part.\r\n\r\nThis [bit](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L301) explains why `lm_label_ids` are the original tokens.\r\n\r\nThis was a bit counter-intuitive.\r\nAnyway, thanks for the explanations, now everything is clear.\r\n"
] | 1,548 | 1,548 | 1,548 | NONE | null | Hi,
@thomwolf @nhatchan
@tholor @deepset-ai
Many thanks for the amazing work on this repository =)
I may be grossly wrong or may just have missed some line of the code somewhere, but it seems to me that there is a glaring issue in the overall logic of `examples/run_lm_finetuning.py` - I guess you never pre-trained the model till convergence from scratch, right?
_________________________________________
**Context**
I have already been able to fit the model to the Russian version of the SQuAD dataset from scratch (the so-called **SberSQuAD** from SDSJ 2017), and I was able to obtain **~40% EM w/o any pre-training**. Afaik, ~60% EM is about the top result on this dataset, achieved using BiDAF, so the model works, which is good =).
Anyway, this was a sanity check for me to see that the model is sound; obviously, to **achieve good results you need to pre-train first** (afaik the authors of the BERT paper did not even post any results w/o pre-training, right?).
So now I am planning to pre-train BERT for the Russian language with various pre-processing ideas:
- BPE (like in the original);
- Embedding bag (works well for "difficult" languages);
_________________________________________
**The Problem**
First of all let's quote the paper
```
In order to train a deep bidirectional representation, we take a straightforward approach of masking
some percentage of the input tokens at random, and then predicting only those masked tokens.
We refer to this procedure as a "masked LM" (MLM), although it is often referred to as a Cloze task in
the literature (Taylor, 1953). In this case, the final hidden vectors corresponding to the mask tokens are
fed into an output softmax over the vocabulary, as in a standard LM. In all of our experiments, we
mask 15% of all WordPiece tokens in each sequence at random. In contrast to denoising auto-encoders
(Vincent et al., 2008), we only predict the masked words rather than reconstructing the entire input.
```
So as far as I can see:
- We mask / alter some of the input (afaik the masking scheme [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L276) is correct) and make the model correct our "mistakes". It only makes sense - we break the input, and the model corrects it;
- But if you look [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L142), [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L331-L334) and [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py#L371) - it seems to me that in the code:
- Just padded / processed tokens are passed as input;
- The lm targets are the "messed up" tokens;
So, the training is kind of reversed.
The correct sequence is passed, but the incorrect sequence is the target.
Anyway - I may just have missed some line of code that changes everything.
I am just trying to understand the model properly, because I need to do a total rewrite of the pre-processing: in my domain, the usage of embedding bags proved to be more beneficial than BPE.
Many thanks!
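For reference, the resolution in the comments above hinges on `random_word` mutating its argument in place; a minimal illustration of that aliasing with toy values (the values are assumptions):
```python
tokens_a = ["my", "dog", "is", "cute"]
t1_random = tokens_a            # random_word returns the same list object, not a copy
t1_random[1] = "[MASK]"         # mutating t1_random therefore also mutates tokens_a
assert tokens_a == ["my", "[MASK]", "is", "cute"]
```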
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/226/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/226/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/225 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/225/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/225/comments | https://api.github.com/repos/huggingface/transformers/issues/225/events | https://github.com/huggingface/transformers/issues/225 | 402,524,232 | MDU6SXNzdWU0MDI1MjQyMzI= | 225 | max sentence length | {
"login": "RayXu14",
"id": 22774575,
"node_id": "MDQ6VXNlcjIyNzc0NTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/22774575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RayXu14",
"html_url": "https://github.com/RayXu14",
"followers_url": "https://api.github.com/users/RayXu14/followers",
"following_url": "https://api.github.com/users/RayXu14/following{/other_user}",
"gists_url": "https://api.github.com/users/RayXu14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RayXu14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RayXu14/subscriptions",
"organizations_url": "https://api.github.com/users/RayXu14/orgs",
"repos_url": "https://api.github.com/users/RayXu14/repos",
"events_url": "https://api.github.com/users/RayXu14/events{/privacy}",
"received_events_url": "https://api.github.com/users/RayXu14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, 512 tokens if you use the pre-trained models. Any length you want if you train your models from scratch.",
"could we set it smaller ? cause if i set it as 512, then result is out of memory",
"You can just send a smaller input in the model, no need to go to the max",
"thank you @thomwolf "
] | 1,548 | 1,550 | 1,548 | NONE | null | Is there a max sentence length for this BERT code? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/225/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/224 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/224/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/224/comments | https://api.github.com/repos/huggingface/transformers/issues/224/events | https://github.com/huggingface/transformers/issues/224 | 402,517,534 | MDU6SXNzdWU0MDI1MTc1MzQ= | 224 | how to add new vocabulary? | {
"login": "hahmyg",
"id": 3884429,
"node_id": "MDQ6VXNlcjM4ODQ0Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3884429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hahmyg",
"html_url": "https://github.com/hahmyg",
"followers_url": "https://api.github.com/users/hahmyg/followers",
"following_url": "https://api.github.com/users/hahmyg/following{/other_user}",
"gists_url": "https://api.github.com/users/hahmyg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hahmyg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hahmyg/subscriptions",
"organizations_url": "https://api.github.com/users/hahmyg/orgs",
"repos_url": "https://api.github.com/users/hahmyg/repos",
"events_url": "https://api.github.com/users/hahmyg/events{/privacy}",
"received_events_url": "https://api.github.com/users/hahmyg/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @hahmyg, please refer to the relevant section in the original implementation repository: https://github.com/google-research/bert#learning-a-new-wordpiece-vocabulary."
] | 1,548 | 1,548 | 1,548 | NONE | null | For a specific task, it is required to add new vocabulary to the tokenizer.
Re-training that vocabulary is OK for me :)
Is it possible to add new vocabulary to the tokenizer?
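For reference, the approach in the README linked in the comment above is to repurpose the `[unusedN]` placeholder slots in `vocab.txt`; a minimal sketch, where the file path and the new token are assumptions:
```python
# Repurpose an [unusedN] slot in a local copy of vocab.txt for a new token.
vocab_path = "vocab.txt"  # hypothetical local copy of the model's vocabulary
with open(vocab_path, encoding="utf-8") as f:
    vocab = f.read().splitlines()
vocab[vocab.index("[unused1]")] = "mynewtoken"  # hypothetical domain-specific token
with open(vocab_path, "w", encoding="utf-8") as f:
    f.write("\n".join(vocab) + "\n")
```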
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/224/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/223 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/223/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/223/comments | https://api.github.com/repos/huggingface/transformers/issues/223/events | https://github.com/huggingface/transformers/pull/223 | 402,513,421 | MDExOlB1bGxSZXF1ZXN0MjQ3MTgwNjQ5 | 223 | Feat/9 | {
"login": "davidkim205",
"id": 16680469,
"node_id": "MDQ6VXNlcjE2NjgwNDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/16680469?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidkim205",
"html_url": "https://github.com/davidkim205",
"followers_url": "https://api.github.com/users/davidkim205/followers",
"following_url": "https://api.github.com/users/davidkim205/following{/other_user}",
"gists_url": "https://api.github.com/users/davidkim205/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidkim205/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidkim205/subscriptions",
"organizations_url": "https://api.github.com/users/davidkim205/orgs",
"repos_url": "https://api.github.com/users/davidkim205/repos",
"events_url": "https://api.github.com/users/davidkim205/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidkim205/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,548 | 1,548 | 1,548 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/223/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/223",
"html_url": "https://github.com/huggingface/transformers/pull/223",
"diff_url": "https://github.com/huggingface/transformers/pull/223.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/223.patch",
"merged_at": null
} |
|
https://api.github.com/repos/huggingface/transformers/issues/222 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/222/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/222/comments | https://api.github.com/repos/huggingface/transformers/issues/222/events | https://github.com/huggingface/transformers/issues/222 | 402,509,287 | MDU6SXNzdWU0MDI1MDkyODc= | 222 | ConnectionError returned if Internet network is not stable | {
"login": "renjunxiang",
"id": 34116367,
"node_id": "MDQ6VXNlcjM0MTE2MzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/34116367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renjunxiang",
"html_url": "https://github.com/renjunxiang",
"followers_url": "https://api.github.com/users/renjunxiang/followers",
"following_url": "https://api.github.com/users/renjunxiang/following{/other_user}",
"gists_url": "https://api.github.com/users/renjunxiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renjunxiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renjunxiang/subscriptions",
"organizations_url": "https://api.github.com/users/renjunxiang/orgs",
"repos_url": "https://api.github.com/users/renjunxiang/repos",
"events_url": "https://api.github.com/users/renjunxiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/renjunxiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I'm guessing you're using some of the classes defined in modeling.py, such as one of the Bert \"pretrained models\" (e.g. any of the models that inherit from `PreTrainedBertModel`)? On construction, each of these classes takes a `config` argument, where `config` is a BertConfig object (also defined in modeling.py). The BertConfig can either be created from a model at one of the links in `PRETRAINED_MODEL_ARCHIVE_MAP` or from a config file stored in a local directory. You just have to set the `pretrained_model_name` to a local directory containing a bert_config.json file and a pytorch_model.bin file rather than one of 'bert-base-uncased', 'bert-large-uncased' etc. Setting `pretrained_model_name` to one of the latter options will try to pull from the Amazon AWS repositories. So if you're running the run_classification.py script, you would set the 'bert-model' flag to the directory with your downloaded bert model if you don't want it to pull from AWS. One thing is if you've downloaded one of the original Google Bert models, you'll need to convert tf checkpoints to pytorch bin files. There's a script for this in the repository. You shouldn't need to worry about this if you've downloaded one of the models at the links in `PRETRAINED_MODEL_ARCHIVE_MAP` (defined at the top of modeling.py)\r\n\r\nTLDR: Set `--bert-model` to the directory with your downloaded Bert model\r\n\r\nDoes that make any sense?",
"Yes! Set local directory in modeling.py and tokenization.py can solve my problem. Thank you so much!",
"Thanks @cmeister747 !"
] | 1,548 | 1,548 | 1,548 | NONE | null | Hi,
although I have downloaded the BERT pretrained model, a "ConnectionError" is returned if my Internet connection is not very stable.
The function `file_utils.cached_path` needs a stable Internet connection. Is there any way to avoid checking amazonaws before loading the BERT embeddings?
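For reference, a minimal sketch of the local-directory loading described in the comment above (the path and the directory contents are assumptions):
```python
from pytorch_pretrained_bert import BertModel, BertTokenizer

# assumed directory holding bert_config.json, pytorch_model.bin and vocab.txt
local_dir = "/path/to/bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(local_dir)
model = BertModel.from_pretrained(local_dir)
```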
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/222/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/221 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/221/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/221/comments | https://api.github.com/repos/huggingface/transformers/issues/221/events | https://github.com/huggingface/transformers/issues/221 | 402,169,653 | MDU6SXNzdWU0MDIxNjk2NTM= | 221 | Using BERT with custom QA dataset | {
"login": "gqoew",
"id": 32342701,
"node_id": "MDQ6VXNlcjMyMzQyNzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/32342701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gqoew",
"html_url": "https://github.com/gqoew",
"followers_url": "https://api.github.com/users/gqoew/followers",
"following_url": "https://api.github.com/users/gqoew/following{/other_user}",
"gists_url": "https://api.github.com/users/gqoew/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gqoew/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gqoew/subscriptions",
"organizations_url": "https://api.github.com/users/gqoew/orgs",
"repos_url": "https://api.github.com/users/gqoew/repos",
"events_url": "https://api.github.com/users/gqoew/events{/privacy}",
"received_events_url": "https://api.github.com/users/gqoew/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think that you should start by pretraining a BERT model on SQuAD to give it a sense on how to perform question answering and then try finetuning it to your task. This may already give you good results, if it doesn't you might have to dig a bit deeper in the model.\r\n\r\nI don't really know how adding your domain specific tokens to the vocabulary would interact with the tokenizer.",
"One nice recent example is \"A BERT Baseline for the Natural Questions\" by Chris Alberti, Kenton Lee and Michael Collins from Google Research: http://arxiv.org/abs/1901.08634",
"This might help other dev who want to use BERT for custom QA: https://github.com/cdqa-suite/cdQA"
] | 1,548 | 1,562 | 1,548 | NONE | null | Hi,
I want to use BERT to train a QA model on a custom SQuAD-like dataset. Ideally, I would like to leverage the learning from the SQuAD dataset, and add fine-tuning on my custom dataset, which has specific vocabulary.
What is the best way to do this? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/221/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/220 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/220/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/220/comments | https://api.github.com/repos/huggingface/transformers/issues/220/events | https://github.com/huggingface/transformers/issues/220 | 402,120,223 | MDU6SXNzdWU0MDIxMjAyMjM= | 220 | Questions Answering Example | {
"login": "schipiga",
"id": 1479651,
"node_id": "MDQ6VXNlcjE0Nzk2NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1479651?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/schipiga",
"html_url": "https://github.com/schipiga",
"followers_url": "https://api.github.com/users/schipiga/followers",
"following_url": "https://api.github.com/users/schipiga/following{/other_user}",
"gists_url": "https://api.github.com/users/schipiga/gists{/gist_id}",
"starred_url": "https://api.github.com/users/schipiga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/schipiga/subscriptions",
"organizations_url": "https://api.github.com/users/schipiga/orgs",
"repos_url": "https://api.github.com/users/schipiga/repos",
"events_url": "https://api.github.com/users/schipiga/events{/privacy}",
"received_events_url": "https://api.github.com/users/schipiga/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi!\r\n\r\nYou can check this file that implements question answering on the SQuAD dataset: https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_squad.py",
"Thank you very much!",
"How can we use pre trained BertForQuestionAnswering model? I have looked into BertForNextSentencePrediction and output of model makes sense given the input vector, but unable to find any good example on BertForQuestionAnswering.",
"Have you tried looking at the official [documentation](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertforquestionanswering) that provides a simple example for each model?",
"Hi @LysandreJik, the official example was not clear to me. I understood the part of encoding. But I am looking for something like, I will give a question and a paragraph which would contain the answer, and I need the model to predict the answer span. But in the example they have done it with a single sentence, which is quite confusing!",
"Hey @LysandreJik, Sorry my bad, didn't look at run_squad.py, it has been changed a lot since I saw it first during which BERT was only released! It is so good to see everything being integrated at a single place! Thanks for the great work you guys! β€οΈ ",
"@Arjunsankarlal Glad you could get what you were looking for!",
"@Arjunsankarlal @LysandreJik can you guys help me with the example. I got an error when I ran the example given in the [documentation](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertforquestionanswering) when encoding the sequence, that tokernizer doesn't have attribute \"encode\". So I updated the code as follows:\r\n\r\n`from pytorch_pretrained_bert import BertTokenizer, BertForQuestionAnswering\r\nimport torch\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nmodel = BertForQuestionAnswering.from_pretrained('bert-base-uncased')\r\n\r\ntokenized_text = tokenizer.tokenize(\"Hello, my dog is cute\")\r\n\r\n\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\n\r\ninput_ids = torch.tensor([indexed_tokens]) # Batch size 1\r\nstart_positions = torch.tensor([1])\r\nend_positions = torch.tensor([3])\r\noutputs = model(input_ids, start_positions=start_positions, end_positions=end_positions)\r\nprint(outputs)`\r\n\r\nthis is the output\r\ntensor(1.7739, grad_fn=<DivBackward0>)\r\n\r\nI believe it's a loss but I don't understand the example as in how does it answer the question. Also there isn't any start and end span. Can you please explain the example. Much appreciated.",
"Hi @adilmukhtar82 , could you give a look at the [`run_squad.py` example](https://github.com/huggingface/pytorch-transformers/blob/master/examples/run_squad.py), it shows how to use several models to do question answering.\r\n\r\nYou should probably update your repository version to `pytorch-transformers` too, most of the examples on our documentation won't work with `pytorch_pretrained_bert`.",
"@LysandreJik Thanks I have updated the repository and example is working fine. I am confused about the example mentioned in [documentation](https://huggingface.co/pytorch-transformers/model_doc/bert.html#bertforquestionanswering) (\"hello, my dog is cute\") as to how does it do with single sentence and not paragraph along with it. ",
"A bit late but here you go - \r\n```\r\nfrom transformers import DistilBertTokenizer, DistilBertForQuestionAnswering\r\nimport torch\r\n\r\ntokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased',return_token_type_ids = True)\r\nmodel = DistilBertForQuestionAnswering.from_pretrained('distilbert-base-uncased-distilled-squad')\r\n\r\ncontext = \"The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen this month.The US has over 637,000 confirmed Covid-19 cases and over 30,826 deaths, the highest for any country in the world.\"\r\nquestion = \"What was President Donald Trump's prediction?\"\r\nencoding = tokenizer.encode_plus(question, context)\r\n\r\ninput_ids, attention_mask = encoding[\"input_ids\"], encoding[\"attention_mask\"]\r\nstart_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))\r\n\r\nans_tokens = input_ids[torch.argmax(start_scores) : torch.argmax(end_scores)+1]\r\nanswer_tokens = tokenizer.convert_ids_to_tokens(ans_tokens , skip_special_tokens=True)\r\n\r\nall_tokens = tokenizer.convert_ids_to_tokens(input_ids)\r\n\r\nprint (\"\\nAnswer Tokens: \")\r\nprint (answer_tokens)\r\n\r\nanswer_tokens_to_string = tokenizer.convert_tokens_to_string(answer_tokens)\r\n\r\nprint (\"\\nFinal Answer : \")\r\nprint (answer_tokens_to_string)\r\n\r\n```\r\n\r\nOutput is : \r\nAnswer Tokens:\r\n['some', 'states', 'would', 're', '##open', 'this', 'month']\r\n\r\nFinal Answer :\r\nsome states would reopen this month\r\n",
"@ramsrigouthamg Hey, could you maybe also provide a tensorflow example?",
"Thanks @ramsrigouthamg !\r\n\r\n@mariusjohan there are PyTorch and TensorFlow examples in the [usage](https://huggingface.co/transformers/usage.html#extractive-question-answering) section of the documentation.",
"@LysandreJik The link is now updated to https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py",
"@ramsrigouthamg @LysandreJik \r\nExamples mentioned by you is giving following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"run_predict.py\", line 30, in <module>\r\n answer_start_scores\r\nTypeError: argmax(): argument 'input' (position 1) must be Tensor, not str\r\n\r\n```\r\nAfter doing investigation, I found that the following code is returning strings instead of integer indices:\r\n\r\n@ramsrigouthamg \r\nstart_scores, end_scores = model(torch.tensor([input_ids]), attention_mask=torch.tensor([attention_mask]))\r\n\r\n@LysandreJik \r\nanswer_start_scores, answer_end_scores = model(**inputs)\r\n\r\nValues returned:\r\nanswer_start_scores = 'answer_start_scores'\r\nanswer_end_scores = 'answer_end_scores '\r\nstart_scores = 'start_scores'\r\nend_scores = 'end_scores'\r\n\r\nI have fine tuned bert-en-base model on squad v1.1 and want to write prediction code. Can you please help?",
"@saurabhhssaurabh I found the solution.\r\nYou just need to change `answer_start_scores, answer_end_scores = model(**inputs)` to either \r\n`answer_start_scores, answer_end_scores = model(**inputs).values()` or `answer_start_scores, answer_end_scores = model(**inputs, return_dicts=True)`\r\n\r\nI got it from here: https://stackoverflow.com/questions/64901831/huggingface-transformer-model-returns-string-instead-of-logits",
"Thank you, @JacobLoe "
] | 1,548 | 1,618 | 1,548 | NONE | null | Hello folks! Can you provide a simple example of how to use PyTorch BERT with a pretrained model for question answering? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/220/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/220/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/219 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/219/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/219/comments | https://api.github.com/repos/huggingface/transformers/issues/219/events | https://github.com/huggingface/transformers/issues/219 | 402,103,567 | MDU6SXNzdWU0MDIxMDM1Njc= | 219 | How can I get the confidence score for the classification task | {
"login": "fenneccat",
"id": 22452009,
"node_id": "MDQ6VXNlcjIyNDUyMDA5",
"avatar_url": "https://avatars.githubusercontent.com/u/22452009?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fenneccat",
"html_url": "https://github.com/fenneccat",
"followers_url": "https://api.github.com/users/fenneccat/followers",
"following_url": "https://api.github.com/users/fenneccat/following{/other_user}",
"gists_url": "https://api.github.com/users/fenneccat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fenneccat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fenneccat/subscriptions",
"organizations_url": "https://api.github.com/users/fenneccat/orgs",
"repos_url": "https://api.github.com/users/fenneccat/repos",
"events_url": "https://api.github.com/users/fenneccat/events{/privacy}",
"received_events_url": "https://api.github.com/users/fenneccat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"You can use `torch.nn.functional.softmax` on the `logits` that the model outputs here:\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/examples/run_classifier.py#L589-L591\r\n\r\nIt will give you the confidence score for each class."
] | 1,548 | 1,548 | 1,548 | NONE | null | In the evaluation step, it seems it only shows the predicted label for each data instance. How can I get the confidence score for each class?
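For reference, per the comment above, applying a softmax over the classifier logits yields per-class confidences; a minimal sketch with toy values:
```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.2, -1.3], [0.1, 0.4]])  # toy per-example, per-class logits
probs = F.softmax(logits, dim=-1)                 # per-class confidence scores
confidence, predicted = probs.max(dim=-1)
print(probs, predicted, confidence)
```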
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/219/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/218 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/218/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/218/comments | https://api.github.com/repos/huggingface/transformers/issues/218/events | https://github.com/huggingface/transformers/pull/218 | 401,987,478 | MDExOlB1bGxSZXF1ZXN0MjQ2Nzc4NTEz | 218 | Fix learning rate problems in run_classifier.py | {
"login": "matej-svejda",
"id": 7644362,
"node_id": "MDQ6VXNlcjc2NDQzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7644362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matej-svejda",
"html_url": "https://github.com/matej-svejda",
"followers_url": "https://api.github.com/users/matej-svejda/followers",
"following_url": "https://api.github.com/users/matej-svejda/following{/other_user}",
"gists_url": "https://api.github.com/users/matej-svejda/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matej-svejda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matej-svejda/subscriptions",
"organizations_url": "https://api.github.com/users/matej-svejda/orgs",
"repos_url": "https://api.github.com/users/matej-svejda/repos",
"events_url": "https://api.github.com/users/matej-svejda/events{/privacy}",
"received_events_url": "https://api.github.com/users/matej-svejda/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @matej-svejda.\r\nSo this problem was actually introduced by adding NVIDIA's fp16 optimizer (`FusedAdam`) to the examples. This optimizer is a simple Adam which doesn't incorporate a learning rate schedule so we had to add a manual learning rate schedule in the examples.\r\nSo a better solution is to keep the `warmup_linear` function but to only modify the learning rates when the fp16 optimiser is used (i.e. updating the weights only if `args.fp16==True`).\r\nAlso it would be great to update the other examples similarly.\r\nDo you want to do that in your PR?\r\nI can also do that if you don't have the time. ",
"Sure, I can do that. Wanted to try out fp16 anyways :+1: ",
"@thomwolf Something like this?",
"Thanks @matej-svejda, I was a bit late on this PR. I've made a small commit to make the notation more explicit (removed `t_total` which was mainly a duplicate of `num_train_steps` and renamed `num_train_steps` in a more explicit `num_train_optimization_steps`).\r\nMerging this now"
] | 1,548 | 1,549 | 1,549 | CONTRIBUTOR | null | - Don't do warmup twice (in BertAdam and manually)
- Compute num_train_steps correctly for the case where gradient_accumulation_steps > 1. The current version might lead to the LR never leaving the warmup phase, depending on the value of gradient_accumulation_steps (see the sketch below).
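A minimal sketch of the corrected step computation (the example values are assumptions; in `run_classifier.py` they come from the parsed arguments):
```python
# step count the LR schedule should cover when gradient accumulation is used
num_examples, train_batch_size, gradient_accumulation_steps, num_train_epochs = 3668, 32, 2, 3
num_train_optimization_steps = int(
    num_examples / train_batch_size / gradient_accumulation_steps
) * num_train_epochs
print(num_train_optimization_steps)  # 171 optimizer steps, not 342
```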
With these changes I get > 84% accuracy on MRPC; without them it's around 77%. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/218/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/218",
"html_url": "https://github.com/huggingface/transformers/pull/218",
"diff_url": "https://github.com/huggingface/transformers/pull/218.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/218.patch",
"merged_at": 1549377644000
} |
https://api.github.com/repos/huggingface/transformers/issues/217 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/217/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/217/comments | https://api.github.com/repos/huggingface/transformers/issues/217/events | https://github.com/huggingface/transformers/issues/217 | 401,971,392 | MDU6SXNzdWU0MDE5NzEzOTI= | 217 | Loading fine_tuned BertModel fails due to prefix error | {
"login": "sebastianruder",
"id": 6792642,
"node_id": "MDQ6VXNlcjY3OTI2NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6792642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebastianruder",
"html_url": "https://github.com/sebastianruder",
"followers_url": "https://api.github.com/users/sebastianruder/followers",
"following_url": "https://api.github.com/users/sebastianruder/following{/other_user}",
"gists_url": "https://api.github.com/users/sebastianruder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebastianruder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebastianruder/subscriptions",
"organizations_url": "https://api.github.com/users/sebastianruder/orgs",
"repos_url": "https://api.github.com/users/sebastianruder/repos",
"events_url": "https://api.github.com/users/sebastianruder/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebastianruder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think that you have find the problem but I'm not sure if your fix is the most appropriate way to deal with it. As this problem will only happen when we are loading a `BertModel` pretrained instance, maybe \r\n\r\n```\r\nload(model, prefix='' if hasattr(model, 'bert') or cls == BertModel else 'bert.')\r\n```\r\nwould be more logical. Could you check if this change also fixes your problem?",
"The problem is that this only happens when we load a `BertModel` that was previously fine-tuned. If we load a pretrained `BertModel`, then the pretrained parameters don't have the `bert.` prefix, so we have to add it and it works. However, if we load the fine-tuned `BertModel`, then the parameters already have the `bert.` prefix, so we don't need to add it anymore. But this is not recognized at the moment.\r\nSo the above change causes the loading of a pretrained `BertModel` to fail.",
"I tried to reproduce your problem to better understand but I'm getting some pretty strange results. I don't get any error but the weights do not load properly. Am I missing something obvious or is it more or less what you are doing?\r\n\r\n```python\r\nimport torch\r\nfrom pytorch_pretrained_bert import BertModel\r\nfrom pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE\r\n\r\nbert_model = 'bert-base-uncased'\r\nsave_file = 'test_ruder/model.bin'\r\nmodel_base = BertModel.from_pretrained(\r\n bert_model,\r\n cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1)\r\n)\r\n\r\n# Saving\r\nmodel_to_save = model_base.module if hasattr(model_base, 'module') else model_base\r\ntorch.save(model_to_save.state_dict(), save_file)\r\n\r\n# Loading\r\nmodel_state_dict = torch.load(save_file)\r\nmodel_loaded = BertModel.from_pretrained(\r\n bert_model,\r\n state_dict = model_state_dict\r\n)\r\n\r\n# Tests\r\nparam_orig = list(model_base.parameters())\r\nparam_load = list(model_loaded.parameters())\r\nprint(len(param_orig) == len(param_load)) # True\r\nprint(all(x.shape == y.shape for x, y in zip(param_orig, param_load))) # True\r\nfor p_orig, p_load in zip(param_orig, param_load):\r\n print(torch.all(p_orig == p_load)) # prints tensor(0, dtype=torch.uint8) everytime\r\n```",
"Thanks for adding this working example. Yep, that's the issue I'm facing. I've slightly amended it to load the model from the config and weights file in the archive instead:\r\n```python\r\nimport torch\r\nfrom pytorch_pretrained_bert import BertModel, modeling\r\nfrom pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE\r\nfrom pathlib import Path\r\n\r\nsave_dir = Path('test_ruder')\r\nsave_dir.mkdir(exist_ok=True)\r\nbert_model = 'bert-base-uncased'\r\nsave_file = save_dir / modeling.WEIGHTS_NAME\r\nconfig_file = save_dir / modeling.CONFIG_NAME\r\nmodel_base = BertModel.from_pretrained(\r\n bert_model,\r\n cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(-1)\r\n)\r\n\r\n# Saving\r\nmodel_to_save = model_base.module if hasattr(model_base, 'module') else model_base\r\ntorch.save(model_to_save.state_dict(), save_file)\r\nwith open(config_file, 'w') as f:\r\n f.write(model_base.config.to_json_string())\r\n\r\n# Loading\r\nmodel_state_dict = torch.load(save_file)\r\nmodel_loaded = BertModel.from_pretrained(save_dir)\r\n\r\n# Tests\r\nparam_orig = list(model_base.parameters())\r\nparam_load = list(model_loaded.parameters())\r\nprint(len(param_orig) == len(param_load)) # True\r\nprint(all(x.shape == y.shape for x, y in zip(param_orig, param_load))) # True\r\nfor p_orig, p_load in zip(param_orig, param_load):\r\n print(torch.all(p_orig == p_load)) # prints tensor(0, dtype=torch.uint8) everytime\r\n```",
"I don't get the warnings that you are mentioning in your original post, the piece of code that I've created seems to fail for another reason. Could you please try to reproduce your original problem in a minimal piece of code?",
"That's the message printed by the logger [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L545). You need to enable logging first. You can also just print the same message instead.",
"Hi sebastian, indeed, the pretrained loading script is currently designed to load the weights from `BertForPreTraining ` models.\r\n\r\nI will fix that in the next release. We just have to slightly modify [the line you indicated](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L543) to check the keys of the state dictionary:\r\n```python\r\nload(model, prefix='bert.' if not hasattr(model, 'bert') and any(s.startwith('bert.') in state_dict.keys()) else '')\r\n```\r\n\r\nIn the meantime you can fix your problem by adding `bert.` to the keys of your state dictionary.\r\nIn your example, you can either change the saving operation:\r\n```python\r\n# Saving\r\nmodel_to_save = model_base.module if hasattr(model_base, 'module') else model_base\r\nto_save_dict = model_to_save.state_dict()\r\nto_save_with_prefix = {}\r\nfor key, value in to_save_dict.items():\r\n to_save_with_prefix['bert.' + key] = value\r\ntorch.save(to_save_with_prefix, save_file)\r\nwith open(config_file, 'w') as f:\r\n f.write(model_base.config.to_json_string())\r\n```\r\nor the loading operation:\r\n```python\r\n# Loading\r\nmodel_state_dict = torch.load(save_file)\r\nstate_dict_with_prefix = {}\r\nfor key, value in model_state_dict.items():\r\n state_dict_with_prefix['bert.' + key] = value\r\nmodel_loaded = BertModel.from_pretrained(save_dir, state_dict=state_dict_with_prefix)\r\n```",
"Actually Sebastian, since the model you save and the model you load are instances of the same `BertModel` class, you can also simply use the standard PyTorch serialization practice (we only have a special `from_pretrained` loading function to be able to load various type of models using the same pre-trained model stored on AWS).\r\n\r\nJust build a new `BertModel` using the configuration file you saved.\r\n\r\nHere is a snippet :\r\n```python\r\n# Saving (same as you did)\r\nmodel_to_save = model_base.module if hasattr(model_base, 'module') else model_base\r\ntorch.save(model_to_save.state_dict(), save_file)\r\nwith open(config_file, 'w') as f:\r\n f.write(model_base.config.to_json_string())\r\n\r\n# Loading (using standard PyTorch loading practice)\r\nconfig = BertConfig(config_file)\r\nmodel = BertModel(config)\r\nmodel.load_state_dict(torch.load(save_file))\r\n```",
"Thanks a lot for the comprehensive suggestions, @thomwolf. You're totally right that just loading it as normally in PyTorch is the most straightforward and simplest way. Your last suggestion works. Thanks! π ",
"Hi All,\r\n\r\niam facing following issue while loading pretrained BERT Sequence model with my own data\r\n\r\nRuntimeError: Error(s) in loading state_dict for DataParallel:\r\n\tMissing key(s) in state_dict: \"module.out.weight\", \"module.out.bias\". \r\n\tUnexpected key(s) in state_dict: \"bert.embeddings.word_embeddings.weight\", \"bert.embeddings.position_embeddings.weight\", \"bert.embeddings.token_type_embeddings.weight\", \"bert.embeddings.LayerNorm.weight\", \"bert.embeddings.LayerNorm.bias\", \"bert.encoder.layer.0.attention.self.query.weight\", \"bert.encoder.layer.0.attention.self.query.bias\", \"bert.encoder.layer.0.attention.self.key.weight\", \"bert.encoder.layer.0.attention.self.key.bias\", \"bert.encoder.layer.0.attention.self.value.weight\", \"bert.encoder.layer.0.attention.self.value.bias\", \"bert.encoder.layer.0.attention.output.dense.weight\", \"bert.encoder.layer.0.attention.output.dense.bias\", \"bert.encoder.layer.0.attention.output.LayerNorm.weight\", \"bert.encoder.layer.0.attention.output.LayerNorm.bias\", \"bert.encoder.layer.0.intermediate.dense.weight\", \"bert.encoder.layer.0.intermediate.dense.bias\", \"bert.encoder.layer.0.output.dense.weight\", \"bert.encoder.layer.0.output.dense.bias\", \"bert.encoder.layer.0.output.LayerNorm.weight\", \"bert.encoder.layer.0.output.LayerNorm.bias\", \"bert.encoder.layer.1.attention.self.query.weight\", \"bert.encoder.layer.1.attention.self.query.bias\", \"bert.encoder.layer.1.attention.self.key.weight\", \"bert.encoder.layer.1.attention.self.key.bias\", \"bert.encoder.layer.1.attention.self.value.weight\", \"bert.encoder.layer.1.attention.self.value.bias\", \"bert.encoder.layer.1.attention.output.dense.weight\", \"bert.encoder.layer.1.attention.output.dense.bias\", \"bert.encoder.layer.1.attention.output.LayerNorm....\r\n\r\n\r\nany idea about this error"
] | 1,548 | 1,592 | 1,548 | NONE | null | I am loading a pretrained BERT model with `BertModel.from_pretrained` as I feed the `pooled_output` representation directly to a loss without a head. After fine-tuning the model, I save it as in [`run_classifier.py`](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L553).
Afterwards, I want to load the fine-tuned model, again without a head, so I'm using `BertModel.from_pretrained` model again to initialize it, this time from the directory where the config and model files are stored.
When trying to load the pretrained model, none of the weights are found and I get:
```
Weights of BertModel not initialized from pretrained model: ['bert.embeddings.word_embeddings.weight'
, 'bert.embeddings.position_embeddings.weight', 'bert.embeddings.token_type_embeddings.weight', 'bert
.embeddings.LayerNorm.weight', 'bert.embeddings.LayerNorm.bias', 'bert.encoder.layer.0.attention.self
.query.weight', 'bert.encoder.layer.0.attention.self.query.bias', 'bert.encoder.layer.0.attention.self.key.weight', ...]
```
This seems to be due to [this line](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L543) in `modeling.py`. As `BertModel.from_pretrained` does not create a `bert` attribute (in contrast to the BertModels with a head), the `bert.` prefix is used erroneously instead of the `''` prefix, which causes the weights of the fine-tuned model not to be found.
If I change this line to check additionally if we load a fine-tuned model, then this works:
```
load(model, prefix='' if hasattr(model, 'bert') or pretrained_model_name not in PRETRAINED_MODEL_ARCHIVE_MAP else 'bert.')
```
Does this make sense? Let me know if I'm using `BertModel.from_pretrained` in the wrong way or if I should be using a different model for fine-tuning if I just care about the `pooled_output` representation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/217/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/216 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/216/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/216/comments | https://api.github.com/repos/huggingface/transformers/issues/216/events | https://github.com/huggingface/transformers/issues/216 | 401,890,579 | MDU6SXNzdWU0MDE4OTA1Nzk= | 216 | Training classifier does not work for more than two classes | {
"login": "satyakesav",
"id": 7447204,
"node_id": "MDQ6VXNlcjc0NDcyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7447204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/satyakesav",
"html_url": "https://github.com/satyakesav",
"followers_url": "https://api.github.com/users/satyakesav/followers",
"following_url": "https://api.github.com/users/satyakesav/following{/other_user}",
"gists_url": "https://api.github.com/users/satyakesav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/satyakesav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/satyakesav/subscriptions",
"organizations_url": "https://api.github.com/users/satyakesav/orgs",
"repos_url": "https://api.github.com/users/satyakesav/repos",
"events_url": "https://api.github.com/users/satyakesav/events{/privacy}",
"received_events_url": "https://api.github.com/users/satyakesav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What version of `pytorch-pretrained-BERT` are you using?\r\n\r\nIt seems to me that the change you are describing is already implemented.\r\n\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/examples/run_classifier.py#L560",
"Okay. It is my bad that I did not have the latest version while debugging the issue. Thanks for pointing though. I will close the issue."
] | 1,548 | 1,548 | 1,548 | NONE | null | I am trying to run a classifier on the AGN data which has four classes. I am using the following command to train and evaluate the classifier.
```
python examples/run_classifier.py \
  --task_name agn \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir $GLUE_DIR/AGN/ \
  --bert_model bert-base-uncased \
  --max_seq_length 128 \
  --train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 2.0 \
  --output_dir /tmp/agn_output/
```
I have created a task named agn similar to cola, mnli and others. The model is trained properly but during evaluation it throws the following error.
```
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
File "examples/run_classifier.py", line 690, in <module>
main()
File "examples/run_classifier.py", line 663, in main
logits = logits.detach().cpu().numpy()
RuntimeError: CUDA error: device-side assert triggered
```
The reason for this issue is:
The model is trained with an output size of 4 (since there are four classes), but during testing the model has an output size of 2, because the `BertForSequenceClassification` class defaults `num_labels` to 2.
So, if we change the following line in run_classifier.py
`model = BertForSequenceClassification.from_pretrained(args.bert_model, state_dict=model_state_dict)`
to
`model = BertForSequenceClassification.from_pretrained(args.bert_model, state_dict=model_state_dict, num_labels=num_labels)`, the issue will be resolved.
Please let me know if I can push the changes. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/216/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/215 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/215/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/215/comments | https://api.github.com/repos/huggingface/transformers/issues/215/events | https://github.com/huggingface/transformers/issues/215 | 401,444,984 | MDU6SXNzdWU0MDE0NDQ5ODQ= | 215 | Loading fine tuned BertForMaskedLM | {
"login": "tgriseau",
"id": 26754621,
"node_id": "MDQ6VXNlcjI2NzU0NjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/26754621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tgriseau",
"html_url": "https://github.com/tgriseau",
"followers_url": "https://api.github.com/users/tgriseau/followers",
"following_url": "https://api.github.com/users/tgriseau/following{/other_user}",
"gists_url": "https://api.github.com/users/tgriseau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tgriseau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tgriseau/subscriptions",
"organizations_url": "https://api.github.com/users/tgriseau/orgs",
"repos_url": "https://api.github.com/users/tgriseau/repos",
"events_url": "https://api.github.com/users/tgriseau/events{/privacy}",
"received_events_url": "https://api.github.com/users/tgriseau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Maybe update to a recent version of `pytorch-pretrained-bert`?",
"I'm already using the last release.\r\n\r\nI don't have any issues running it on gpu. The problem append when using map_location\r\n",
"yeah yes. if you trained model in GPU, can't be loaded. we will change map_location=\"CPU\" in modeling.py line 511. \r\nhttps://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L511\r\n\r\n`state_dict = torch.load(weights_path, map_location=\"CPU\" )`\r\n\r\nnow it will load our finetuned model. this is only for inference running on CPU. ",
"All good ! It works, thanks. \r\n\r\nMaybe adding a device parameter to the function from_pretrained could be usefull.\r\n\r\nThanks for your help.",
"Great, thanks @MuruganR96 ",
"@tgriseau - I want to fine-tune bert on MaskedLM using domain-specific text. could you please provide an example of how you fine-tuned or provide some details about what kind of inputs need to be passed? will I be using the true sentence as the output for fine-tuning? "
] | 1,548 | 1,600 | 1,548 | NONE | null | Hi,
I tried to fine-tune BertForMaskedLM and it works. But I'm facing issues when I try to load the fine-tuned model.
Here is the code I used to load the model:
```
model_state_dict = torch.load("./finetunedmodel/pytorch_model.bin", map_location='cpu')
model_fine = BertForMaskedLM.from_pretrained(pretrained_model_name='bert-base-multilingual-cased', state_dict=model_state_dict, cache_dir='./data')
```
The error I'm facing is: `__init__() got an unexpected keyword argument 'state_dict'`
Has someone already faced this issue?
Thanks
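For reference, with a recent `pytorch-pretrained-bert` release the CPU load above would look roughly like this (a sketch; it assumes `from_pretrained` accepts `state_dict`, and `map_location` must be the lowercase string "cpu"):
```python
import torch
from pytorch_pretrained_bert import BertForMaskedLM

model_state_dict = torch.load("./finetunedmodel/pytorch_model.bin", map_location="cpu")
model_fine = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased", state_dict=model_state_dict)
```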
Edit: I trained my model on GPU and tried to use it on CPU. When I use it on GPU it works! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/215/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/214 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/214/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/214/comments | https://api.github.com/repos/huggingface/transformers/issues/214/events | https://github.com/huggingface/transformers/issues/214 | 401,264,959 | MDU6SXNzdWU0MDEyNjQ5NTk= | 214 | SQuAD output layer and the computation loss | {
"login": "jianyucai",
"id": 28853070,
"node_id": "MDQ6VXNlcjI4ODUzMDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28853070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianyucai",
"html_url": "https://github.com/jianyucai",
"followers_url": "https://api.github.com/users/jianyucai/followers",
"following_url": "https://api.github.com/users/jianyucai/following{/other_user}",
"gists_url": "https://api.github.com/users/jianyucai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianyucai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianyucai/subscriptions",
"organizations_url": "https://api.github.com/users/jianyucai/orgs",
"repos_url": "https://api.github.com/users/jianyucai/repos",
"events_url": "https://api.github.com/users/jianyucai/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianyucai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes you can also do that."
] | 1,548 | 1,548 | 1,548 | NONE | null | Hi, I noticed that in the final linear layer of `BertForQuestionAnswering`, the loss is computed from `start_logits` and `end_logits`. That means question positions are also included in the loss. Maybe we should only consider context positions, e.g. by setting the question part of `start_logits` and `end_logits` to `-inf`? (A masking sketch follows below.)
https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L1089-L1113
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/214/timeline | completed | null | null |
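A minimal sketch of the masking suggested in #214 above; `p_mask` is a hypothetical tensor (1 for question/special tokens, 0 for context), and a large negative constant stands in for a literal `-inf`, since a fully masked row of true `-inf` values can make the softmax produce NaNs:

```
import torch

def mask_question_positions(logits, p_mask):
    # logits: (batch, seq_len) start or end scores from the linear layer.
    # p_mask: 1 where the token belongs to the question or special tokens.
    return logits.masked_fill(p_mask == 1, -10000.0)

start_logits = torch.randn(2, 8)
p_mask = torch.tensor([[1, 1, 1, 0, 0, 0, 0, 1],
                       [1, 1, 0, 0, 0, 0, 1, 1]])
masked_start_logits = mask_question_positions(start_logits, p_mask)  # context-only scores
```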
https://api.github.com/repos/huggingface/transformers/issues/213 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/213/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/213/comments | https://api.github.com/repos/huggingface/transformers/issues/213/events | https://github.com/huggingface/transformers/issues/213 | 401,219,022 | MDU6SXNzdWU0MDEyMTkwMjI= | 213 | will examples update the parameters of bert model? | {
"login": "susht3",
"id": 12723964,
"node_id": "MDQ6VXNlcjEyNzIzOTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/12723964?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susht3",
"html_url": "https://github.com/susht3",
"followers_url": "https://api.github.com/users/susht3/followers",
"following_url": "https://api.github.com/users/susht3/following{/other_user}",
"gists_url": "https://api.github.com/users/susht3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susht3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susht3/subscriptions",
"organizations_url": "https://api.github.com/users/susht3/orgs",
"repos_url": "https://api.github.com/users/susht3/repos",
"events_url": "https://api.github.com/users/susht3/events{/privacy}",
"received_events_url": "https://api.github.com/users/susht3/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Could you please show the part of the paper where you have seen mentioned, I haven't found it.\r\n\r\nAre you talking about this paragraph?\r\n\r\n>In this section we evaluate how well BERT performs in the feature-based approach by generating ELMo-like pre-trained contextual representations on the CoNLL-2003 NER task. To do this, we use the same input representation as in Section 4.3, but use the activations from one or more layers with- out fine-tuning any parameters of BERT. These contextual embeddings are used as input to a randomly initialized two-layer 768-dimensional BiLSTM before the classification layer.",
"Closing this since there no activity. Feel free to re-open if needed.",
"i think it is asking whether we are fine-tuning the whole bert model or use bert outputs as a fixed feature for representing the sentences (like ELMO)"
] | 1,548 | 1,550 | 1,549 | NONE | null | In the examples, a bert-base model is loaded to run some tasks. The paper says that it fixes the parameters of BERT and only updates the parameters of our tasks, but I find that the examples don't seem to fix BERT's parameters; they just load the model and add some layers to train. (A freezing sketch follows below.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/213/timeline | completed | null | null |
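For the feature-based reading of #213 above, a minimal sketch of freezing BERT so that only newly added task layers train; the linear head is a hypothetical stand-in, not a layer from the examples:

```
import torch.nn as nn
import torch.optim as optim
from pytorch_pretrained_bert import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")
for param in bert.parameters():
    param.requires_grad = False  # BERT becomes a fixed feature extractor

classifier = nn.Linear(768, 2)  # hypothetical task head; the only trainable part

# Give the optimizer only the parameters that should update.
optimizer = optim.Adam(classifier.parameters(), lr=1e-3)
```

By default the repo's examples fine-tune all of BERT's weights; the fine-tuning path simply omits a loop like this.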
https://api.github.com/repos/huggingface/transformers/issues/212 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/212/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/212/comments | https://api.github.com/repos/huggingface/transformers/issues/212/events | https://github.com/huggingface/transformers/issues/212 | 401,080,530 | MDU6SXNzdWU0MDEwODA1MzA= | 212 | Pytorch-Bert: Why this command: pip install pytorch-pretrained-bert doesn't work for me | {
"login": "abril4416",
"id": 31852492,
"node_id": "MDQ6VXNlcjMxODUyNDky",
"avatar_url": "https://avatars.githubusercontent.com/u/31852492?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abril4416",
"html_url": "https://github.com/abril4416",
"followers_url": "https://api.github.com/users/abril4416/followers",
"following_url": "https://api.github.com/users/abril4416/following{/other_user}",
"gists_url": "https://api.github.com/users/abril4416/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abril4416/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abril4416/subscriptions",
"organizations_url": "https://api.github.com/users/abril4416/orgs",
"repos_url": "https://api.github.com/users/abril4416/repos",
"events_url": "https://api.github.com/users/abril4416/events{/privacy}",
"received_events_url": "https://api.github.com/users/abril4416/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, most likely you'll have to switch to python 3.5 or newer!\r\nhttps://pypi.org/project/pytorch-pretrained-bert/ (check for requirements in the page: `Requires: Python >=3.5.0`) ",
"Indeed",
"Need some help. I'm experiencing the same problem with Python 3.7.3\r\n\r\nError code:\r\nERROR: Could not find a version that satisfies the requirement torch>=0.4.1 (from pytorch-pretrained-bert==0.4.0) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)\r\nERROR: No matching distribution found for torch>=0.4.1 (from pytorch-pretrained-bert==0.4.0)\r\n\r\n",
"Try installing pytorch first following the official instruction on the pytorch website: https://pytorch.org/"
] | 1,547 | 1,566 | 1,549 | NONE | null | I tried to install pytorch-bert using the command: `pip install pytorch-pretrained-bert`
However, this doesn't work for me. The feedback is below:
Could not find a version that satisfies the requirement pytorch-pretrained-bert (from versions: )
No matching distribution found for pytorch-pretrained-bert
I also tried updating pip, but in vain.
So how can I install BERT? (Could the root of the issue be that I'm using Python 2.7? A version-check sketch follows below.)
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/212/timeline | completed | null | null |
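As the thread on #212 above concludes, the package requires Python >= 3.5; a small sketch (not part of the package) that fails fast on an old interpreter before attempting the install:

```
import sys

# pytorch-pretrained-bert publishes distributions only for Python >= 3.5.
if sys.version_info < (3, 5):
    raise RuntimeError(
        "Python %d.%d detected; upgrade to >= 3.5 before running "
        "`pip install pytorch-pretrained-bert`." % sys.version_info[:2]
    )
print("Python version OK: " + sys.version.split()[0])
```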
https://api.github.com/repos/huggingface/transformers/issues/211 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/211/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/211/comments | https://api.github.com/repos/huggingface/transformers/issues/211/events | https://github.com/huggingface/transformers/issues/211 | 401,008,858 | MDU6SXNzdWU0MDEwMDg4NTg= | 211 | How convert pytorch to tf checkpoint? | {
"login": "semsevens",
"id": 13362968,
"node_id": "MDQ6VXNlcjEzMzYyOTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/13362968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/semsevens",
"html_url": "https://github.com/semsevens",
"followers_url": "https://api.github.com/users/semsevens/followers",
"following_url": "https://api.github.com/users/semsevens/following{/other_user}",
"gists_url": "https://api.github.com/users/semsevens/gists{/gist_id}",
"starred_url": "https://api.github.com/users/semsevens/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/semsevens/subscriptions",
"organizations_url": "https://api.github.com/users/semsevens/orgs",
"repos_url": "https://api.github.com/users/semsevens/repos",
"events_url": "https://api.github.com/users/semsevens/events{/privacy}",
"received_events_url": "https://api.github.com/users/semsevens/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1108649053,
"node_id": "MDU6TGFiZWwxMTA4NjQ5MDUz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Help%20wanted",
"name": "Help wanted",
"color": "008672",
"default": false,
"description": "Extra attention is needed, help appreciated"
},
{
"id": 1314768611,
"node_id": "MDU6TGFiZWwxMzE0NzY4NjEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"I don't think such a conversion is currently implemented in this repository, but I have my own implementation here (if you're interested in adapting it for your use-case): https://github.com/nikitakit/self-attentive-parser/blob/8238e79e2089300db059eddff78229a09e254f70/export/export_bert.py#L94-L141",
"Thanks @nikitakit, do you think your scripts would make sense in the present repo as well or is it tided to your parsing application?",
"Closing this for now. Feel free to re-open.",
"Would be actually quite nice to have such a conversion script in order to serve pytorch models via [bert-as-service](https://github.com/hanxiao/bert-as-service)",
"Any progress on this? As @tholor says, would be nice for bert-as-service. @nikitakit is it possible to run your script for any finetuned pytorch model? If so, any tips/suggestions on how to do that?",
"Re-opening the issue.\r\nI don't have time to work on that at the moment but I would be happy to welcome a PR if there is interest in this feature.",
"This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n",
"Any news on this ?"
] | 1,547 | 1,561 | 1,558 | NONE | null | How convert pytorch to tf checkpoint? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/211/timeline | completed | null | null |
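Since no conversion script ships with the repo (per #211 above), here is a rough sketch of the idea, assuming the TensorFlow 1.x API; the naive `name.replace(".", "/")` mapping below does NOT match google-research/bert's variable names, so treat this as a starting point and see the export script linked in the thread for a faithful mapping:

```
import tensorflow as tf  # TF 1.x API assumed
import torch

def pytorch_to_tf_checkpoint(pytorch_bin, ckpt_path):
    # Load the PyTorch weights onto CPU.
    state_dict = torch.load(pytorch_bin, map_location="cpu")
    tf.reset_default_graph()
    with tf.Session() as sess:
        for name, tensor in state_dict.items():
            array = tensor.numpy()
            # nn.Linear stores weights as (out, in); TF dense kernels are (in, out).
            if name.endswith(".weight") and array.ndim == 2 and "embedding" not in name:
                array = array.T
            tf.Variable(array, name=name.replace(".", "/"))  # naive name mapping
        sess.run(tf.global_variables_initializer())
        tf.train.Saver().save(sess, ckpt_path)

# pytorch_to_tf_checkpoint("pytorch_model.bin", "bert_model.ckpt")
```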
https://api.github.com/repos/huggingface/transformers/issues/210 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/210/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/210/comments | https://api.github.com/repos/huggingface/transformers/issues/210/events | https://github.com/huggingface/transformers/issues/210 | 401,006,360 | MDU6SXNzdWU0MDEwMDYzNjA= | 210 | error: the following arguments are required: --bert_model, --output_dir | {
"login": "CaesarLuvAI",
"id": 41228551,
"node_id": "MDQ6VXNlcjQxMjI4NTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/41228551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CaesarLuvAI",
"html_url": "https://github.com/CaesarLuvAI",
"followers_url": "https://api.github.com/users/CaesarLuvAI/followers",
"following_url": "https://api.github.com/users/CaesarLuvAI/following{/other_user}",
"gists_url": "https://api.github.com/users/CaesarLuvAI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CaesarLuvAI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaesarLuvAI/subscriptions",
"organizations_url": "https://api.github.com/users/CaesarLuvAI/orgs",
"repos_url": "https://api.github.com/users/CaesarLuvAI/repos",
"events_url": "https://api.github.com/users/CaesarLuvAI/events{/privacy}",
"received_events_url": "https://api.github.com/users/CaesarLuvAI/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Check this site : https://github.com/huggingface/pytorch-pretrained-BERT and find your 2 Parameters \"--bert_model\",\" --output_dir\" . You'll find the Below example π \r\n\r\nExample π π― \r\n --bert_model : We can use BERT Models like : bert-base-uncased, bert-base-cased, bert-large-uncased, bert-large-cased, etc. All BERT models Refer : [https://github.com/google-research/bert#pre-trained-models](url)\r\n\r\n--output_dir : Output Directory in Local : C:/0.GITHUB_Desktop/pytorch-pretrained-BERT [https://github.com/huggingface/pytorch-pretrained-BERT](url)\r\n\r\n--bert_model bert-base-uncased\r\n--output_dir /tmp/mrpc_output/\r\n\r\n-------------------------------------------------------------------------------------------------------------------\r\n\r\nexport GLUE_DIR=/path/to/glue\r\n\r\npython run_classifier.py \\\r\n --task_name MRPC \\\r\n --do_train \\\r\n --do_eval \\\r\n --do_lower_case \\\r\n --data_dir $GLUE_DIR/MRPC/ \\\r\n --bert_model bert-base-uncased \\\r\n --max_seq_length 128 \\\r\n --train_batch_size 32 \\\r\n --learning_rate 2e-5 \\\r\n --num_train_epochs 3.0 \\\r\n --output_dir /tmp/mrpc_output/",
"What do you do specifically with the above? I have downloaded the BERT model, placed it in the same directory and am still getting this error."
] | 1,547 | 1,564 | 1,547 | NONE | null | The above error arose when I ran run_squad.py in PyCharm (I just copied it and ran it locally). Can anybody tell me how to pass these two parameters, "--bert_model" and "--output_dir", in the IDE? (A small sketch follows below.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/210/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/210/timeline | completed | null | null |
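For the IDE question in #210 above, the usual answer is PyCharm's Run > Edit Configurations > Parameters field; alternatively, a sketch of prepending the flags to sys.argv before argparse parses them (both values are placeholders):

```
import sys

# Hypothetical lines placed near the top of run_squad.py when a parameters
# field is unavailable; argparse reads sys.argv at parse_args() time.
sys.argv += [
    "--bert_model", "bert-base-uncased",
    "--output_dir", "/tmp/squad_output/",
]
```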
https://api.github.com/repos/huggingface/transformers/issues/209 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/209/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/209/comments | https://api.github.com/repos/huggingface/transformers/issues/209/events | https://github.com/huggingface/transformers/issues/209 | 400,968,613 | MDU6SXNzdWU0MDA5Njg2MTM= | 209 | Missing softmax in BertForQuestionAnswering after linear layer? | {
"login": "jianyucai",
"id": 28853070,
"node_id": "MDQ6VXNlcjI4ODUzMDcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28853070?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jianyucai",
"html_url": "https://github.com/jianyucai",
"followers_url": "https://api.github.com/users/jianyucai/followers",
"following_url": "https://api.github.com/users/jianyucai/following{/other_user}",
"gists_url": "https://api.github.com/users/jianyucai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jianyucai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jianyucai/subscriptions",
"organizations_url": "https://api.github.com/users/jianyucai/orgs",
"repos_url": "https://api.github.com/users/jianyucai/repos",
"events_url": "https://api.github.com/users/jianyucai/events{/privacy}",
"received_events_url": "https://api.github.com/users/jianyucai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It depends on what you use as loss, as mentioned in the [documentation](https://pytorch.org/docs/stable/nn.html#crossentropyloss):\r\n\r\n>This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class. "
] | 1,547 | 1,547 | 1,547 | NONE | null | https://github.com/huggingface/pytorch-pretrained-BERT/blob/0a9d7c7edb20a3e82cfbb4b72515575543784823/pytorch_pretrained_bert/modeling.py#L1089-L1113
It seems there should be a softmax after the linear layer, or did I miss something? (See the equivalence sketch below.) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/209/timeline | completed | null | null |
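To make the answer in #209 above concrete, a small sketch showing that `nn.CrossEntropyLoss` already applies the log-softmax, so feeding it raw logits is correct (shapes are illustrative):

```
import torch
import torch.nn.functional as F

start_logits = torch.randn(3, 128)          # (batch, seq_len) raw scores
start_positions = torch.tensor([5, 0, 42])  # gold start indices

a = F.cross_entropy(start_logits, start_positions)
b = F.nll_loss(F.log_softmax(start_logits, dim=-1), start_positions)
assert torch.allclose(a, b)  # cross_entropy == log_softmax + nll_loss
```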